barbeque 0.7.0 → 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: e23cea0a69664e6f2351d1651daaf0b42355a71f
- data.tar.gz: 4c76a4e57cbb43baa302841f7eb565a1f3602fb6
+ metadata.gz: fdd657c4fecd71f71469707c7583541cc8d3dab7
+ data.tar.gz: f6911be9efb1970c036697bcc8e5a46edbdb605d
  SHA512:
- metadata.gz: 1021988ef2880d1bb8009d5c2fb397803a0e9a02577ff66cd961638f415dd877012340bb86564b6855efa1c65d44751b1bffc16aea1bce98697465f3c4a1b940
- data.tar.gz: 0e48ddda04e6ebb25e5cff2bcc7c21f440848b2a5588286eb9e89289d70d5b6863bdb00ebe9c7d6dc70424bba37bbc143795e7cc055fcc089518a57a3375f8c7
+ metadata.gz: 4d89b39fd628a67c7144613989d77c0b4fc3407c307e9409991263ad704b2e667d73981519ed59a543d24ba21c94e390830965c9e6eb3f823127acbf3d3996c2
+ data.tar.gz: 5697f62a1d5053fd2ab82ac60cabc4a6ad70eea87317fd6d800e251ab242e7a55edfc86299eec598c3b6d4c0111829f3f7387e21f7ddc726ce96d7e22d3e12d7
data/README.md CHANGED
@@ -25,7 +25,7 @@ In Barbeque worker, they are done on Docker container.
  ## Why Barbeque?

  - You can achieve job-level auto scaling using tools like [Amazon ECS](https://aws.amazon.com/ecs/) [EC2 Auto Scaling group](https://aws.amazon.com/autoscaling/)
- - For Amazon ECS, Barbeque has Hako runner
+ - For Amazon ECS, Barbeque has Hako executor
  - You don't have to manage infrastructure for each application like Resque or Sidekiq

  For details, see [Scalable Job Queue System Built with Docker // Speaker Deck](https://speakerdeck.com/k0kubun/scalable-job-queue-system-built-with-docker).
@@ -45,6 +45,15 @@ You also need to prepare MySQL, Amazon SQS and Amazon S3.
  $ rake barbeque:worker BARBEQUE_QUEUE=default
  ```

+ The rake task launches four worker processes.
+
+ - Two runners
+   - receives message from SQS queue, starts job execution and stores its identifier to the database
+ - One execution poller
+   - gets execution status and reflect it to the database
+ - One retry poller
+   - gets retried execution status and reflect it to the database
+
  ## Usage

  Web API documentation is available at [doc/toc.md](./doc/toc.md).
@@ -53,5 +62,21 @@ Web API documentation is available at [doc/toc.md](./doc/toc.md).

  [barbeque\_client.gem](https://github.com/cookpad/barbeque_client) has API client and ActiveJob integration.

+ ## Executor
+ Barbeque executor can be customized in config/barbeque.yml. Executor is responsible for starting executions and getting status of executions.
+
+ Barbeque has currently two executors.
+
+ ### Docker (default)
+ Barbeque::Executor::Docker starts execution by `docker run --detach` and gets status by `docker inspect`.
+
+ ### Hako
+ Barbeque::Executor::Hako starts execution by `hako oneshot --no-wait` and gets status from S3 task notification.
+
+ #### Requirement
+ You must configure CloudWatch Events for putting S3 task notification.
+ See Hako's documentation for detail.
+ https://github.com/eagletmt/hako/blob/master/docs/ecs-task-notification.md
+
  ## License
  The gem is available as open source under the terms of the [MIT License](http://opensource.org/licenses/MIT).
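The new README section says the executor is chosen in config/barbeque.yml. As a hypothetical sketch of that file, with keys taken from the `DEFAULT_CONFIG` and `Barbeque::Executor::Hako#initialize` changes elsewhere in this diff (all values below are made up):

```yaml
# config/barbeque.yml -- illustrative values only
exception_handler: RailsLogger
executor: Hako
executor_options:
  hako_dir: /path/to/hako-repo
  yaml_dir: /path/to/hako-repo/yamls
  oneshot_notification_prefix: s3://my-bucket/hako-tasks?region=ap-northeast-1
sqs_receive_message_wait_time: 10
maximum_concurrent_executions: ~   # nil means unlimited
runner_wait_seconds: 10
```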
@@ -1,3 +1,5 @@
+ require 'barbeque/config'
+
  class Barbeque::JobQueuesController < Barbeque::ApplicationController
    def index
      @job_queues = Barbeque::JobQueue.all
@@ -49,10 +51,7 @@ class Barbeque::JobQueuesController < Barbeque::ApplicationController
      Aws::SQS::Client.new.create_queue(
        queue_name: job_queue.sqs_queue_name,
        attributes: {
-         # All SQS queues' "ReceiveMessageWaitTimeSeconds" are configured to be 20s (maximum).
-         # This should be as large as possible to reduce API-calling cost by long polling.
-         # http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_CreateQueue.html#API_CreateQueue_RequestParameters
-         'ReceiveMessageWaitTimeSeconds' => Barbeque::JobQueue::SQS_RECEIVE_MESSAGE_WAIT_TIME.to_s,
+         'ReceiveMessageWaitTimeSeconds' => Barbeque.config.sqs_receive_message_wait_time.to_s,
        },
      )
    end
@@ -0,0 +1,2 @@
+ class Barbeque::DockerContainer < Barbeque::ApplicationRecord
+ end
@@ -0,0 +1,2 @@
+ class Barbeque::EcsHakoTask < Barbeque::ApplicationRecord
+ end
@@ -4,11 +4,6 @@ class Barbeque::JobQueue < Barbeque::ApplicationRecord

  has_many :sns_subscriptions, class_name: 'SNSSubscription', dependent: :destroy

- # All SQS queues' "ReceiveMessageWaitTimeSeconds" are configured to be 20s (maximum).
- # This should be as large as possible to reduce API-calling cost by long polling.
- # http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_CreateQueue.html#API_CreateQueue_RequestParameters
- SQS_RECEIVE_MESSAGE_WAIT_TIME = 20
-
  # SQS queue allows [a-zA-Z0-9_-]+ as queue name. Its maximum length is 80.
  validates :name, presence: true, uniqueness: true, format: /\A[a-zA-Z0-9_-]+\z/,
    length: { maximum: SQS_NAME_MAX_LENGTH - SQS_NAME_PREFIX.length }
@@ -10,7 +10,7 @@ class Barbeque::MessageEnqueuingService
  # @param [String] application
  # @param [String] job
  # @param [Object] message
- # @param optional [String] queue
+ # @param [String] queue
  def initialize(application:, job:, message:, queue: nil)
    @application = application
    @job = job
@@ -67,7 +67,7 @@ class Barbeque::SNSSubscriptionService
    )
  end

- # @paaram [String] queue_arn
+ # @param [String] queue_arn
  # @param [Array<String>] topic_arns
  # @return [String] JSON formatted policy
  def generate_policy(queue_arn:, topic_arns:)
@@ -0,0 +1,12 @@
+ class CreateBarbequeDockerContainers < ActiveRecord::Migration[5.0]
+   def change
+     create_table :barbeque_docker_containers, options: 'ENGINE=InnoDB ROW_FORMAT=dynamic DEFAULT CHARSET=utf8mb4' do |t|
+       t.string :message_id, null: false
+       t.string :container_id, null: false
+
+       t.timestamps
+
+       t.index ['message_id'], unique: true
+     end
+   end
+ end
@@ -0,0 +1,13 @@
+ class CreateBarbequeEcsHakoTasks < ActiveRecord::Migration[5.0]
+   def change
+     create_table :barbeque_ecs_hako_tasks, options: 'ENGINE=InnoDB ROW_FORMAT=dynamic DEFAULT CHARSET=utf8mb4' do |t|
+       t.string :message_id, null: false
+       t.string :cluster, null: false
+       t.string :task_arn, null: false
+
+       t.timestamps
+
+       t.index ['message_id'], unique: true
+     end
+   end
+ end
@@ -0,0 +1,6 @@
+ class AddIndexToJobExecutionStatus < ActiveRecord::Migration[5.0]
+   def change
+     add_index :barbeque_job_executions, :status
+     add_index :barbeque_job_retries, :status
+   end
+ end
@@ -3,7 +3,7 @@ require 'yaml'

  module Barbeque
    class Config
-     attr_accessor :exception_handler, :runner, :runner_options
+     attr_accessor :exception_handler, :executor, :executor_options, :sqs_receive_message_wait_time, :maximum_concurrent_executions, :runner_wait_seconds

      def initialize(options = {})
        options.each do |key, value|
@@ -13,15 +13,20 @@ module Barbeque
            raise KeyError.new("Unexpected option '#{key}' was specified.")
          end
        end
-       runner_options.symbolize_keys!
+       executor_options.symbolize_keys!
      end
    end

    module ConfigBuilder
      DEFAULT_CONFIG = {
        'exception_handler' => 'RailsLogger',
-       'runner' => 'Docker',
-       'runner_options' => {},
+       'executor' => 'Docker',
+       'executor_options' => {},
+       # http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_CreateQueue.html#API_CreateQueue_RequestParameters
+       'sqs_receive_message_wait_time' => 10,
+       # nil means unlimited
+       'maximum_concurrent_executions' => nil,
+       'runner_wait_seconds' => 10,
      }

      def config
@@ -43,16 +43,7 @@ module Barbeque
          'stderr' => stderr,
        }
      else
-       # Try to load legacy format
-       begin
-         s3_object = ExecutionLog.s3_client.get_object(
-           bucket: s3_bucket_name,
-           key: legacy_s3_key_for(execution),
-         )
-         JSON.parse(s3_object.body.read)
-       rescue Aws::S3::Errors::NoSuchKey
-         nil
-       end
+       nil
      end
    end

@@ -68,11 +59,6 @@ module Barbeque
      "#{execution.app.name}/#{execution.job_definition.job}/#{execution.message_id}/#(unknown)"
    end

-   # @param [Barbeque::JobExecution,Barbeque::JobRetry] execution
-   def legacy_s3_key_for(execution)
-     "#{execution.app.name}/#{execution.job_definition.job}/#{execution.message_id}"
-   end
-
    # @param [Barbeque::JobExecution,Barbeque::JobRetry] execution
    # @param [String] filename
    # @return [String]
@@ -0,0 +1,34 @@
+ module Barbeque
+   class ExecutionPoller
+     def initialize
+       @stop_requested = false
+     end
+
+     def run
+       Barbeque::JobExecution.running.find_in_batches do |job_executions|
+         job_executions.shuffle.each do |job_execution|
+           if @stop_requested
+             return
+           end
+           job_execution.with_lock do
+             if job_execution.running?
+               poll(job_execution)
+             end
+           end
+         end
+       end
+       sleep 1
+     end
+
+     def stop
+       @stop_requested = true
+     end
+
+     private
+
+     def poll(job_execution)
+       executor = Executor.create
+       executor.poll_execution(job_execution)
+     end
+   end
+ end
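The new `Barbeque::ExecutionPoller` re-checks `job_execution.running?` after taking `with_lock`, since another poller may finish the row between the batch query and the lock. The same check-lock-recheck pattern can be sketched with a plain in-memory record and a `Mutex` instead of a database row lock (all names below are illustrative, not Barbeque's API):

```ruby
# Illustrative stand-in for a job_execution row: a status plus a lock.
Record = Struct.new(:status, :lock)

# Returns true only if the record was still :running once the lock was held.
def poll_if_still_running(record)
  record.lock.synchronize do
    if record.status == :running      # re-check under the lock
      record.status = :success        # stand-in for executor.poll_execution
      true
    else
      false                           # another poller already finished it
    end
  end
end

record = Record.new(:running, Mutex.new)
poll_if_still_running(record)
```

Without the re-check, two pollers could both observe the stale `:running` status from the batch query and poll the same execution twice.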
@@ -0,0 +1,29 @@
+ require 'barbeque/config'
+ require 'barbeque/executor/docker'
+ require 'barbeque/executor/hako'
+
+ module Barbeque
+   # Executor is responsible for starting executions and getting status of
+   # executions.
+   # Each executor must implement these methods.
+   # - #initialize(options)
+   #   - Create a executor with executor_options specified in config/barbeque.yml.
+   # - #start_execution(job_execution, envs)
+   #   - Start execution with environment variables. An executor must update the
+   #     execution status from pending.
+   # - #poll_execution(job_execution)
+   #   - Get the execution status and update the job_execution columns.
+   # - #start_retry(job_retry, envs)
+   #   - Start retry with environment variables. An executor must update the
+   #     retry status from pending and the corresponding execution status.
+   # - #poll_retry(job_retry)
+   #   - Get the execution status and update the job_retry and job_execution
+   #     columns.
+
+   module Executor
+     def self.create
+       klass = const_get(Barbeque.config.executor, false)
+       klass.new(Barbeque.config.executor_options)
+     end
+   end
+ end
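The comment block added here documents the executor interface. A minimal executor satisfying it might look like the following sketch (`NullExecutor` and the `OpenStruct` stand-ins are hypothetical; real executors update ActiveRecord rows, and a custom executor would need to live under the `Barbeque::Executor` namespace for `Executor.create`'s `const_get` to find it):

```ruby
require 'ostruct'

# Hypothetical no-op executor implementing the documented interface.
class NullExecutor
  def initialize(options)
    @options = options  # executor_options from config/barbeque.yml
  end

  # Must move the execution out of :pending once started.
  def start_execution(job_execution, envs)
    job_execution.status = :running
  end

  # Must reflect the final status onto the record.
  def poll_execution(job_execution)
    job_execution.status = :success
  end

  def start_retry(job_retry, envs)
    job_retry.status = :running
  end

  def poll_retry(job_retry)
    job_retry.status = :success
  end
end

job = OpenStruct.new(status: :pending)
NullExecutor.new({}).start_execution(job, {})
```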
@@ -0,0 +1,131 @@
+ require 'barbeque/docker_image'
+ require 'barbeque/slack_notifier'
+ require 'open3'
+
+ module Barbeque
+   module Executor
+     class Docker
+       class DockerCommandError < StandardError
+       end
+
+       def initialize(_options)
+       end
+
+       # @param [Barbeque::JobExecution] job_execution
+       # @param [Hash] envs
+       def start_execution(job_execution, envs)
+         docker_image = DockerImage.new(job_execution.job_definition.app.docker_image)
+         cmd = build_docker_run_command(docker_image, job_execution.job_definition.command, envs)
+         stdout, stderr, status = Open3.capture3(*cmd)
+         if status.success?
+           job_execution.update!(status: :running)
+           Barbeque::DockerContainer.create!(message_id: job_execution.message_id, container_id: stdout.chomp)
+         else
+           job_execution.update!(status: :failed, finished_at: Time.zone.now)
+           Barbeque::ExecutionLog.save_stdout_and_stderr(job_execution, stdout, stderr)
+           Barbeque::SlackNotifier.notify_job_execution(job_execution)
+         end
+       end
+
+       # @param [Barbeque::JobRetry] job_retry
+       # @param [Hash] envs
+       def start_retry(job_retry, envs)
+         job_execution = job_retry.job_execution
+         docker_image = DockerImage.new(job_execution.job_definition.app.docker_image)
+         cmd = build_docker_run_command(docker_image, job_execution.job_definition.command, envs)
+         stdout, stderr, status = Open3.capture3(*cmd)
+         if status.success?
+           Barbeque::DockerContainer.create!(message_id: job_retry.message_id, container_id: stdout.chomp)
+           Barbeque::ApplicationRecord.transaction do
+             job_execution.update!(status: :retried)
+             job_retry.update!(status: :running)
+           end
+         else
+           Barbeque::ExecutionLog.save_stdout_and_stderr(job_retry, stdout, stderr)
+           Barbeque::ApplicationRecord.transaction do
+             job_retry.update!(status: :failed, finished_at: Time.zone.now)
+             job_execution.update!(status: :failed)
+           end
+           Barbeque::SlackNotifier.notify_job_retry(job_retry)
+         end
+       end
+
+       # @param [Barbeque::JobExecution] job_execution
+       def poll_execution(job_execution)
+         container = Barbeque::DockerContainer.find_by!(message_id: job_execution.message_id)
+         info = inspect_container(container.container_id)
+         if info['State'] && info['State']['Status'] != 'running'
+           finished_at = Time.zone.parse(info['State']['FinishedAt'])
+           exit_code = info['State']['ExitCode']
+           job_execution.update!(status: exit_code == 0 ? :success : :failed, finished_at: finished_at)
+
+           stdout, stderr = get_logs(container.container_id)
+           Barbeque::ExecutionLog.save_stdout_and_stderr(job_execution, stdout, stderr)
+           Barbeque::SlackNotifier.notify_job_execution(job_execution)
+         end
+       end
+
+       # @param [Barbeque::JobRetry] job_retry
+       def poll_retry(job_retry)
+         container = Barbeque::DockerContainer.find_by!(message_id: job_retry.message_id)
+         job_execution = job_retry.job_execution
+         info = inspect_container(container.container_id)
+         if info['State'] && info['State']['Status'] != 'running'
+           finished_at = Time.zone.parse(info['State']['FinishedAt'])
+           exit_code = info['State']['ExitCode']
+           status = exit_code == 0 ? :success : :failed
+           Barbeque::ApplicationRecord.transaction do
+             job_retry.update!(status: status, finished_at: finished_at)
+             job_execution.update!(status: status)
+           end
+
+           stdout, stderr = get_logs(container.container_id)
+           Barbeque::ExecutionLog.save_stdout_and_stderr(job_retry, stdout, stderr)
+           Barbeque::SlackNotifier.notify_job_retry(job_retry)
+         end
+       end
+
+       private
+
+       # @param [Barbeque::DockerImage] docker_image
+       # @param [Array<String>] command
+       # @param [Hash] envs
+       def build_docker_run_command(docker_image, command, envs)
+         ['docker', 'run', '--detach', *env_options(envs), docker_image.to_s, *command]
+       end
+
+       def env_options(envs)
+         envs.flat_map do |key, value|
+           ['--env', "#{key}=#{value}"]
+         end
+       end
+
+       # @param [String] container_id
+       # @return [Hash] container info
+       def inspect_container(container_id)
+         stdout, stderr, status = Open3.capture3('docker', 'inspect', container_id)
+         if status.success?
+           begin
+             JSON.parse(stdout)[0]
+           rescue JSON::ParserError => e
+             raise DockerCommandError.new("Unable to parse JSON: #{e.class}: #{e.message}: #{stdout}")
+           end
+         else
+           raise DockerCommandError.new("Unable to inspect Docker container #{container.container_id}: STDOUT: #{stdout}; STDERR: #{stderr}")
+         end
+       end
+
+       # @param [String] container_id
+       # @return [String] stdout
+       # @return [String] stderr
+       def get_logs(container_id)
+         stdout, stderr, status = Open3.capture3('docker', 'logs', container_id)
+         if status.success?
+           [stdout, stderr]
+         else
+           raise DockerCommandError.new("Unable to get Docker container logs #{container.container_id}: STDOUT: #{stdout}; STDERR: #{stderr}")
+         end
+       end
+     end
+   end
+ end
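The private helpers at the bottom of the Docker executor assemble the `docker run` invocation as an argv array (passed straight to `Open3.capture3`, so no shell quoting is involved). They can be mirrored standalone; the image name and command below are made up:

```ruby
# Mirrors Barbeque::Executor::Docker#env_options / #build_docker_run_command,
# extracted as plain methods for illustration.
def env_options(envs)
  envs.flat_map { |key, value| ['--env', "#{key}=#{value}"] }
end

def build_docker_run_command(image, command, envs)
  ['docker', 'run', '--detach', *env_options(envs), image, *command]
end

cmd = build_docker_run_command('myapp:latest', ['rake', 'heavy_job'], 'RAILS_ENV' => 'production')
cmd.join(' ')
# => "docker run --detach --env RAILS_ENV=production myapp:latest rake heavy_job"
```

Because `docker run --detach` prints the container id, the executor only has to `chomp` stdout to get the id it stores in `barbeque_docker_containers`.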
@@ -0,0 +1,159 @@
+ require 'barbeque/docker_image'
+ require 'barbeque/slack_notifier'
+ require 'open3'
+ require 'uri'
+
+ module Barbeque
+   module Executor
+     class Hako
+       class HakoCommandError < StandardError
+       end
+
+       # @param [String] hako_dir
+       # @param [Hash] hako_env
+       # @param [String] yaml_dir
+       def initialize(hako_dir:, hako_env: {}, yaml_dir:, oneshot_notification_prefix:)
+         @hako_dir = hako_dir
+         @hako_env = hako_env
+         @yaml_dir = yaml_dir
+         uri = URI.parse(oneshot_notification_prefix)
+         @s3_bucket = uri.host
+         @s3_prefix = uri.path.sub(%r{\A/}, '')
+         @s3_region = URI.decode_www_form(uri.query || '').to_h['region']
+       end
+
+       # @param [Barbeque::JobExecution] job_execution
+       # @param [Hash] envs
+       def start_execution(job_execution, envs)
+         docker_image = DockerImage.new(job_execution.job_definition.app.docker_image)
+         cmd = build_hako_oneshot_command(docker_image, job_execution.job_definition.command, envs)
+         stdout, stderr, status = Bundler.with_clean_env { Open3.capture3(@hako_env, *cmd, chdir: @hako_dir) }
+         if status.success?
+           job_execution.update!(status: :running)
+           cluster, task_arn = extract_task_info(stdout)
+           Barbeque::EcsHakoTask.create!(message_id: job_execution.message_id, cluster: cluster, task_arn: task_arn)
+           Barbeque::ExecutionLog.save_stdout_and_stderr(job_execution, stdout, stderr)
+         else
+           job_execution.update!(status: :failed, finished_at: Time.zone.now)
+           Barbeque::ExecutionLog.save_stdout_and_stderr(job_execution, stdout, stderr)
+           Barbeque::SlackNotifier.notify_job_execution(job_execution)
+         end
+       end
+
+       # @param [Barbeque::JobRetry] job_retry
+       # @param [Hash] envs
+       def start_retry(job_retry, envs)
+         job_execution = job_retry.job_execution
+         docker_image = DockerImage.new(job_execution.job_definition.app.docker_image)
+         cmd = build_hako_oneshot_command(docker_image, job_execution.job_definition.command, envs)
+         stdout, stderr, status = Bundler.with_clean_env { Open3.capture3(@hako_env, *cmd, chdir: @hako_dir) }
+         if status.success?
+           cluster, task_arn = extract_task_info(stdout)
+           Barbeque::EcsHakoTask.create!(message_id: job_retry.message_id, cluster: cluster, task_arn: task_arn)
+           Barbeque::ExecutionLog.save_stdout_and_stderr(job_retry, stdout, stderr)
+           Barbeque::ApplicationRecord.transaction do
+             job_execution.update!(status: :retried)
+             job_retry.update!(status: :running)
+           end
+         else
+           Barbeque::ExecutionLog.save_stdout_and_stderr(job_retry, stdout, stderr)
+           Barbeque::ApplicationRecord.transaction do
+             job_retry.update!(status: :failed, finished_at: Time.zone.now)
+             job_execution.update!(status: :failed)
+           end
+           Barbeque::SlackNotifier.notify_job_retry(job_retry)
+         end
+       end
+
+       # @param [Barbeque::JobExecution] job_execution
+       def poll_execution(job_execution)
+         hako_task = Barbeque::EcsHakoTask.find_by!(message_id: job_execution.message_id)
+         result = get_stopped_result(hako_task)
+         if result
+           detail = result.fetch('detail')
+           task = Aws::Json::Parser.new(Aws::ECS::Client.api.operation('describe_tasks').output.shape.member(:tasks).shape.member).parse(JSON.dump(detail))
+           status = :failed
+           task.containers.each do |container|
+             if container.name == 'app'
+               status = container.exit_code == 0 ? :success : :failed
+             end
+           end
+           job_execution.update!(status: status, finished_at: task.stopped_at)
+           Barbeque::SlackNotifier.notify_job_execution(job_execution)
+         end
+       end
+
+       # @param [Barbeque::JobRetry] job_execution
+       def poll_retry(job_retry)
+         hako_task = Barbeque::EcsHakoTask.find_by!(message_id: job_retry.message_id)
+         job_execution = job_retry.job_execution
+         result = get_stopped_result(hako_task)
+         if result
+           detail = result.fetch('detail')
+           task = Aws::Json::Parser.new(Aws::ECS::Client.api.operation('describe_tasks').output.shape.member(:tasks).shape.member).parse(JSON.dump(detail))
+           status = :failed
+           task.containers.each do |container|
+             if container.name == 'app'
+               status = container.exit_code == 0 ? :success : :failed
+             end
+           end
+           Barbeque::ApplicationRecord.transaction do
+             job_retry.update!(status: status, finished_at: task.stopped_at)
+             job_execution.update!(status: status)
+           end
+           Barbeque::SlackNotifier.notify_job_retry(job_retry)
+         end
+       end
+
+       private
+
+       def build_hako_oneshot_command(docker_image, command, envs)
+         [
+           'bundle', 'exec', 'hako', 'oneshot', '--no-wait', '--tag', docker_image.tag,
+           *env_options(envs), File.join(@yaml_dir, "#{docker_image.repository}.yml"), '--', *command,
+         ]
+       end
+
+       def env_options(envs)
+         envs.map do |key, value|
+           "--env=#{key}=#{value}"
+         end
+       end
+
+       def s3_key_for_stopped_result(hako_task)
+         "#{@s3_prefix}/#{hako_task.task_arn}/stopped.json"
+       end
+
+       def s3_client
+         @s3_client ||= Aws::S3::Client.new(region: @s3_region, http_read_timeout: 5)
+       end
+
+       def get_stopped_result(hako_task)
+         object = s3_client.get_object(bucket: @s3_bucket, key: s3_key_for_stopped_result(hako_task))
+         JSON.parse(object.body.read)
+       rescue Aws::S3::Errors::NoSuchKey
+         nil
+       end
+
+       def extract_task_info(stdout)
+         last_line = stdout.lines.last
+         if last_line
+           begin
+             task_info = JSON.parse(last_line)
+             cluster = task_info['cluster']
+             task_arn = task_info['task_arn']
+             if cluster && task_arn
+               [cluster, task_arn]
+             else
+               raise HakoCommandError.new("Unable find cluster and task_arn in JSON: #{stdout}")
+             end
+           rescue JSON::ParserError => e
+             raise HakoCommandError.new("Unable parse the last line as JSON: #{stdout}")
+           end
+         else
+           raise HakoCommandError.new('stdout is empty')
+         end
+       end
+     end
+   end
+ end
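`Hako#initialize` splits `oneshot_notification_prefix` into an S3 bucket, key prefix, and region, and the executor later polls `#{prefix}/#{task_arn}/stopped.json` for the stopped-task notification. That parsing can be reproduced in isolation (the URI and task ARN below are made-up examples):

```ruby
require 'uri'

# Same parsing as Barbeque::Executor::Hako#initialize, on a made-up prefix.
prefix_uri = URI.parse('s3://barbeque-notifications/hako-tasks?region=ap-northeast-1')
s3_bucket = prefix_uri.host                                             # "barbeque-notifications"
s3_prefix = prefix_uri.path.sub(%r{\A/}, '')                            # "hako-tasks"
s3_region = URI.decode_www_form(prefix_uri.query || '').to_h['region']  # "ap-northeast-1"

# Key where the executor polls for the ECS task's stopped event (ARN is illustrative):
task_arn = 'arn:aws:ecs:ap-northeast-1:123456789012:task/0123abcd'
key = "#{s3_prefix}/#{task_arn}/stopped.json"
```

Until CloudWatch Events writes that object, `get_stopped_result` rescues `Aws::S3::Errors::NoSuchKey` and returns nil, so the poller simply tries again on its next pass.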