active_encode 0.8.0 → 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.circleci/config.yml +26 -17
- data/.rubocop.yml +7 -3
- data/.rubocop_todo.yml +8 -1
- data/CONTRIBUTING.md +42 -12
- data/Gemfile +11 -11
- data/README.md +64 -10
- data/active_encode.gemspec +2 -4
- data/app/controllers/active_encode/encode_record_controller.rb +1 -1
- data/app/jobs/active_encode/polling_job.rb +1 -1
- data/app/models/active_encode/encode_record.rb +1 -1
- data/guides/media_convert_adapter.md +208 -0
- data/lib/active_encode/base.rb +1 -1
- data/lib/active_encode/core.rb +14 -14
- data/lib/active_encode/engine_adapter.rb +13 -13
- data/lib/active_encode/engine_adapters/elastic_transcoder_adapter.rb +158 -158
- data/lib/active_encode/engine_adapters/ffmpeg_adapter.rb +14 -3
- data/lib/active_encode/engine_adapters/matterhorn_adapter.rb +204 -202
- data/lib/active_encode/engine_adapters/media_convert_adapter.rb +435 -203
- data/lib/active_encode/engine_adapters/media_convert_output.rb +67 -5
- data/lib/active_encode/engine_adapters/pass_through_adapter.rb +3 -3
- data/lib/active_encode/engine_adapters/zencoder_adapter.rb +114 -114
- data/lib/active_encode/errors.rb +1 -1
- data/lib/active_encode/persistence.rb +19 -19
- data/lib/active_encode/version.rb +1 -1
- data/lib/file_locator.rb +6 -6
- data/spec/fixtures/ffmpeg/cancelled-id/exit_status.code +1 -0
- data/spec/fixtures/ffmpeg/completed-id/exit_status.code +1 -0
- data/spec/fixtures/ffmpeg/completed-with-warnings-id/error.log +3 -0
- data/spec/fixtures/ffmpeg/completed-with-warnings-id/exit_status.code +1 -0
- data/spec/fixtures/ffmpeg/completed-with-warnings-id/input_metadata +102 -0
- data/spec/fixtures/ffmpeg/completed-with-warnings-id/output_metadata-high +90 -0
- data/spec/fixtures/ffmpeg/completed-with-warnings-id/output_metadata-low +90 -0
- data/spec/fixtures/ffmpeg/completed-with-warnings-id/pid +1 -0
- data/spec/fixtures/ffmpeg/completed-with-warnings-id/progress +11 -0
- data/spec/fixtures/ffmpeg/completed-with-warnings-id/video-high.mp4 +0 -0
- data/spec/fixtures/ffmpeg/completed-with-warnings-id/video-low.mp4 +0 -0
- data/spec/fixtures/ffmpeg/failed-id/exit_status.code +1 -0
- data/spec/fixtures/media_convert/job_completed_empty_detail.json +1 -0
- data/spec/integration/ffmpeg_adapter_spec.rb +50 -1
- data/spec/integration/matterhorn_adapter_spec.rb +1 -2
- data/spec/integration/media_convert_adapter_spec.rb +144 -0
- data/spec/integration/pass_through_adapter_spec.rb +2 -2
- data/spec/integration/zencoder_adapter_spec.rb +3 -3
- data/spec/units/core_spec.rb +1 -1
- data/spec/units/file_locator_spec.rb +3 -3
- data/spec/units/status_spec.rb +1 -1
- metadata +52 -19
data/guides/media_convert_adapter.md ADDED
@@ -0,0 +1,208 @@

# MediaConvertAdapter

To use active_encode with MediaConvert, you will need:

* An AWS account that has access to create MediaConvert jobs
* An AWS [IAM service role that has access to the necessary AWS resources](https://docs.aws.amazon.com/mediaconvert/latest/ug/iam-role.html)
* An S3 bucket to store source files
* An S3 bucket to store derivatives (recommended to be separate)
* Existing [MediaConvert output presets](https://docs.aws.amazon.com/mediaconvert/latest/ug/creating-preset-from-scratch.html) to define your outputs (e.g. desired HLS adaptive variants)
* An EventBridge/CloudWatch setup to capture output information (can be created automatically by the `setup!` method described below)
* These gems added to your project Gemfile -- they are required by the MediaConvert adapter but are not included as active_encode dependencies (see the Gemfile sketch after this list):
    * aws-sdk-cloudwatchevents
    * aws-sdk-cloudwatchlogs
    * aws-sdk-mediaconvert
    * aws-sdk-s3
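A minimal sketch of those Gemfile additions (no version constraints are prescribed by active_encode; pin them as your project requires):

```ruby
# Gemfile
gem "active_encode"

# AWS SDK gems the MediaConvert adapter needs, but which active_encode
# does not declare as dependencies of its own:
gem "aws-sdk-cloudwatchevents"
gem "aws-sdk-cloudwatchlogs"
gem "aws-sdk-mediaconvert"
gem "aws-sdk-s3"
```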
You may find this tutorial helpful to create the AWS resources you need, or to debug the process: https://github.com/aws-samples/aws-media-services-simple-vod-workflow (mainly the first two modules).

## Note: No Technical Metadata

This adapter does _not_ perform input characterization or fill out technical metadata in the encoding job's `input` object. Technical metadata in `encode.input` such as `width`, `duration`, or `video_codec` will be nil.
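If your application needs that metadata, you will have to characterize the input yourself, for instance with a tool like `ffprobe`. A hypothetical sketch -- neither `ffprobe` nor this helper is part of active_encode:

```ruby
require "json"
require "open3"

# Hypothetical helper: probe a local file with ffprobe and collect the
# fields this adapter leaves nil on encode.input.
def probe_technical_metadata(path)
  stdout, status = Open3.capture2(
    "ffprobe", "-v", "quiet", "-print_format", "json",
    "-show_format", "-show_streams", path
  )
  raise "ffprobe failed for #{path}" unless status.success?

  data = JSON.parse(stdout)
  video = (data["streams"] || []).find { |s| s["codec_type"] == "video" } || {}

  {
    width: video["width"],
    height: video["height"],
    video_codec: video["codec_name"],
    duration: data.dig("format", "duration")&.to_f
  }
end
```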
## CloudWatch and EventBridge setup, optionally with setup!

[AWS Elemental MediaConvert](https://aws.amazon.com/mediaconvert/) makes it difficult to access detailed output information in the job description that can be pulled directly from the service. One way to work around this is to capture the MediaConvert job status notification when the job status changes to `COMPLETE`, via an [Amazon EventBridge](https://aws.amazon.com/eventbridge/) rule that forwards the status change notification to another service for capture and/or handling -- for instance a [CloudWatch Logs](https://aws.amazon.com/cloudwatch/) log group.

`ActiveEncode::EngineAdapters::MediaConvert` is written to get detailed output information from just such a setup: a CloudWatch log group that has been set up to receive MediaConvert job status `COMPLETE` notifications via an EventBridge rule.

We provide a method to create this CloudWatch and EventBridge infrastructure for you, the `#setup!` method:

```ruby
ActiveEncode::Base.engine_adapter = :media_convert
ActiveEncode::Base.engine_adapter.setup!
```

The active AWS user/role when calling the `#setup!` method will require permissions to create the necessary CloudWatch and EventBridge resources.

The `setup!` method will create an EventBridge rule and a CloudWatch log group whose names are based on the MediaConvert queue setting, by default `"Default"`. So:
* EventBridge rule: `active-encode-mediaconvert-Default`
* Log group name: `/aws/events/active-encode/mediaconvert/Default`

The names chosen will respect the `log_group` and `queue` config, though, if set.
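For example, since the names respect those settings, configuring a non-default queue before calling `setup!` should change the created resource names accordingly (the queue name here is hypothetical):

```ruby
ActiveEncode::Base.engine_adapter = :media_convert
ActiveEncode::Base.engine_adapter.queue = "my-custom-queue" # hypothetical queue name
ActiveEncode::Base.engine_adapter.setup!
# Expected to create the rule `active-encode-mediaconvert-my-custom-queue`
# and the log group `/aws/events/active-encode/mediaconvert/my-custom-queue`.
```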
**Alternately**, we have an experimental flag to get and derive what output information we can directly from the job, without requiring a CloudWatch log -- at present this is expected to be complete only for HLS output, where it seems to work well. To opt in, and not require CloudWatch logs, try:

    ActiveEncode::Base.engine_adapter.direct_output_lookup = true
## Configuration

Some parameters are set as (typically global) configuration, while others are passed in as parameters to `create`. Here we'll discuss the configuration.

* `role`. Required. An IAM role that the MediaConvert job will run under. This is [required by MediaConvert](https://docs.aws.amazon.com/mediaconvert/latest/ug/iam-role.html); it can't just use your current AWS credentials.

* `output_bucket`. Required. An S3 bucket name; all output will be written to this bucket, at a path prefix specified in the `create` call.

* `log_group`. Optional, unusual. Specify the name of the CloudWatch log group to use for logging. By default, it will be constructed automatically from the MediaConvert queue in use.

* `queue`. Optional, unusual. Specify the name of the [MediaConvert queue](https://docs.aws.amazon.com/mediaconvert/latest/ug/working-with-queues.html) to use. By default it will use the MediaConvert default, called `"Default"`. Ordinarily there is no reason to set this.

```ruby
ActiveEncode::Base.engine_adapter = :media_convert

ActiveEncode::Base.engine_adapter.role = 'arn:aws:iam::11111111111111:role/my-role-name'
ActiveEncode::Base.engine_adapter.output_bucket = 'my-bucket-name'
```
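In a Rails app, one natural place for this global configuration is an initializer. A minimal sketch, assuming a file path and environment variable names of your own choosing (both are illustrative, not prescribed by active_encode):

```ruby
# config/initializers/active_encode.rb (hypothetical path)
ActiveEncode::Base.engine_adapter = :media_convert

# Hypothetical environment variable names:
ActiveEncode::Base.engine_adapter.role = ENV["MEDIACONVERT_ROLE_ARN"]
ActiveEncode::Base.engine_adapter.output_bucket = ENV["MEDIACONVERT_OUTPUT_BUCKET"]
```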
## Input and output options, and the masterfile_bucket

The adapter can take a local file as its argument (via a `file://` URL or any other standard way for ActiveEncode), _or_ an `s3://` URL.

The input, whether a local file _or_ a remote S3 file, is normally _copied_ to a random-string-path location in the `masterfile_bucket`, and then that copy is used as the input to the MediaConvert process. (If the input is already an `s3://` URL located in the `masterfile_bucket`, it is used as-is.)
```ruby
ActiveEncode::Base.create(
  "file://path/to/file.mp4",
  {
    masterfile_bucket: "my-masterfile-bucket",
    output_prefix: "path/to/output/base_name_of_outputs",
    outputs: [
      { preset: "my-hls-preset-high", modifier: "_high" },
      { preset: "my-hls-preset-medium", modifier: "_medium" },
      { preset: "my-hls-preset-low", modifier: "_low" }
    ]
  }
)
# your input will be COPIED to my-masterfile-bucket, and that copy passed
# as an input to the MediaConvert operation.
```
However, if you pass the `use_original_url` option, then an `s3://` input URL you pass in will _not_ be copied to the `masterfile_bucket`, but passed directly as input to the MediaConvert process.

```ruby
ActiveEncode::Base.create(
  "s3://some-other-bucket/path/to/file.mp4",
  {
    masterfile_bucket: "my-masterfile-bucket",
    use_original_url: true,
    output_prefix: "path/to/output/base_name_of_outputs",
    outputs: [
      { preset: "my-hls-preset-high", modifier: "_high" },
      { preset: "my-hls-preset-medium", modifier: "_medium" }
    ]
  }
)
# the S3 input will be used directly as input to the MediaConvert process;
# it will not be copied to the masterfile_bucket first.
```

Only in this case of `use_original_url` with an `s3://` input source can the `masterfile_bucket` argument be omitted, since it will not be used.
You can also use `destination` instead of `output_prefix` to supply a complete `s3://` URL, ignoring the `output_bucket` config. Combined with `use_original_url`, you can supply both inputs and outputs as plain S3 URLs.

```ruby
ActiveEncode::Base.create(
  "s3://some-other-bucket/path/to/file.mp4",
  {
    use_original_url: true,
    destination: "s3://my-output-bucket/path/to/output/base_name_of_outputs",
    outputs: [
      { preset: "my-hls-preset-high", modifier: "_high" },
      { preset: "my-hls-preset-medium", modifier: "_medium" }
    ]
  }
)
```
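Whichever input form you use, `create` returns an encode object that you can look up again later by id. A brief sketch using the standard ActiveEncode API (argument values are the illustrative ones from above; the states shown are those active_encode adapters report):

```ruby
encode = ActiveEncode::Base.create(
  "s3://some-other-bucket/path/to/file.mp4",
  {
    use_original_url: true,
    destination: "s3://my-output-bucket/path/to/output/base_name_of_outputs",
    outputs: [{ preset: "my-hls-preset-high", modifier: "_high" }]
  }
)

# Later (e.g. from a polling job), look the encode up again by id:
encode = ActiveEncode::Base.find(encode.id)
encode.state            # :running, then eventually :completed, :failed, or :cancelled
encode.percent_complete # rough progress estimate
encode.output           # output details, once available
```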

## AWS Auth Credentials

When interacting with AWS services, the adapter will use the [current AWS credentials looked up from the environment](https://docs.aws.amazon.com/sdk-for-ruby/v3/developer-guide/setup-config.html#aws-ruby-sdk-setting-credentials) in the standard way, such as the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` [environment variables](https://docs.aws.amazon.com/sdkref/latest/guide/environment-variables.html), or [on disk in a credentials file](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html).
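If you prefer to set credentials in code rather than through the environment, the aws-sdk-ruby global config can be used. A minimal sketch (the region and environment variable names are illustrative):

```ruby
require "aws-sdk-core"

# Applies to every AWS client the adapter constructs in this process.
Aws.config.update(
  region: "us-east-1", # illustrative region
  credentials: Aws::Credentials.new(
    ENV.fetch("MY_AWS_KEY_ID"),    # hypothetical variable names
    ENV.fetch("MY_AWS_SECRET_KEY")
  )
)
```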
The IAM identity, in order to issue MediaConvert jobs and get output information from CloudWatch, will need the following permissions, as in this example policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "mediaconvertActions",
      "Effect": "Allow",
      "Action": "mediaconvert:*",
      "Resource": "*"
    },
    {
      "Sid": "iamPassRole",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::111122223333:role/MediaConvertRole"
    },
    {
      "Sid": "logsStartQuery",
      "Effect": "Allow",
      "Action": "logs:StartQuery",
      "Resource": "*"
    },
    {
      "Sid": "logsGetQuery",
      "Effect": "Allow",
      "Action": "logs:GetQueryResults",
      "Resource": "*"
    }
  ]
}
```

Here the `iamPassRole` resource is the role you will pass in the `role` configuration. The `logsStartQuery` and `logsGetQuery` permissions could probably be limited further, to the specific CloudWatch log group.

MediaConvert necessarily [requires you to pass a separate IAM role](https://docs.aws.amazon.com/mediaconvert/latest/ug/iam-role.html) that will be used by the actual MediaConvert operation -- the `role` config for this adapter. That role will need this permission:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "execute-api:Invoke",
        "execute-api:ManageConnections"
      ],
      "Resource": "arn:aws:execute-api:*:*:*"
    }
  ]
}
```

That is in addition to read/write access to the relevant S3 buckets.
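For instance, the S3 portion of that role's policy might look something like this sketch (bucket names are illustrative, and the exact actions your workflow needs may vary):

```json
{
  "Effect": "Allow",
  "Action": ["s3:GetObject", "s3:ListBucket", "s3:PutObject"],
  "Resource": [
    "arn:aws:s3:::my-masterfile-bucket",
    "arn:aws:s3:::my-masterfile-bucket/*",
    "arn:aws:s3:::my-output-bucket",
    "arn:aws:s3:::my-output-bucket/*"
  ]
}
```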
Also see [this tutorial](https://github.com/aws-samples/aws-media-services-simple-vod-workflow/blob/master/1-IAMandS3/README.md#1-create-an-iam-role-to-use-with-aws-elemental-mediaconvert).

data/lib/active_encode/base.rb CHANGED

data/lib/active_encode/core.rb CHANGED
@@ -73,19 +73,19 @@ module ActiveEncode

     protected

-    …
+    def merge!(encode)
+      @id = encode.id
+      @input = encode.input
+      @output = encode.output
+      @options = encode.options
+      @state = encode.state
+      @errors = encode.errors
+      @created_at = encode.created_at
+      @updated_at = encode.updated_at
+      @current_operations = encode.current_operations
+      @percent_complete = encode.percent_complete
+
+      self
+    end
   end
 end
data/lib/active_encode/engine_adapter.rb CHANGED

@@ -6,7 +6,7 @@ require 'active_support/core_ext/string/inflections'
 module ActiveEncode
   # The <tt>ActiveEncode::EngineAdapter</tt> module is used to load the
   # correct adapter. The default engine adapter is the :active_job engine.
-  module EngineAdapter
+  module EngineAdapter # :nodoc:
     extend ActiveSupport::Concern

     included do
@@ -29,21 +29,21 @@ module ActiveEncode

       private

-      …
-        end
+      def interpret_adapter(name_or_adapter_or_class)
+        case name_or_adapter_or_class
+        when Symbol, String
+          ActiveEncode::EngineAdapters.lookup(name_or_adapter_or_class).new
+        else
+          name_or_adapter_or_class if engine_adapter?(name_or_adapter_or_class)
+          raise ArgumentError unless engine_adapter?(name_or_adapter_or_class)
         end
+      end

-      …
+      ENGINE_ADAPTER_METHODS = %i[create find cancel].freeze

-      …
+      def engine_adapter?(object)
+        ENGINE_ADAPTER_METHODS.all? { |meth| object.respond_to?(meth) }
+      end
     end
   end
 end
data/lib/active_encode/engine_adapters/elastic_transcoder_adapter.rb CHANGED

@@ -41,194 +41,194 @@ module ActiveEncode

       private

-      …
+      # Needs region and credentials setup per http://docs.aws.amazon.com/sdkforruby/api/Aws/ElasticTranscoder/Client.html
+      def client
+        @client ||= Aws::ElasticTranscoder::Client.new
+      end

-      …
+      def s3client
+        Aws::S3::Client.new
+      end

-      …
+      def get_job_details(job_id)
+        client.read_job(id: job_id)&.job
+      end

-      …
-        encode.output = convert_output(job)
-        encode.errors = job.outputs.select { |o| o.status == "Error" }.collect(&:status_detail).compact
-
-        tech_md = convert_tech_metadata(job.input.detected_properties)
-        [:width, :height, :frame_rate, :duration, :file_size].each do |field|
-          encode.input.send("#{field}=", tech_md[field])
-        end
+      def build_encode(job)
+        return nil if job.nil?
+        encode = ActiveEncode::Base.new(convert_input(job), {})
+        encode.id = job.id
+        encode.state = JOB_STATES[job.status]
+        encode.current_operations = []
+        encode.percent_complete = convert_percent_complete(job)
+        encode.created_at = convert_time(job.timing["submit_time_millis"])
+        encode.updated_at = convert_time(job.timing["finish_time_millis"]) || convert_time(job.timing["start_time_millis"]) || encode.created_at

-      …
-        encode.input.created_at = encode.created_at
-        encode.input.updated_at = encode.updated_at
+        encode.output = convert_output(job)
+        encode.errors = job.outputs.select { |o| o.status == "Error" }.collect(&:status_detail).compact

-      …
+        tech_md = convert_tech_metadata(job.input.detected_properties)
+        [:width, :height, :frame_rate, :duration, :file_size].each do |field|
+          encode.input.send("#{field}=", tech_md[field])
         end

-      …
+        encode.input.id = job.id
+        encode.input.state = encode.state
+        encode.input.created_at = encode.created_at
+        encode.input.updated_at = encode.updated_at

-      …
-        (rate.to_f * 1024).to_s
-      end
+        encode
+      end

-      …
-        when "Canceled"
-          :cancelled
-        when "Error"
-          :failed
-        when "Complete"
-          :completed
-        end
-      end
+      def convert_time(time_millis)
+        return nil if time_millis.nil?
+        Time.at(time_millis / 1000).utc
+      end

-      …
+      def convert_bitrate(rate)
+        return nil if rate.nil?
+        (rate.to_f * 1024).to_s
+      end

-      …
-      end
+      def convert_state(job)
+        case job.status
+        when "Submitted", "Progressing" # Should there be a queued state?
+          :running
+        when "Canceled"
+          :cancelled
+        when "Error"
+          :failed
+        when "Complete"
+          :completed
         end
+      end

-      …
+      def convert_percent_complete(job)
+        job.outputs.inject(0) { |sum, output| sum + output_percentage(output) } / job.outputs.length
+      end

-      …
+      def output_percentage(output)
+        case output.status
+        when "Submitted"
+          10
+        when "Progressing", "Canceled", "Error"
+          50
+        when "Complete"
+          100
+        else
+          0
         end
+      end

-      …
-        if s3_object.bucket_name == source_bucket
-          # logger.info("Already in bucket `#{source_bucket}'")
-          s3_object.key
-        else
-          s3_key = File.join(SecureRandom.uuid, s3_object.key)
-          # logger.info("Copying to `#{source_bucket}/#{input_url}'")
-          target = Aws::S3::Object.new(bucket_name: source_bucket, key: input_url)
-          target.copy_from(s3_object, multipart_copy: s3_object.size > 15_728_640) # 15.megabytes
-          s3_key
-        end
-      end
+      def convert_input(job)
+        job.input.key
+      end

-      …
+      def copy_to_input_bucket(input_url, bucket)
+        case Addressable::URI.parse(input_url).scheme
+        when nil, 'file'
+          upload_to_s3 input_url, bucket
+        when 's3'
+          check_s3_bucket input_url, bucket
+        end
+      end

+      def check_s3_bucket(input_url, source_bucket)
+        # logger.info("Checking `#{input_url}'")
+        s3_object = FileLocator::S3File.new(input_url).object
+        if s3_object.bucket_name == source_bucket
+          # logger.info("Already in bucket `#{source_bucket}'")
+          s3_object.key
+        else
+          s3_key = File.join(SecureRandom.uuid, s3_object.key)
+          # logger.info("Copying to `#{source_bucket}/#{input_url}'")
+          target = Aws::S3::Object.new(bucket_name: source_bucket, key: input_url)
+          target.copy_from(s3_object, multipart_copy: s3_object.size > 15_728_640) # 15.megabytes
           s3_key
         end
+      end

-      …
+      def upload_to_s3(input_url, source_bucket)
+        # original_input = input_url
+        bucket = Aws::S3::Resource.new(client: s3client).bucket(source_bucket)
+        filename = FileLocator.new(input_url).location
+        s3_key = File.join(SecureRandom.uuid, File.basename(filename))
+        # logger.info("Copying `#{original_input}' to `#{source_bucket}/#{input_url}'")
+        obj = bucket.object(s3_key)
+        obj.upload_file filename

-      …
-        job.outputs.collect do |joutput|
-          preset = read_preset(joutput.preset_id)
-          extension = preset.container == 'ts' ? '.m3u8' : ''
-          additional_metadata = {
-            managed: false,
-            id: joutput.id,
-            label: joutput.key.split("/", 2).first,
-            url: "s3://#{@pipeline.output_bucket}/#{job.output_key_prefix}#{joutput.key}#{extension}"
-          }
-          tech_md = convert_tech_metadata(joutput, preset).merge(additional_metadata)
-
-          output = ActiveEncode::Output.new
-          output.state = convert_state(joutput)
-          output.created_at = convert_time(job.timing["submit_time_millis"])
-          output.updated_at = convert_time(job.timing["finish_time_millis"] || job.timing["start_time_millis"]) || output.created_at
-
-          [:width, :height, :frame_rate, :duration, :checksum, :audio_codec, :video_codec,
-           :audio_bitrate, :video_bitrate, :file_size, :label, :url, :id].each do |field|
-            output.send("#{field}=", tech_md[field])
-          end
-
-          output
-        end
-      end
+        s3_key
+      end

-      …
+      def read_preset(id)
+        @presets ||= {}
+        @presets[id] ||= client.read_preset(id: id).preset
+      end

-      …
+      def convert_output(job)
+        @pipeline ||= client.read_pipeline(id: job.pipeline_id).pipeline
+        job.outputs.collect do |joutput|
+          preset = read_preset(joutput.preset_id)
+          extension = preset.container == 'ts' ? '.m3u8' : ''
+          additional_metadata = {
+            managed: false,
+            id: joutput.id,
+            label: joutput.key.split("/", 2).first,
+            url: "s3://#{@pipeline.output_bucket}/#{job.output_key_prefix}#{joutput.key}#{extension}"
           }
+          tech_md = convert_tech_metadata(joutput, preset).merge(additional_metadata)

-      …
-          next if conversion.nil?
-          metadata[conversion[:key]] = value.send(conversion[:method])
-        end
+          output = ActiveEncode::Output.new
+          output.state = convert_state(joutput)
+          output.created_at = convert_time(job.timing["submit_time_millis"])
+          output.updated_at = convert_time(job.timing["finish_time_millis"] || job.timing["start_time_millis"]) || output.created_at

-      …
-        metadata.merge!(
-          audio_codec: audio&.codec,
-          audio_channels: audio&.channels,
-          audio_bitrate: convert_bitrate(audio&.bit_rate),
-          video_codec: video&.codec,
-          video_bitrate: convert_bitrate(video&.bit_rate)
-        )
+          [:width, :height, :frame_rate, :duration, :checksum, :audio_codec, :video_codec,
+           :audio_bitrate, :video_bitrate, :file_size, :label, :url, :id].each do |field|
+            output.send("#{field}=", tech_md[field])
           end

-      …
+          output
         end
+      end
+
+      def convert_errors(job)
+        job.outputs.select { |o| o.status == "Error" }.collect(&:status_detail).compact
+      end
+
+      def convert_tech_metadata(props, preset = nil)
+        return {} if props.blank?
+        metadata_fields = {
+          file_size: { key: :file_size, method: :itself },
+          duration_millis: { key: :duration, method: :to_i },
+          frame_rate: { key: :frame_rate, method: :to_i },
+          segment_duration: { key: :segment_duration, method: :itself },
+          width: { key: :width, method: :itself },
+          height: { key: :height, method: :itself }
+        }
+
+        metadata = {}
+        props.each_pair do |key, value|
+          next if value.nil?
+          conversion = metadata_fields[key.to_sym]
+          next if conversion.nil?
+          metadata[conversion[:key]] = value.send(conversion[:method])
+        end
+
+        unless preset.nil?
+          audio = preset.audio
+          video = preset.video
+          metadata.merge!(
+            audio_codec: audio&.codec,
+            audio_channels: audio&.channels,
+            audio_bitrate: convert_bitrate(audio&.bit_rate),
+            video_codec: video&.codec,
+            video_bitrate: convert_bitrate(video&.bit_rate)
+          )
+        end
+
+        metadata
+      end
     end
   end
 end