burner 1.0.0 → 1.3.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: d3d251463ade2de965d0dd1e0d635967571d3d1d4cab46a7d1271ab057d1c2e3
-  data.tar.gz: 2eac66d3f745888be07cf5d2d66f6e8429c61b0255ad0916f7c6f8c792194dcd
+  metadata.gz: be4b0f77b37a352fc98dc859e4593750670442f8b9353ef86836749e0b9dee8c
+  data.tar.gz: b5cd388f6886fd3956a33db754fbad454e8d94a68d9920cbdb0f82f40bec9a04
 SHA512:
-  metadata.gz: c57dabbc95d1b58b5f0fe15429f36682d0dec0771951bb01b6e8c6345ddd9e95b5805ad57f0e0166e8e4e87d0296885915c022895818e46c3389049207e0e719
-  data.tar.gz: f23ecf1374570ad2f5cc6dfb70c88c895e4d45e0c2eb7453878038dc507d1a808ca3894c50a320b4a259df09ad7f1368a4b8c7bd790624b27baac487df1b48cf
+  metadata.gz: 942fc77175829d47a2b070d5dbdc7f7bb024e6f3be58254cd3b42ab5e70a64f688dd2d3fcd48aa7b193d349a1b497cdde0a7b2dfd95fd25078b56d59f52d94ad
+  data.tar.gz: dd6e6e5ce6de426c2fbf0e4950adbd3c908f7ed49e2314d242e43538bf90768de5216db34fe940050c02231b2dac6115ded03a29ae415a2c5d591aeb771d49ab
CHANGELOG.md CHANGED
@@ -1,3 +1,25 @@
+# 1.3.0 (December 11th, 2020)
+
+Additions:
+
+* Decoupled storage: `Burner::Disks` factory, `Burner::Disks::Local` reference implementation, and `b/io/*` `disk` option for configuring IO jobs to use custom disks.
+# 1.2.0 (November 25th, 2020)
+
+#### Enhancements:
+
+* Allow for a pipeline to be configured with null steps. When null, all jobs are executed in positional order.
+* Allow Collection::Transform job attributes to implicitly start from a resolve transformer. `explicit: true` can be passed in as an option when the desire is to begin from the record and not a specific value.
+
+#### Added Jobs:
+
+* b/collection/nested_aggregate
+# 1.1.0 (November 16, 2020)
+
+Added Jobs:
+
+* b/collection/coalesce
+* b/collection/group
+
 # 1.0.0 (November 5th, 2020)
 
 Initial version publication.
data/README.md CHANGED
@@ -42,7 +42,7 @@ pipeline = {
     {
       name: :output_value,
       type: 'b/echo',
-      message: 'The current value is: {__value}'
+      message: 'The current value is: {__default_register}'
     },
     {
       name: :parse,
@@ -89,40 +89,22 @@ Some notes:
 
 * Some values are able to be string-interpolated using the provided Payload#params. This allows for passing runtime configuration/data into pipelines/jobs.
 * The job's ID can be accessed using the `__id` key.
-* The current job's payload value can be accessed using the `__value` key.
+* The current payload registers' values can be accessed using the `__<register_name>_register` key.
 * Jobs can be re-used (just like the output_id and output_value jobs).
+* If steps is nil then all jobs will execute in their declared order.
 
 ### Capturing Feedback / Output
 
 By default, output will be emitted to `$stdout`. You can add or change listeners by passing in optional values into Pipeline#execute. For example, say we wanted to capture the output from our json-to-yaml example:
 
 ````ruby
-class StringOut
-  def initialize
-    @io = StringIO.new
-  end
-
-  def puts(msg)
-    tap { io.write("#{msg}\n") }
-  end
-
-  def read
-    io.rewind
-    io.read
-  end
-
-  private
-
-  attr_reader :io
-end
-
-string_out = StringOut.new
-output = Burner::Output.new(outs: string_out)
-payload = Burner::Payload.new(params: params)
+io = StringIO.new
+output = Burner::Output.new(outs: io)
+payload = Burner::Payload.new(params: params)
 
 Burner::Pipeline.make(pipeline).execute(output: output, payload: payload)
 
-log = string_out.read
+log = io.string
 ````
 
 The value of `log` should now look similar to:
@@ -181,7 +163,7 @@ jobs:
 
   - name: output_value
     type: b/echo
-    message: 'The current value is: {__value}'
+    message: 'The current value is: {__default_register}'
 
   - name: parse
     type: b/deserialize/json
@@ -234,11 +216,14 @@ This library only ships with very basic, rudimentary jobs that are meant to just
 
 #### Collection
 
 * **b/collection/arrays_to_objects** [mappings, register]: Convert an array of arrays to an array of objects.
+* **b/collection/coalesce** [register, grouped_register, key_mappings, keys, separator]: Merge two datasets together based on the key values of one dataset (array) with a grouped dataset (hash).
 * **b/collection/concatenate** [from_registers, to_register]: Concatenate each from_register's value and place the newly concatenated array into the to_register. Note: this does not do any deep copying and it should be assumed it is shallow copying all objects.
 * **b/collection/graph** [config, key, register]: Use [Hashematics](https://github.com/bluemarblepayroll/hashematics) to turn a flat array of objects into a deeply nested object tree.
+* **b/collection/group** [keys, register, separator]: Take a register's value (an array of objects) and group the objects by the specified keys.
+* **b/collection/nested_aggregate** [register, key_mappings, key, separator]: Traverse a set of objects, resolving the key's value for each object, optionally copying down key_mappings to the child records, then merging all the inner records together.
 * **b/collection/objects_to_arrays** [mappings, register]: Convert an array of objects to an array of arrays.
 * **b/collection/shift** [amount, register]: Remove the first N elements from an array.
-* **b/collection/transform** [attributes, exclusive, separator, register]: Iterate over all objects and transform each key per the attribute transformer specifications. If exclusive is set to false then the current object will be overridden/merged. Separator can also be set for key path support. This job uses [Realize](https://github.com/bluemarblepayroll/realize), which provides its own extendable value-transformation pipeline.
+* **b/collection/transform** [attributes, exclusive, separator, register]: Iterate over all objects and transform each key per the attribute transformer specifications. If exclusive is set to false then the current object will be overridden/merged. Separator can also be set for key path support. This job uses [Realize](https://github.com/bluemarblepayroll/realize), which provides its own extendable value-transformation pipeline. If an attribute is not set with `explicit: true` then it will automatically start from the key's value in the record. If `explicit: true` is set, then it will start from the record itself.
 * **b/collection/unpivot** [pivot_set, register]: Take an array of objects and unpivot specific sets of keys into rows. Under the hood it uses [HashMath's Unpivot class](https://github.com/bluemarblepayroll/hash_math#unpivot-hash-key-coalescence-and-row-extrapolation).
 * **b/collection/validate** [invalid_register, join_char, message_key, register, separator, validations]: Take an array of objects, run it through each declared validator, and split the objects into two registers. The valid objects will be split into the current register while the invalid ones will go into the invalid_register as declared. Optional arguments, join_char and message_key, help determine the compiled error messages. The separator option can be utilized to use dot-notation for validating keys. See each validation's options by viewing their classes within the `lib/modeling/validations` directory.
 * **b/collection/values** [include_keys, register]: Take an array of objects and call `#values` on each object. If include_keys is true (it is false by default), then call `#keys` on the first object and inject that as a "header" object.
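
The new group and coalesce jobs are designed to work as a pair: group builds the O(1) lookup hash, and coalesce merges it back into another register. A minimal sketch of wiring them together (the register names and keys here are hypothetical, not from the gem's docs):

````ruby
pipeline = {
  jobs: [
    {
      name: :group_statuses,
      type: 'b/collection/group',
      register: :statuses,       # array of status objects -> hash keyed by [id]
      keys: [:id]
    },
    {
      name: :merge_statuses,
      type: 'b/collection/coalesce',
      register: :patients,       # each patient's [status_id] is looked up in :statuses
      grouped_register: :statuses,
      keys: [:status_id],
      key_mappings: [{ from: :code, to: :status_code }]
    }
  ]
  # steps omitted: as of 1.2.0, jobs execute in declared order when steps is null
}
````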
@@ -251,9 +236,11 @@ This library only ships with very basic, rudimentary jobs that are meant to just
 
 #### IO
 
-* **b/io/exist** [path, short_circuit]: Check to see if a file exists. The path parameter can be interpolated using `Payload#params`. If short_circuit was set to true (defaults to false) and the file does not exist then the pipeline will be short-circuited.
-* **b/io/read** [binary, path, register]: Read in a local file. The path parameter can be interpolated using `Payload#params`. If the contents are binary, pass in `binary: true` to open it up in binary+read mode.
-* **b/io/write** [binary, path, register]: Write to a local file. The path parameter can be interpolated using `Payload#params`. If the contents are binary, pass in `binary: true` to open it up in binary+write mode.
+By default, all IO jobs use the `Burner::Disks::Local` disk for their persistence. This is configurable by implementing and registering custom disk-based classes in the `Burner::Disks` factory. For example: a consumer application that needs to interact with a cloud-based storage provider could wrap that provider in a disk class and register it, instead of implementing custom IO jobs.
+
+* **b/io/exist** [disk, path, short_circuit]: Check to see if a file exists. The path parameter can be interpolated using `Payload#params`. If short_circuit is set to true (defaults to false) and the file does not exist then the pipeline will be short-circuited.
+* **b/io/read** [binary, disk, path, register]: Read in a local file. The path parameter can be interpolated using `Payload#params`. If the contents are binary, pass in `binary: true` to open it in binary+read mode.
+* **b/io/write** [binary, disk, path, register]: Write to a local file. The path parameter can be interpolated using `Payload#params`. If the contents are binary, pass in `binary: true` to open it in binary+write mode.
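
Since the `disk` option is hashable configuration resolved through the `Burner::Disks` factory, an IO job can select its disk inline. A sketch using the built-in `local` disk (the path and param are illustrative):

````ruby
{
  name: :read_file,
  type: 'b/io/read',
  path: 'input/{entity_id}.json',  # interpolated from Payload#params
  disk: { type: 'local' }          # the default; shown here for explicitness
}
````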
 
 #### Serialization
 
@@ -314,7 +301,7 @@ pipeline = {
     {
       name: :output_value,
       type: 'b/echo',
-      message: 'The current value is: {__value}'
+      message: 'The current value is: {__default_register}'
     },
     {
       name: :parse,
lib/burner.rb CHANGED
@@ -10,6 +10,7 @@
 require 'acts_as_hashable'
 require 'benchmark'
 require 'csv'
+require 'fileutils'
 require 'forwardable'
 require 'hash_math'
 require 'hashematics'
@@ -23,6 +24,7 @@ require 'time'
 require 'yaml'
 
 # Common/Shared
+require_relative 'burner/disks'
 require_relative 'burner/modeling'
 require_relative 'burner/side_effects'
 require_relative 'burner/util'
lib/burner/disks.rb ADDED
@@ -0,0 +1,26 @@
+# frozen_string_literal: true
+
+#
+# Copyright (c) 2020-present, Blue Marble Payroll, LLC
+#
+# This source code is licensed under the MIT license found in the
+# LICENSE file in the root directory of this source tree.
+#
+
+require_relative 'disks/local'
+
+module Burner
+  # A factory to register and emit instances that conform to the Disk interface. Each
+  # instance responds to: #exist?, #read, and #write. See an example implementation within
+  # the lib/burner/disks directory.
+  #
+  # The benefit to this pluggable disk model is that a consumer application can decide which
+  # file backend to use and how to store files. For example: an application may choose to use
+  # some cloud provider with their own file store implementation. This can be wrapped up
+  # in a Disk class, registered here, and then referenced in the Pipeline's IO jobs.
+  class Disks
+    acts_as_hashable_factory
+
+    register 'local', '', Disks::Local
+  end
+end
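
A sketch of what extending the factory might look like. The InMemoryDisk class below is hypothetical (not shipped with the gem) and assumes the same `acts_as_hashable` macro the gem's own classes use; any class responding to #exist?, #read, and #write should fit the contract described above:

````ruby
require 'burner'

# Hypothetical disk that keeps files in a plain hash; error handling and
# thread safety are omitted for brevity.
class InMemoryDisk
  acts_as_hashable

  def initialize
    @files = {}
  end

  def exist?(path)
    @files.key?(path.to_s)
  end

  def read(path, binary: false)
    @files.fetch(path.to_s)
  end

  def write(path, data, binary: false)
    @files[path.to_s] = data

    path.to_s
  end
end

Burner::Disks.register('in_memory', InMemoryDisk)

# IO jobs can now opt in with: disk: { type: 'in_memory' }
````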
lib/burner/disks/local.rb ADDED
@@ -0,0 +1,61 @@
+# frozen_string_literal: true
+
+#
+# Copyright (c) 2020-present, Blue Marble Payroll, LLC
+#
+# This source code is licensed under the MIT license found in the
+# LICENSE file in the root directory of this source tree.
+#
+
+module Burner
+  class Disks
+    # Operations against the local file system.
+    class Local
+      acts_as_hashable
+
+      # Check to see if the passed-in path exists within the local file system.
+      # It will not make assumptions about what the 'file' is, only that it is recognized
+      # by Ruby's File class.
+      def exist?(path)
+        File.exist?(path)
+      end
+
+      # Open and read the contents of a local file. If binary is passed in as true then the
+      # file will be opened in binary mode.
+      def read(path, binary: false)
+        File.open(path, read_mode(binary), &:read)
+      end
+
+      # Open and write the specified data to a local file. If binary is passed in as true then
+      # the file will be opened in binary mode. It is important to note that the file's
+      # directory structure will be automatically created if it does not exist.
+      def write(path, data, binary: false)
+        ensure_directory_exists(path)
+
+        File.open(path, write_mode(binary)) { |io| io.write(data) }
+
+        path
+      end
+
+      private
+
+      def ensure_directory_exists(path)
+        dirname = File.dirname(path)
+
+        return if File.exist?(dirname)
+
+        FileUtils.mkdir_p(dirname)
+
+        nil
+      end
+
+      def write_mode(binary)
+        binary ? 'wb' : 'w'
+      end
+
+      def read_mode(binary)
+        binary ? 'rb' : 'r'
+      end
+    end
+  end
+end
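
A quick usage sketch of the Local disk above (paths are illustrative):

````ruby
disk = Burner::Disks::Local.new

disk.write('tmp/example/out.txt', 'hello') # creates tmp/example/ if missing
disk.read('tmp/example/out.txt')           # => "hello"
disk.exist?('tmp/example/out.txt')         # => true
````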
lib/burner/jobs.rb CHANGED
@@ -23,8 +23,11 @@ module Burner
     register 'b/sleep', Library::Sleep
 
     register 'b/collection/arrays_to_objects', Library::Collection::ArraysToObjects
+    register 'b/collection/coalesce', Library::Collection::Coalesce
     register 'b/collection/concatenate', Library::Collection::Concatenate
     register 'b/collection/graph', Library::Collection::Graph
+    register 'b/collection/group', Library::Collection::Group
+    register 'b/collection/nested_aggregate', Library::Collection::NestedAggregate
     register 'b/collection/objects_to_arrays', Library::Collection::ObjectsToArrays
     register 'b/collection/shift', Library::Collection::Shift
     register 'b/collection/transform', Library::Collection::Transform
lib/burner/library.rb CHANGED
@@ -14,8 +14,11 @@ require_relative 'library/nothing'
 require_relative 'library/sleep'
 
 require_relative 'library/collection/arrays_to_objects'
+require_relative 'library/collection/coalesce'
 require_relative 'library/collection/concatenate'
 require_relative 'library/collection/graph'
+require_relative 'library/collection/group'
+require_relative 'library/collection/nested_aggregate'
 require_relative 'library/collection/objects_to_arrays'
 require_relative 'library/collection/shift'
 require_relative 'library/collection/transform'
lib/burner/library/collection/coalesce.rb ADDED
@@ -0,0 +1,73 @@
+# frozen_string_literal: true
+
+#
+# Copyright (c) 2020-present, Blue Marble Payroll, LLC
+#
+# This source code is licensed under the MIT license found in the
+# LICENSE file in the root directory of this source tree.
+#
+
+module Burner
+  module Library
+    module Collection
+      # This is generally used right after the Group job has been executed on a separate
+      # dataset in a separate register. This job can match up specified values in its dataset
+      # with lookup values in another. If it finds a match then it will (shallow) copy over
+      # the values into the respective dataset.
+      #
+      # Expected Payload[register] input: array of objects.
+      # Payload[register] output: array of objects.
+      class Coalesce < JobWithRegister
+        attr_reader :grouped_register, :key_mappings, :keys, :resolver
+
+        def initialize(
+          name:,
+          grouped_register:,
+          key_mappings: [],
+          keys: [],
+          register: DEFAULT_REGISTER,
+          separator: ''
+        )
+          super(name: name, register: register)
+
+          @grouped_register = grouped_register.to_s
+          @key_mappings     = Modeling::KeyMapping.array(key_mappings)
+          @keys             = Array(keys)
+          @resolver         = Objectable.resolver(separator: separator.to_s)
+
+          raise ArgumentError, 'at least one key is required' if @keys.empty?
+
+          freeze
+        end
+
+        def perform(output, payload)
+          payload[register] = array(payload[register])
+          count             = payload[register].length
+
+          output.detail("Coalescing based on key(s): #{keys} for #{count} record(s)")
+
+          payload[register].each do |record|
+            key    = make_key(record)
+            lookup = find_lookup(payload, key)
+
+            key_mappings.each do |key_mapping|
+              value = resolver.get(lookup, key_mapping.from)
+
+              resolver.set(record, key_mapping.to, value)
+            end
+          end
+        end
+
+        private
+
+        def find_lookup(payload, key)
+          (payload[grouped_register] || {})[key] || {}
+        end
+
+        def make_key(record)
+          keys.map { |key| resolver.get(record, key) }
+        end
+      end
+    end
+  end
+end
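
Hand-tracing the Coalesce mechanics above with plain Ruby data (the keys are hypothetical):

````ruby
# grouped_register value, e.g. produced by the Group job with keys: ['code']:
grouped = { ['a'] => { 'code' => 'a', 'desc' => 'Alpha' } }

# register value, with keys: ['code'] and key_mappings: [{ from: 'desc' }]:
records = [{ 'id' => 1, 'code' => 'a' }, { 'id' => 2, 'code' => 'z' }]

# Each record's composite key (['a'], ['z']) is looked up in grouped. Matches copy
# 'desc' over; misses copy nil, since the lookup falls back to an empty hash:
# => [{ 'id' => 1, 'code' => 'a', 'desc' => 'Alpha' },
#     { 'id' => 2, 'code' => 'z', 'desc' => nil }]
````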
lib/burner/library/collection/graph.rb CHANGED
@@ -19,7 +19,8 @@ module Burner
       attr_reader :key, :groups
 
       def initialize(
-        name:, key:,
+        name:,
+        key:,
         config: Hashematics::Configuration.new,
         register: DEFAULT_REGISTER
       )
lib/burner/library/collection/group.rb ADDED
@@ -0,0 +1,68 @@
+# frozen_string_literal: true
+
+#
+# Copyright (c) 2020-present, Blue Marble Payroll, LLC
+#
+# This source code is licensed under the MIT license found in the
+# LICENSE file in the root directory of this source tree.
+#
+
+module Burner
+  module Library
+    module Collection
+      # Take a register's value (an array of objects) and group the objects by the specified
+      # keys. It essentially creates a hash from an array. This is useful for creating an O(1)
+      # lookup which can then be used in conjunction with the Coalesce job for another array
+      # of data. It is worth noting that the resulting hash's values are singular objects and
+      # not arrays, unlike Ruby's Enumerable#group_by method.
+      #
+      # An example of this specific job:
+      #
+      #   input:  [{ id: 1, code: 'a' }, { id: 2, code: 'b' }]
+      #   keys:   [:code]
+      #   output: { ['a'] => { id: 1, code: 'a' }, ['b'] => { id: 2, code: 'b' } }
+      #
+      # Expected Payload[register] input: array of objects.
+      # Payload[register] output: hash.
+      class Group < JobWithRegister
+        attr_reader :keys, :resolver
+
+        def initialize(
+          name:,
+          keys: [],
+          register: DEFAULT_REGISTER,
+          separator: ''
+        )
+          super(name: name, register: register)
+
+          @keys     = Array(keys)
+          @resolver = Objectable.resolver(separator: separator.to_s)
+
+          raise ArgumentError, 'at least one key is required' if @keys.empty?
+
+          freeze
+        end
+
+        def perform(output, payload)
+          payload[register] = array(payload[register])
+          count             = payload[register].length
+
+          output.detail("Grouping based on key(s): #{keys} for #{count} record(s)")
+
+          grouped_records = payload[register].each_with_object({}) do |record, memo|
+            key       = make_key(record)
+            memo[key] = record
+          end
+
+          payload[register] = grouped_records
+        end
+
+        private
+
+        def make_key(record)
+          keys.map { |key| resolver.get(record, key) }
+        end
+      end
+    end
+  end
+end
lib/burner/library/collection/nested_aggregate.rb ADDED
@@ -0,0 +1,67 @@
+# frozen_string_literal: true
+
+#
+# Copyright (c) 2020-present, Blue Marble Payroll, LLC
+#
+# This source code is licensed under the MIT license found in the
+# LICENSE file in the root directory of this source tree.
+#
+
+module Burner
+  module Library
+    module Collection
+      # Iterate over a collection of objects, resolving the specified key on each object, then
+      # aggregating the resolved values together into one array. This derived array is set as
+      # the value for the payload's register. Leverage the key_mappings option to optionally
+      # copy down keys and values from outer to inner records. This job is particularly useful
+      # if you have nested arrays but wish to deal with each level/depth in the aggregate.
+      #
+      # Expected Payload[register] input: array of objects.
+      # Payload[register] output: array of objects.
+      class NestedAggregate < JobWithRegister
+        attr_reader :key, :key_mappings, :resolver
+
+        def initialize(name:, key:, key_mappings: [], register: DEFAULT_REGISTER, separator: '')
+          super(name: name, register: register)
+
+          raise ArgumentError, 'key is required' if key.to_s.empty?
+
+          @key          = key.to_s
+          @key_mappings = Modeling::KeyMapping.array(key_mappings)
+          @resolver     = Objectable.resolver(separator: separator.to_s)
+
+          freeze
+        end
+
+        def perform(output, payload)
+          records = array(payload[register])
+          count   = records.length
+
+          output.detail("Aggregating on key: #{key} for #{count} record(s)")
+
+          # Outer loop on parent records
+          payload[register] = records.each_with_object([]) do |record, memo|
+            inner_records = resolver.get(record, key)
+
+            # Inner loop on child records
+            array(inner_records).each do |inner_record|
+              memo << copy_key_mappings(record, inner_record)
+            end
+          end
+        end
+
+        private
+
+        def copy_key_mappings(source_record, destination_record)
+          key_mappings.each do |key_mapping|
+            value = resolver.get(source_record, key_mapping.from)
+
+            resolver.set(destination_record, key_mapping.to, value)
+          end
+
+          destination_record
+        end
+      end
+    end
+  end
+end
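
The comment above describes the shape transformation; here is equivalent logic hand-traced in plain Ruby (the 'order'/'lines' data is hypothetical, not gem API):

````ruby
orders = [
  { 'order_id' => 1, 'lines' => [{ 'sku' => 'a' }, { 'sku' => 'b' }] },
  { 'order_id' => 2, 'lines' => [{ 'sku' => 'c' }] }
]

# Equivalent of key: 'lines' with key_mappings: [{ from: 'order_id' }]: each inner
# line is emitted with its parent's order_id copied down.
flattened = orders.flat_map do |order|
  order['lines'].map { |line| line.merge('order_id' => order['order_id']) }
end

# => [{ 'sku' => 'a', 'order_id' => 1 }, { 'sku' => 'b', 'order_id' => 1 },
#     { 'sku' => 'c', 'order_id' => 2 }]
````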
lib/burner/library/io/exist.rb CHANGED
@@ -7,8 +7,6 @@
 # LICENSE file in the root directory of this source tree.
 #
 
-require_relative 'base'
-
 module Burner
   module Library
     module IO
@@ -17,13 +15,14 @@ module Burner
       #
      # Note: this does not use Payload#registers.
       class Exist < Job
-        attr_reader :path, :short_circuit
+        attr_reader :disk, :path, :short_circuit
 
-        def initialize(name:, path:, short_circuit: false)
+        def initialize(name:, path:, disk: {}, short_circuit: false)
           super(name: name)
 
           raise ArgumentError, 'path is required' if path.to_s.empty?
 
+          @disk          = Disks.make(disk)
           @path          = path.to_s
           @short_circuit = short_circuit || false
         end
@@ -31,7 +30,7 @@ module Burner
       def perform(output, payload)
         compiled_path = job_string_template(path, output, payload)
 
-        exists = File.exist?(compiled_path)
+        exists = disk.exist?(compiled_path)
         verb   = exists ? 'does' : 'does not'
 
         output.detail("The path: #{compiled_path} #{verb} exist")
lib/burner/library/io/base.rb → lib/burner/library/io/open_file_base.rb RENAMED
@@ -10,16 +10,20 @@
 module Burner
   module Library
     module IO
-      # Common configuration/code for all IO Job subclasses.
-      class Base < JobWithRegister
-        attr_reader :path
+      # Common configuration/code for all IO Job subclasses that open a file.
+      class OpenFileBase < JobWithRegister
+        attr_reader :binary, :disk, :path
 
-        def initialize(name:, path:, register: DEFAULT_REGISTER)
+        def initialize(name:, path:, binary: false, disk: {}, register: DEFAULT_REGISTER)
           super(name: name, register: register)
 
           raise ArgumentError, 'path is required' if path.to_s.empty?
 
-          @path = path.to_s
+          @binary = binary || false
+          @disk   = Disks.make(disk)
+          @path   = path.to_s
+
+          freeze
         end
       end
    end
lib/burner/library/io/read.rb CHANGED
@@ -7,7 +7,7 @@
 # LICENSE file in the root directory of this source tree.
 #
 
-require_relative 'base'
+require_relative 'open_file_base'
 
 module Burner
   module Library
@@ -16,29 +16,13 @@ module Burner
       #
       # Expected Payload[register] input: nothing.
       # Payload[register] output: contents of the specified file.
-      class Read < Base
-        attr_reader :binary
-
-        def initialize(name:, path:, binary: false, register: DEFAULT_REGISTER)
-          super(name: name, path: path, register: register)
-
-          @binary = binary || false
-
-          freeze
-        end
-
+      class Read < OpenFileBase
        def perform(output, payload)
          compiled_path = job_string_template(path, output, payload)
 
          output.detail("Reading: #{compiled_path}")
 
-          payload[register] = File.open(compiled_path, mode, &:read)
-        end
-
-        private
-
-        def mode
-          binary ? 'rb' : 'r'
+          payload[register] = disk.read(compiled_path, binary: binary)
        end
      end
    end
lib/burner/library/io/write.rb CHANGED
@@ -7,7 +7,7 @@
 # LICENSE file in the root directory of this source tree.
 #
 
-require_relative 'base'
+require_relative 'open_file_base'
 
 module Burner
   module Library
@@ -16,54 +16,27 @@ module Burner
       #
       # Expected Payload[register] input: anything.
       # Payload[register] output: whatever was passed in.
-      class Write < Base
-        attr_reader :binary
-
-        def initialize(name:, path:, binary: false, register: DEFAULT_REGISTER)
-          super(name: name, path: path, register: register)
-
-          @binary = binary || false
-
-          freeze
-        end
-
+      class Write < OpenFileBase
         def perform(output, payload)
-          compiled_path = job_string_template(path, output, payload)
-
-          ensure_directory_exists(output, compiled_path)
+          logical_filename  = job_string_template(path, output, payload)
+          physical_filename = nil
 
-          output.detail("Writing: #{compiled_path}")
+          output.detail("Writing: #{logical_filename}")
 
           time_in_seconds = Benchmark.measure do
-            File.open(compiled_path, mode) { |io| io.write(payload[register]) }
+            physical_filename = disk.write(logical_filename, payload[register], binary: binary)
           end.real
 
+          output.detail("Wrote to: #{physical_filename}")
+
           side_effect = SideEffects::WrittenFile.new(
-            logical_filename: compiled_path,
-            physical_filename: compiled_path,
+            logical_filename: logical_filename,
+            physical_filename: physical_filename,
             time_in_seconds: time_in_seconds
           )
 
           payload.add_side_effect(side_effect)
         end
-
-        private
-
-        def ensure_directory_exists(output, compiled_path)
-          dirname = File.dirname(compiled_path)
-
-          return if File.exist?(dirname)
-
-          output.detail("Outer directory does not exist, creating: #{dirname}")
-
-          FileUtils.mkdir_p(dirname)
-
-          nil
-        end
-
-        def mode
-          binary ? 'wb' : 'w'
-        end
      end
    end
  end
lib/burner/modeling.rb CHANGED
@@ -10,4 +10,5 @@
 require_relative 'modeling/attribute'
 require_relative 'modeling/attribute_renderer'
 require_relative 'modeling/key_index_mapping'
+require_relative 'modeling/key_mapping'
 require_relative 'modeling/validations'
lib/burner/modeling/attribute.rb CHANGED
@@ -13,19 +13,37 @@ module Burner
     # to set the key to. The transformers that can be passed in can be any Realize::Transformers
     # subclasses. For more information, see the Realize library at:
     # https://github.com/bluemarblepayroll/realize
+    #
+    # Note that if explicit: true is set then no transformers will be automatically injected.
+    # If explicit is not true (the default) then a resolve transformer is automatically
+    # injected at the beginning of the transformer pipeline, so each attribute starts from the
+    # key's resolved value; pass explicit: true to start from the record itself instead.
     class Attribute
       acts_as_hashable
 
+      RESOLVE_TYPE = 'r/value/resolve'
+
       attr_reader :key, :transformers
 
-      def initialize(key:, transformers: [])
+      def initialize(key:, explicit: false, transformers: [])
         raise ArgumentError, 'key is required' if key.to_s.empty?
 
         @key          = key.to_s
-        @transformers = Realize::Transformers.array(transformers)
+        @transformers = base_transformers(explicit) + Realize::Transformers.array(transformers)
 
         freeze
       end
+
+      private
+
+      # When explicit, this will return an empty array.
+      # When not explicit, this will return an array with a basic transformer that simply
+      # gets the key's value. This establishes a good majority base case.
+      def base_transformers(explicit)
+        return [] if explicit
+
+        [Realize::Transformers.make(type: RESOLVE_TYPE, key: key)]
+      end
    end
  end
end
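
A sketch of the injection behavior above: the two attributes below should end up with equivalent transformer chains, since the first gets the resolve step injected automatically (this assumes acts_as_hashable's standard `.make` constructor):

````ruby
implicit = Burner::Modeling::Attribute.make(key: 'first_name')

explicit = Burner::Modeling::Attribute.make(
  key: 'first_name',
  explicit: true,
  transformers: [{ type: 'r/value/resolve', key: 'first_name' }]
)
````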
lib/burner/modeling/key_mapping.rb ADDED
@@ -0,0 +1,29 @@
+# frozen_string_literal: true
+
+#
+# Copyright (c) 2020-present, Blue Marble Payroll, LLC
+#
+# This source code is licensed under the MIT license found in the
+# LICENSE file in the root directory of this source tree.
+#
+
+module Burner
+  module Modeling
+    # Generic mapping from a key to another key. The argument 'to' is optional
+    # and if it is blank then the 'from' value will be used for the 'to' as well.
+    class KeyMapping
+      acts_as_hashable
+
+      attr_reader :from, :to
+
+      def initialize(from:, to: '')
+        raise ArgumentError, 'from is required' if from.to_s.empty?
+
+        @from = from.to_s
+        @to   = to.to_s.empty? ? @from : to.to_s
+
+        freeze
+      end
+    end
+  end
+end
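
A quick sketch of the 'to' defaulting described above (again assuming acts_as_hashable's `.make`):

````ruby
Burner::Modeling::KeyMapping.make(from: 'id').to            # => 'id'
Burner::Modeling::KeyMapping.make(from: 'id', to: 'key').to # => 'key'
````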
lib/burner/pipeline.rb CHANGED
@@ -14,7 +14,8 @@ require_relative 'step'
 
 module Burner
   # The root package. A Pipeline contains the job configurations along with the steps. The steps
-  # referens jobs and tell you the order of the jobs to run.
+  # reference jobs and tell you the order of the jobs to run. If steps is nil then all jobs
+  # will execute in their declared order.
   class Pipeline
     acts_as_hashable
 
@@ -23,14 +24,16 @@ module Burner
 
     attr_reader :steps
 
-    def initialize(jobs: [], steps: [])
+    def initialize(jobs: [], steps: nil)
       jobs = Jobs.array(jobs)
 
       assert_unique_job_names(jobs)
 
       jobs_by_name = jobs.map { |job| [job.name, job] }.to_h
 
-      @steps = Array(steps).map do |step_name|
+      step_names = steps ? Array(steps) : jobs_by_name.keys
+
+      @steps = step_names.map do |step_name|
         job = jobs_by_name[step_name.to_s]
 
         raise JobNotFoundError, "#{step_name} was not declared as a job" unless job
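
With this change, a minimal pipeline can omit steps entirely. A sketch (the job configuration is illustrative, and it assumes Pipeline#execute's default output/payload arguments):

````ruby
pipeline = {
  jobs: [
    { name: :read, type: 'b/io/read', path: 'input.json' },
    { name: :parse, type: 'b/deserialize/json' }
  ]
  # no steps key: both jobs run in declared order
}

Burner::Pipeline.make(pipeline).execute
````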
lib/burner/version.rb CHANGED
@@ -8,5 +8,5 @@
 #
 
 module Burner
-  VERSION = '1.0.0'
+  VERSION = '1.3.0'
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: burner
 version: !ruby/object:Gem::Version
-  version: 1.0.0
+  version: 1.3.0
 platform: ruby
 authors:
 - Matthew Ruggio
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2020-11-05 00:00:00.000000000 Z
+date: 2020-12-12 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: acts_as_hashable
@@ -220,13 +220,18 @@ files:
 - exe/burner
 - lib/burner.rb
 - lib/burner/cli.rb
+- lib/burner/disks.rb
+- lib/burner/disks/local.rb
 - lib/burner/job.rb
 - lib/burner/job_with_register.rb
 - lib/burner/jobs.rb
 - lib/burner/library.rb
 - lib/burner/library/collection/arrays_to_objects.rb
+- lib/burner/library/collection/coalesce.rb
 - lib/burner/library/collection/concatenate.rb
 - lib/burner/library/collection/graph.rb
+- lib/burner/library/collection/group.rb
+- lib/burner/library/collection/nested_aggregate.rb
 - lib/burner/library/collection/objects_to_arrays.rb
 - lib/burner/library/collection/shift.rb
 - lib/burner/library/collection/transform.rb
@@ -237,8 +242,8 @@ files:
 - lib/burner/library/deserialize/json.rb
 - lib/burner/library/deserialize/yaml.rb
 - lib/burner/library/echo.rb
-- lib/burner/library/io/base.rb
 - lib/burner/library/io/exist.rb
+- lib/burner/library/io/open_file_base.rb
 - lib/burner/library/io/read.rb
 - lib/burner/library/io/write.rb
 - lib/burner/library/nothing.rb
@@ -252,6 +257,7 @@ files:
 - lib/burner/modeling/attribute.rb
 - lib/burner/modeling/attribute_renderer.rb
 - lib/burner/modeling/key_index_mapping.rb
+- lib/burner/modeling/key_mapping.rb
 - lib/burner/modeling/validations.rb
 - lib/burner/modeling/validations/base.rb
 - lib/burner/modeling/validations/blank.rb