burner 1.1.0 → 1.2.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: f1dd679013fe833b143b12340e68d5598ce4a3d53a6d5fcc6dc8c9df02bc7348
- data.tar.gz: e347add004c9846a9d7e083d4d7f35b6aa664f0eb340b3bc0eceffcd3c0875e5
+ metadata.gz: a2c3e9af2bbfbf80cd5c5884e05b1282f52871566103baf4ca1b599fc76af8ca
+ data.tar.gz: e7c2eb8a00086e1937a4d9e39383d38a60c5ebaa5efa1d67da4e6af9741261cd
  SHA512:
- metadata.gz: a92860ad5611e298eb1f2d1acaca9271324402ee375a2d842ac32a41f89c23a12abd8913596ee2fbe7fd59e071fd1b7ab77bbfc5317c57d2ca342dc75f6091a2
- data.tar.gz: cb529e0a81f7a3bef34e9966463ca8f71b2025101ca4f6a6bd0cef03396fcb2a6ceb6bad7681079e4edc0630f21e5d6c19f7b6489fb2d198b716a93dfd5c3938
+ metadata.gz: e885c7bf710f613323bbc3fc49162815632d7785aa6e753f3cd53bb04d2b53e74f92be549f75b6997d4dedce5c2dabe01fd032796a6d6e2fadfd6e84c633b168
+ data.tar.gz: e09d491b3fb7ef1b79932ac6094adedd74f806170a2a11aa1ff1eda838b6bf76cec1684dbfaba12d07a6e071062d432bcd5f3939f275830fc13fb71cabb39f8c
@@ -1,3 +1,13 @@
+ # 1.2.0 (November 25th, 2020)
+
+ #### Enhancements:
+
+ * Allow for a pipeline to be configured with null steps. When steps is null, all jobs will simply execute in positional order.
+ * Allow Collection::Transform job attributes to implicitly start from a resolve transformer. `explicit: true` can be passed in as an option in case the desire is to begin from the record and not a specific value.
+
+ #### Added Jobs:
+
+ * b/collection/nested_aggregate
  # 1.1.0 (November 16, 2020)
 
  Added Jobs:
data/README.md CHANGED
@@ -91,38 +91,20 @@ Some notes:
  * The job's ID can be accessed using the `__id` key.
  * The current job's payload value can be accessed using the `__value` key.
  * Jobs can be re-used (just like the output_id and output_value jobs).
+ * If steps is nil then all jobs will execute in their declared order.
 
  ### Capturing Feedback / Output
 
  By default, output will be emitted to `$stdout`. You can add or change listeners by passing in optional values into Pipeline#execute. For example, say we wanted to capture the output from our json-to-yaml example:
 
  ````ruby
- class StringOut
-   def initialize
-     @io = StringIO.new
-   end
-
-   def puts(msg)
-     tap { io.write("#{msg}\n") }
-   end
-
-   def read
-     io.rewind
-     io.read
-   end
-
-   private
-
-   attr_reader :io
- end
-
- string_out = StringOut.new
- output = Burner::Output.new(outs: string_out)
- payload = Burner::Payload.new(params: params)
+ io = StringIO.new
+ output = Burner::Output.new(outs: io)
+ payload = Burner::Payload.new(params: params)
 
  Burner::Pipeline.make(pipeline).execute(output: output, payload: payload)
 
- log = string_out.read
+ log = io.string
  ````
 
  The value of `log` should now look similar to:
@@ -238,9 +220,10 @@ This library only ships with very basic, rudimentary jobs that are meant to just
  * **b/collection/concatenate** [from_registers, to_register]: Concatenate each from_register's value and place the newly concatenated array into the to_register. Note: this does not do any deep copying and should be assumed it is shallow copying all objects.
  * **b/collection/graph** [config, key, register]: Use [Hashematics](https://github.com/bluemarblepayroll/hashematics) to turn a flat array of objects into a deeply nested object tree.
  * **b/collection/group** [keys, register, separator]: Take a register's value (an array of objects) and group the objects by the specified keys.
+ * **b/collection/nested_aggregate** [register, key_mappings, key, separator]: Traverse a set of objects, resolving key's value for each object, optionally copying down key_mappings to the child records, then merging all the inner records together.
  * **b/collection/objects_to_arrays** [mappings, register]: Convert an array of objects to an array of arrays.
  * **b/collection/shift** [amount, register]: Remove the first N number of elements from an array.
- * **b/collection/transform** [attributes, exclusive, separator, register]: Iterate over all objects and transform each key per the attribute transformers specifications. If exclusive is set to false then the current object will be overridden/merged. Separator can also be set for key path support. This job uses [Realize](https://github.com/bluemarblepayroll/realize), which provides its own extendable value-transformation pipeline.
+ * **b/collection/transform** [attributes, exclusive, separator, register]: Iterate over all objects and transform each key per the attribute transformers specifications. If exclusive is set to false then the current object will be overridden/merged. Separator can also be set for key path support. This job uses [Realize](https://github.com/bluemarblepayroll/realize), which provides its own extendable value-transformation pipeline. If an attribute is not set with `explicit: true` then it will automatically start from the key's value within the record. If `explicit: true` is set, then it will start from the record itself.
  * **b/collection/unpivot** [pivot_set, register]: Take an array of objects and unpivot specific sets of keys into rows. Under the hood it uses [HashMath's Unpivot class](https://github.com/bluemarblepayroll/hash_math#unpivot-hash-key-coalescence-and-row-extrapolation).
  * **b/collection/validate** [invalid_register, join_char, message_key, register, separator, validations]: Take an array of objects, run it through each declared validator, and split the objects into two registers. The valid objects will be split into the current register while the invalid ones will go into the invalid_register as declared. Optional arguments, join_char and message_key, help determine the compiled error messages. The separator option can be utilized to use dot-notation for validating keys. See each validation's options by viewing their classes within the `lib/modeling/validations` directory.
  * **b/collection/values** [include_keys, register]: Take an array of objects and call `#values` on each object. If include_keys is true (it is false by default), then call `#keys` on the first object and inject that as a "header" object.
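The README change above replaces the hand-rolled `StringOut` class with Ruby's built-in `StringIO`, which already behaves like an IO sink (it responds to `#puts`) and exposes its buffer via `#string` without any rewinding. A minimal, Burner-free sketch of the capture pattern:

```ruby
require 'stringio'

# An in-memory IO object stands in for $stdout; the consumer only needs
# the object to respond to #puts.
io = StringIO.new

io.puts('pipeline started')
io.puts('pipeline finished')

# StringIO#string returns the entire buffer in one call, which is why the
# custom StringOut#read (rewind + read) helper became unnecessary.
log = io.string
```

Each `#puts` appends a newline, so `log` ends up as `"pipeline started\npipeline finished\n"`.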
@@ -27,6 +27,7 @@ module Burner
  register 'b/collection/concatenate', Library::Collection::Concatenate
  register 'b/collection/graph', Library::Collection::Graph
  register 'b/collection/group', Library::Collection::Group
+ register 'b/collection/nested_aggregate', Library::Collection::NestedAggregate
  register 'b/collection/objects_to_arrays', Library::Collection::ObjectsToArrays
  register 'b/collection/shift', Library::Collection::Shift
  register 'b/collection/transform', Library::Collection::Transform
@@ -18,6 +18,7 @@ require_relative 'library/collection/coalesce'
  require_relative 'library/collection/concatenate'
  require_relative 'library/collection/graph'
  require_relative 'library/collection/group'
+ require_relative 'library/collection/nested_aggregate'
  require_relative 'library/collection/objects_to_arrays'
  require_relative 'library/collection/shift'
  require_relative 'library/collection/transform'
@@ -0,0 +1,67 @@
+ # frozen_string_literal: true
+
+ #
+ # Copyright (c) 2020-present, Blue Marble Payroll, LLC
+ #
+ # This source code is licensed under the MIT license found in the
+ # LICENSE file in the root directory of this source tree.
+ #
+
+ module Burner
+   module Library
+     module Collection
+       # Iterate over a collection of objects, calling key on each object, then aggregating the
+       # returns of key together into one array. This new derived array will be set as the value
+       # for the payload's register. Leverage the key_mappings option to optionally copy down
+       # keys and values from outer to inner records. This job is particularly useful
+       # if you have nested arrays but wish to deal with each level/depth in the aggregate.
+       #
+       # Expected Payload[register] input: array of objects.
+       # Payload[register] output: array of objects.
+       class NestedAggregate < JobWithRegister
+         attr_reader :key, :key_mappings, :resolver
+
+         def initialize(name:, key:, key_mappings: [], register: DEFAULT_REGISTER, separator: '')
+           super(name: name, register: register)
+
+           raise ArgumentError, 'key is required' if key.to_s.empty?
+
+           @key = key.to_s
+           @key_mappings = Modeling::KeyMapping.array(key_mappings)
+           @resolver = Objectable.resolver(separator: separator.to_s)
+
+           freeze
+         end
+
+         def perform(output, payload)
+           records = array(payload[register])
+           count = records.length
+
+           output.detail("Aggregating on key: #{key} for #{count} record(s)")
+
+           # Outer loop on parent records
+           payload[register] = records.each_with_object([]) do |record, memo|
+             inner_records = resolver.get(record, key)
+
+             # Inner loop on child records
+             array(inner_records).each do |inner_record|
+               memo << copy_key_mappings(record, inner_record)
+             end
+           end
+         end
+
+         private
+
+         def copy_key_mappings(source_record, destination_record)
+           key_mappings.each do |key_mapping|
+             value = resolver.get(source_record, key_mapping.from)
+
+             resolver.set(destination_record, key_mapping.to, value)
+           end
+
+           destination_record
+         end
+       end
+     end
+   end
+ end
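The heart of the new job is the nested flatten-and-copy loop in `perform`. Here is the same idea as a plain-Ruby sketch, with hypothetical order/line-item data and simple hash access standing in for Objectable's dot-notation resolver:

```ruby
# Hypothetical parent records, each holding child records under 'line_items'.
orders = [
  { 'id' => 1, 'line_items' => [{ 'sku' => 'a' }, { 'sku' => 'b' }] },
  { 'id' => 2, 'line_items' => [{ 'sku' => 'c' }] }
]

key          = 'line_items'
key_mappings = [{ from: 'id', to: 'order_id' }] # copy parent id down as order_id

# Outer loop on parents, inner loop on children, mirroring NestedAggregate#perform.
aggregated = orders.each_with_object([]) do |record, memo|
  Array(record[key]).each do |inner_record|
    key_mappings.each { |m| inner_record[m[:to]] = record[m[:from]] }
    memo << inner_record
  end
end

aggregated
# => [{ 'sku' => 'a', 'order_id' => 1 },
#     { 'sku' => 'b', 'order_id' => 1 },
#     { 'sku' => 'c', 'order_id' => 2 }]
```

Note that, like the job itself, this is shallow: the child hashes in `aggregated` are the same objects that lived inside each parent, now mutated with the copied-down keys.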
@@ -13,19 +13,37 @@ module Burner
      # to set the key to. The transformers that can be passed in can be any Realize::Transformers
      # subclasses. For more information, see the Realize library at:
      # https://github.com/bluemarblepayroll/realize
+     #
+     # Note that if explicit: true is set then no transformers will be automatically injected.
+     # If explicit is not true (default) then a resolve transformer will be automatically injected
+     # at the beginning of the transformer pipeline. This default covers the common case, where
+     # a transformation begins from a specific key's value and not from the record as a whole.
      class Attribute
        acts_as_hashable
 
+       RESOLVE_TYPE = 'r/value/resolve'
+
        attr_reader :key, :transformers
 
-       def initialize(key:, transformers: [])
+       def initialize(key:, explicit: false, transformers: [])
          raise ArgumentError, 'key is required' if key.to_s.empty?
 
          @key = key.to_s
-         @transformers = Realize::Transformers.array(transformers)
+         @transformers = base_transformers(explicit) + Realize::Transformers.array(transformers)
 
          freeze
        end
+
+       private
+
+       # When explicit, this will return an empty array.
+       # When not explicit, this will return an array with a basic transformer that simply
+       # gets the key's value. This establishes a good majority base case.
+       def base_transformers(explicit)
+         return [] if explicit
+
+         [Realize::Transformers.make(type: RESOLVE_TYPE, key: key)]
+       end
      end
    end
  end
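The `explicit` flag boils down to one question: what value does the attribute's transformer chain start from? A plain-Ruby sketch of just that decision (`starting_value` is a hypothetical helper written for illustration, not part of Burner or Realize):

```ruby
# Mimics the effect of Attribute#base_transformers: when not explicit, an
# injected resolve step makes the chain begin with the key's value; when
# explicit, the chain begins with the record itself.
def starting_value(record, key, explicit:)
  explicit ? record : record[key]
end

record = { 'name' => 'burner', 'version' => '1.2.0' }

starting_value(record, 'name', explicit: false) # => "burner"
starting_value(record, 'name', explicit: true)  # => the whole record hash
```

Any transformers passed in by the caller then run after this starting point, which is why `explicit: true` is the right choice when a transformation needs to cross-map several keys rather than operate on one value.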
@@ -9,18 +9,18 @@
 
  module Burner
    module Modeling
-     # Generic mapping from a key to another key.
+     # Generic mapping from a key to another key. The argument 'to' is optional
+     # and if it is blank then the 'from' value will be used for the 'to' as well.
      class KeyMapping
        acts_as_hashable
 
        attr_reader :from, :to
 
-       def initialize(from:, to:)
+       def initialize(from:, to: '')
          raise ArgumentError, 'from is required' if from.to_s.empty?
-         raise ArgumentError, 'to is required' if to.to_s.empty?
 
          @from = from.to_s
-         @to = to.to_s
+         @to = to.to_s.empty? ? @from : to.to_s
 
          freeze
        end
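The new defaulting rule (a blank `to` falls back to `from`) can be exercised in isolation. This sketch copies the constructor logic from the diff, minus the acts_as_hashable mixin:

```ruby
# Generic key-to-key mapping; 'to' defaults to 'from' when omitted or blank.
class KeyMapping
  attr_reader :from, :to

  def initialize(from:, to: '')
    raise ArgumentError, 'from is required' if from.to_s.empty?

    @from = from.to_s
    @to   = to.to_s.empty? ? @from : to.to_s

    freeze
  end
end

KeyMapping.new(from: 'id').to                 # => "id" (falls back to from)
KeyMapping.new(from: 'id', to: 'order_id').to # => "order_id"
```

This keeps configuration terse for the common case where a key is copied down under its own name, as in the nested_aggregate job's key_mappings.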
@@ -14,7 +14,8 @@ require_relative 'step'
 
  module Burner
    # The root package. A Pipeline contains the job configurations along with the steps. The steps
-   # referens jobs and tell you the order of the jobs to run.
+   # reference jobs and tell you the order of the jobs to run. If steps is nil then all jobs
+   # will execute in their declared order.
    class Pipeline
      acts_as_hashable
 
@@ -23,14 +24,16 @@ module Burner
 
      attr_reader :steps
 
-     def initialize(jobs: [], steps: [])
+     def initialize(jobs: [], steps: nil)
        jobs = Jobs.array(jobs)
 
        assert_unique_job_names(jobs)
 
        jobs_by_name = jobs.map { |job| [job.name, job] }.to_h
 
-       @steps = Array(steps).map do |step_name|
+       step_names = steps ? Array(steps) : jobs_by_name.keys
+
+       @steps = step_names.map do |step_name|
          job = jobs_by_name[step_name.to_s]
 
          raise JobNotFoundError, "#{step_name} was not declared as a job" unless job
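The nil-steps fallback is a single conditional over the declared job names, relying on Ruby hashes preserving insertion order. A sketch of just the resolution logic (symbols stand in for Job objects; `resolve_steps` is illustrative, not a Burner API):

```ruby
# Jobs keyed by name, in declaration order (Ruby hashes preserve insertion order).
jobs_by_name = { 'read' => :read_job, 'transform' => :transform_job, 'write' => :write_job }

# Mirrors Pipeline#initialize: nil steps fall back to all declared job names.
resolve_steps = ->(steps) { steps ? Array(steps) : jobs_by_name.keys }

resolve_steps.call(nil)            # => ["read", "transform", "write"] (declared order)
resolve_steps.call(%w[write read]) # => ["write", "read"] (explicit order wins)
```

Note the guard is a truthiness check, not an emptiness check: passing an explicit empty array still yields a pipeline with zero steps, whereas omitting steps entirely runs everything.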
@@ -8,5 +8,5 @@
  #
 
  module Burner
-   VERSION = '1.1.0'
+   VERSION = '1.2.0'
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: burner
  version: !ruby/object:Gem::Version
-   version: 1.1.0
+   version: 1.2.0
  platform: ruby
  authors:
  - Matthew Ruggio
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2020-11-16 00:00:00.000000000 Z
+ date: 2020-11-25 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: acts_as_hashable
@@ -229,6 +229,7 @@ files:
  - lib/burner/library/collection/concatenate.rb
  - lib/burner/library/collection/graph.rb
  - lib/burner/library/collection/group.rb
+ - lib/burner/library/collection/nested_aggregate.rb
  - lib/burner/library/collection/objects_to_arrays.rb
  - lib/burner/library/collection/shift.rb
  - lib/burner/library/collection/transform.rb