pallets 0.3.0 → 0.4.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
- SHA256:
-   metadata.gz: 7e0544b5ac0bf2eb5cbf4aa0b1795e4588df46485ed36e7bc5560d26cf47684a
-   data.tar.gz: 95a3e7f6ca57048773ba2f2ac72304090c5899b562766e2e99e6ad1911fcc103
+ SHA1:
+   metadata.gz: 6bd6ebdb788c39b7d3cb434b60f94d927dacd47c
+   data.tar.gz: b463178c06885308a2192bbcf3d549c52391acbd
  SHA512:
-   metadata.gz: 296c3ce03c89e6081e707014d28d8c48d65d22d6adac4ee5534551f39853eba171d12d143b52fa271857266342f12a965639749fd436126a66118fc911f47309
-   data.tar.gz: fccac5f172c6f8b03bf4409cd37592efc81f93d951ac1e5ac0163a6e74deabdecb607f3b5d9b2a5e7e11ab8c2e7fbecb7aa2b48c8544c18d0041e4ebd2b59467
+   metadata.gz: 7e976084d28bba01327e0a8c41b8a948e27f6b3041d6a3a31665825d66b57225867ec410e9bf4f24afcf13241f41b855e1150601f37de37425fc72ee9582b5a4
+   data.tar.gz: c05e17653dac7716b9f52958b0bfb295517d614f5f97a00ffb91ad7100e83fbbf20f526f08f8315a1344829d01bd992c4920a7b0881aec4fea9b68eb0d259ff3
data/CHANGELOG.md CHANGED
@@ -6,6 +6,24 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
  ## [Unreleased]
 
+ ## [0.4.0] - 2019-04-07
+ ### Added
+ - give up a workflow before it finishes by returning `false` in any of its tasks (#25)
+ - jobs have a JID (#30)
+ - Rails support (#27)
+
+ ### Changed
+ - contexts are serialized and accept basic Ruby types as values (#24)
+ - workflow tasks are defined using classes (#26)
+ - some job and Redis keys have been renamed (#28)
+ - job retry backoff has a random component (#32)
+ - missing dependencies raise a `WorkflowError` (#31)
+ - the Redis backend uses `EVALSHA` for Lua scripts (#34)
+ - the `pool_size` configuration is inferred from `concurrency` (#33)
+
+ ### Removed
+ - backend namespaces (#28)
+
  ## [0.3.0] - 2019-02-08
  ### Added
  - shared contexts (#9)
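
The most visible 0.4.0 change is the first `Added` entry: a task can now veto the rest of its workflow. A minimal sketch of the behavior (the workflow context key and task name here are made up): returning `false` from `run` makes the worker give the job up instead of scheduling dependent tasks, while any other return value, including `nil`, still counts as success.

```ruby
require 'pallets'

class ChargeCard < Pallets::Task
  def run
    # Returning false gives up the whole workflow: no retries, and no
    # dependent tasks are enqueued. Raising, by contrast, triggers retries.
    return false unless context['card_valid']
    # ...charge the card...
  end
end
```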
data/README.md CHANGED
@@ -11,10 +11,10 @@ Toy workflow engine, written in Ruby
  require 'pallets'
 
  class MyWorkflow < Pallets::Workflow
-   task :foo
-   task :bar => :foo
-   task :baz => :foo
-   task :qux => [:bar, :baz]
+   task Foo
+   task Bar => Foo
+   task Baz => Foo
+   task Qux => [Bar, Baz]
  end
 
  class Foo < Pallets::Task
@@ -38,6 +38,7 @@ That's basically it! Curious for more? Read on or [check the examples](examples/
  * retries failed tasks
  * Redis backend out of the box
  * JSON and msgpack serializers out of the box
+ * Rails support
  * beautiful DSL
  * convention over configuration
  * thoroughly tested
@@ -74,6 +75,49 @@ end
 
  For the complete set of options, see [pallets/configuration.rb](lib/pallets/configuration.rb)
 
+ ## Cookbook
+
+ ### DSL
+
+ Pallets is designed for developers' happiness. Its DSL aims to be as beautiful
+ and readable as possible, while still enabling complex scenarios to be performed.
+
+ ```ruby
+ # All workflows must subclass Pallets::Workflow
+ class WashClothes < Pallets::Workflow
+   # The simplest task
+   task BuyDetergent
+
+   # Another task; since it has no dependencies, it will be executed in parallel
+   # with BuyDetergent
+   # TIP: Use a String argument when the task is not _yet_ loaded
+   task 'BuySoftener'
+
+   # We wouldn't do this in real life, but we use it to showcase our first dependency!
+   task DiluteDetergent => BuyDetergent
+
+   # We're getting more complex here! This is the alternate way of defining
+   # dependencies (which can be several, by the way!). Choose the style that fits
+   # you best
+   task TurnOnWashingMachine, depends_on: [BuyDetergent, 'BuySoftener']
+
+   # Specify how many times a task is allowed to fail. If max_failures is reached,
+   # the task is given up
+   task SelectProgram => TurnOnWashingMachine, max_failures: 2
+ end
+
+ # Every task must be a subclass of Pallets::Task
+ class BuyDetergent < Pallets::Task
+   # Tasks must implement this method; here you can define whatever rocket science
+   # your task needs to perform!
+   def run
+     # ...do whatever...
+   end
+ end
+
+ # We're omitting the other task definitions for now; you shouldn't!
+ ```
+
  ## Motivation
 
  The main reason for Pallets' existence was the need for a fast, simple and reliable workflow engine, one that is easily extensible with various backends and serializers, one that does not lose your data and one that is intelligent enough to concurrently schedule a workflow's tasks.
data/examples/config_savvy.rb CHANGED
@@ -26,8 +26,8 @@ Pallets.configure do |c|
  end
 
  class ConfigSavvy < Pallets::Workflow
-   task :volatile
-   task :success => :volatile
+   task 'Volatile'
+   task 'Success' => 'Volatile'
  end
 
  class Volatile < Pallets::Task
data/examples/do_groceries.rb CHANGED
@@ -1,22 +1,17 @@
  require 'pallets'
 
- Pallets.configure do |c|
-   c.backend_args = { url: 'redis://127.0.0.1:6379/13' }
- end
-
  class DoGroceries < Pallets::Workflow
-   task :enter_shop
-   task :get_shopping_cart => :enter_shop
-   task :put_milk => :get_shopping_cart
-   task :put_bread => :get_shopping_cart
-   task :pay => [:put_milk, :put_bread]
-   task :go_home => :pay
+   task 'EnterShop'
+   task 'GetShoppingCart' => 'EnterShop'
+   task 'PutMilk' => 'GetShoppingCart'
+   task 'PutBread' => 'GetShoppingCart'
+   task 'Pay' => ['PutMilk', 'PutBread']
+   task 'GoHome' => 'Pay'
  end
 
  class EnterShop < Pallets::Task
    def run
      puts "Entering #{context['shop_name']}"
-     raise 'Cannot enter shop!' if (context['i'].to_i % 10).zero?
    end
  end
 
@@ -36,7 +31,6 @@ end
 
  class PutBread < Pallets::Task
    def run
-     raise 'Out of bread!' if (context['i'].to_i % 30).zero?
      puts "Got the bread"
    end
  end
@@ -45,7 +39,6 @@ class Pay < Pallets::Task
    def run
      puts "Paying by #{context['pay_by']}"
      sleep 2
-     raise 'Payment failed!' if (context['i'].to_i % 100).zero?
    end
  end
@@ -59,4 +52,4 @@ class GoHome < Pallets::Task
    end
  end
 
- 1_000.times { |i| DoGroceries.new(shop_name: 'Pallet Shop', pay_by: :card, i: i).run }
+ DoGroceries.new(shop_name: 'Pallet Shop', pay_by: :card).run
data/examples/hello_world.rb CHANGED
@@ -1,7 +1,7 @@
  require 'pallets'
 
  class HelloWorld < Pallets::Workflow
-   task :echo
+   task 'Echo'
  end
 
  class Echo < Pallets::Task
data/lib/pallets.rb CHANGED
@@ -34,7 +34,6 @@ module Pallets
    @backend ||= begin
      cls = Pallets::Util.constantize("Pallets::Backends::#{configuration.backend.capitalize}")
      cls.new(
-       namespace: configuration.namespace,
        blocking_timeout: configuration.blocking_timeout,
        failed_job_lifespan: configuration.failed_job_lifespan,
        job_timeout: configuration.job_timeout,
data/lib/pallets/backends/redis.rb CHANGED
@@ -3,50 +3,49 @@ require 'redis'
  module Pallets
    module Backends
      class Redis < Base
-       def initialize(namespace:, blocking_timeout:, failed_job_lifespan:, job_timeout:, pool_size:, **options)
-         @namespace = namespace
+       QUEUE_KEY = 'queue'
+       RELIABILITY_QUEUE_KEY = 'reliability-queue'
+       RELIABILITY_SET_KEY = 'reliability-set'
+       RETRY_SET_KEY = 'retry-set'
+       GIVEN_UP_SET_KEY = 'given-up-set'
+       WORKFLOW_QUEUE_KEY = 'workflow-queue:%s'
+       CONTEXT_KEY = 'context:%s'
+       REMAINING_KEY = 'remaining:%s'
+
+       def initialize(blocking_timeout:, failed_job_lifespan:, job_timeout:, pool_size:, **options)
          @blocking_timeout = blocking_timeout
         @failed_job_lifespan = failed_job_lifespan
         @job_timeout = job_timeout
         @pool = Pallets::Pool.new(pool_size) { ::Redis.new(options) }
 
-         @queue_key = "#{namespace}:queue"
-         @reliability_queue_key = "#{namespace}:reliability-queue"
-         @reliability_set_key = "#{namespace}:reliability-set"
-         @retry_set_key = "#{namespace}:retry-set"
-         @given_up_set_key = "#{namespace}:given-up-set"
-         @workflow_key = "#{namespace}:workflows:%s"
-         @context_key = "#{namespace}:contexts:%s"
-         @eta_key = "#{namespace}:etas:%s"
-
         register_scripts
       end
 
       def pick
         @pool.execute do |client|
-           job = client.brpoplpush(@queue_key, @reliability_queue_key, timeout: @blocking_timeout)
+           job = client.brpoplpush(QUEUE_KEY, RELIABILITY_QUEUE_KEY, timeout: @blocking_timeout)
           if job
             # We store the job's timeout so we know when to retry jobs that are
             # still on the reliability queue. We do this separately since there is
             # no other way to atomically BRPOPLPUSH from the main queue to a
             # sorted set
-             client.zadd(@reliability_set_key, Time.now.to_f + @job_timeout, job)
+             client.zadd(RELIABILITY_SET_KEY, Time.now.to_f + @job_timeout, job)
           end
           job
         end
       end
 
-       def get_context(workflow_id)
+       def get_context(wfid)
         @pool.execute do |client|
-           client.hgetall(@context_key % workflow_id)
+           client.hgetall(CONTEXT_KEY % wfid)
         end
       end
 
-       def save(workflow_id, job, context_buffer)
+       def save(wfid, job, context_buffer)
         @pool.execute do |client|
-           client.eval(
+           client.evalsha(
             @scripts['save'],
-             [@workflow_key % workflow_id, @queue_key, @reliability_queue_key, @reliability_set_key, @context_key % workflow_id, @eta_key % workflow_id],
+             [WORKFLOW_QUEUE_KEY % wfid, QUEUE_KEY, RELIABILITY_QUEUE_KEY, RELIABILITY_SET_KEY, CONTEXT_KEY % wfid, REMAINING_KEY % wfid],
             context_buffer.to_a << job
           )
         end
@@ -54,9 +53,9 @@
 
       def retry(job, old_job, at)
         @pool.execute do |client|
-           client.eval(
+           client.evalsha(
             @scripts['retry'],
-             [@retry_set_key, @reliability_queue_key, @reliability_set_key],
+             [RETRY_SET_KEY, RELIABILITY_QUEUE_KEY, RELIABILITY_SET_KEY],
             [at, job, old_job]
           )
         end
@@ -64,9 +63,9 @@
 
       def give_up(job, old_job)
         @pool.execute do |client|
-           client.eval(
+           client.evalsha(
             @scripts['give_up'],
-             [@given_up_set_key, @reliability_queue_key, @reliability_set_key],
+             [GIVEN_UP_SET_KEY, RELIABILITY_QUEUE_KEY, RELIABILITY_SET_KEY],
             [Time.now.to_f, job, old_job, Time.now.to_f - @failed_job_lifespan]
           )
         end
@@ -74,23 +73,23 @@
 
       def reschedule_all(earlier_than)
         @pool.execute do |client|
-           client.eval(
+           client.evalsha(
             @scripts['reschedule_all'],
-             [@reliability_set_key, @reliability_queue_key, @retry_set_key, @queue_key],
+             [RELIABILITY_SET_KEY, RELIABILITY_QUEUE_KEY, RETRY_SET_KEY, QUEUE_KEY],
             [earlier_than]
           )
         end
       end
 
-       def run_workflow(workflow_id, jobs_with_order, context)
+       def run_workflow(wfid, jobs_with_order, context_buffer)
         @pool.execute do |client|
           client.multi do
-             client.eval(
+             client.evalsha(
               @scripts['run_workflow'],
-               [@workflow_key % workflow_id, @queue_key, @eta_key % workflow_id],
+               [WORKFLOW_QUEUE_KEY % wfid, QUEUE_KEY, REMAINING_KEY % wfid],
               jobs_with_order
             )
-             client.hmset(@context_key % workflow_id, *context.to_a) unless context.empty?
+             client.hmset(CONTEXT_KEY % wfid, *context_buffer.to_a) unless context_buffer.empty?
           end
         end
       end
@@ -98,11 +97,14 @@
       private
 
       def register_scripts
-         @scripts ||= Dir["#{__dir__}/scripts/*.lua"].map do |file|
-           name = File.basename(file, '.lua')
-           script = File.read(file)
-           [name, script]
-         end.to_h
+         @scripts ||= @pool.execute do |client|
+           Dir["#{__dir__}/scripts/*.lua"].map do |file|
+             name = File.basename(file, '.lua')
+             script = File.read(file)
+             sha = client.script(:load, script)
+             [name, sha]
+           end.to_h
+         end
       end
     end
   end
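
The `eval` to `evalsha` switch above (#34) pairs with the reworked `register_scripts`: each Lua script is uploaded once via `SCRIPT LOAD`, and only its SHA1 digest is sent on every subsequent call. A standalone sketch of the pattern with redis-rb (the script body here is illustrative, not one of Pallets' actual scripts):

```ruby
require 'redis'

client = Redis.new
# SCRIPT LOAD caches the script server-side and returns its SHA1 digest
sha = client.script(:load, "return redis.call('LPUSH', KEYS[1], ARGV[1])")
# From here on, only the 40-character digest crosses the wire
client.evalsha(sha, ['my-list'], ['hello'])
```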
data/lib/pallets/cli.rb CHANGED
@@ -65,10 +65,6 @@ module Pallets
        Pallets.configuration.failed_job_lifespan = failed_job_lifespan
      end
 
-     opts.on('-n', '--namespace NAME', 'Namespace to use for backend') do |namespace|
-       Pallets.configuration.namespace = namespace
-     end
-
      opts.on('-p', '--pool-size NUM', Integer, 'Size of backend pool') do |pool_size|
        Pallets.configuration.pool_size = pool_size
      end
@@ -77,8 +73,12 @@
        Pallets.logger.level = Logger::ERROR
      end
 
-     opts.on('-r', '--require PATH', 'Path containing workflow definitions') do |path|
-       require(path)
+     opts.on('-r', '--require PATH', 'Path to file containing workflow definitions or directory containing Rails application') do |path|
+       if File.directory?(path)
+         require File.expand_path("#{path}/config/environment.rb")
+       else
+         require(path)
+       end
      end
 
      opts.on('-s', '--serializer NAME', 'Serializer to use') do |serializer|
data/lib/pallets/configuration.rb CHANGED
@@ -24,11 +24,8 @@ module Pallets
    # per task basis
    attr_accessor :max_failures
 
-   # Namespace used by the backend to store information
-   attr_accessor :namespace
-
    # Number of connections to the backend
-   attr_accessor :pool_size
+   attr_writer :pool_size
 
    # Serializer used for jobs
    attr_accessor :serializer
@@ -41,9 +38,11 @@
      @failed_job_lifespan = 7_776_000 # 3 months
      @job_timeout = 1_800 # 30 minutes
      @max_failures = 3
-     @namespace = 'pallets'
-     @pool_size = 5
      @serializer = :json
    end
+
+   def pool_size
+     @pool_size || @concurrency + 1
+   end
  end
end
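
In practice the new `pool_size` reader means most setups no longer need to size the backend pool by hand: one connection per worker thread plus one spare is inferred, and an explicit assignment still wins. A quick illustration, assuming the usual `concurrency` accessor:

```ruby
Pallets.configure do |c|
  c.concurrency = 4
end
Pallets.configuration.pool_size # => 5, inferred as concurrency + 1

Pallets.configure do |c|
  c.concurrency = 4
  c.pool_size = 10 # explicit value overrides the inferred one
end
Pallets.configuration.pool_size # => 10
```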
data/lib/pallets/context.rb CHANGED
@@ -1,11 +1,17 @@
  module Pallets
    # Hash-like class that additionally holds a buffer for all write operations
+   # that occur after initialization
    class Context < Hash
      def []=(key, value)
        buffer[key] = value
        super
      end
 
+     def merge!(other_hash)
+       buffer.merge!(other_hash)
+       super
+     end
+
      def buffer
        @buffer ||= {}
      end
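
The `merge!` override matters because `Workflow#initialize` (further down in this diff) now builds its context with `Context.new.merge!(context_hash)`, so the caller-supplied hash lands in the buffer and gets persisted, while values loaded back from Redis via `Context[...]` bypass the buffer and are not written twice. A quick illustration of the buffering rules:

```ruby
ctx = Pallets::Context['loaded' => 'from redis'] # Hash[] does not buffer
ctx['token'] = 'abc'                             # buffered via []=
ctx.merge!('attempts' => 1)                      # buffered via the new merge!
ctx.buffer # => { 'token' => 'abc', 'attempts' => 1 }
```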
data/lib/pallets/dsl/workflow.rb CHANGED
@@ -1,29 +1,23 @@
  module Pallets
    module DSL
      module Workflow
-       def task(*args, **options, &block)
-         name, depends_on = if args.any?
-           [args.first, options[:depends_on]]
+       def task(arg, depends_on: nil, max_failures: nil, &block)
+         klass, dependencies = case arg
+         when Hash
+           # The `task Foo => Bar` notation
+           arg.first
          else
-           options.first
+           # The `task Foo, depends_on: Bar` notation
+           [arg, depends_on]
          end
 
-         unless name
-           raise WorkflowError, "Task has no name. Provide a name using " \
-             "`task :name, *args` or `task name: :arg` syntax"
-         end
-
-         # Handle nils, symbols or arrays consistently
-         name = name.to_sym
-         dependencies = Array(depends_on).compact.map(&:to_sym)
-         graph.add(name, dependencies)
-
-         class_name = options[:class_name] || Pallets::Util.camelize(name)
-         max_failures = options[:max_failures] || Pallets.configuration.max_failures
+         task_class = klass.to_s
+         dependencies = Array(dependencies).compact.uniq.map(&:to_s)
+         graph.add(task_class, dependencies)
 
-         task_config[name] = {
-           'class_name' => class_name,
-           'max_failures' => max_failures
+         task_config[task_class] = {
+           'task_class' => task_class,
+           'max_failures' => max_failures || Pallets.configuration.max_failures
          }
 
          nil
data/lib/pallets/graph.rb CHANGED
@@ -43,6 +43,8 @@ module Pallets
 
    def tsort_each_child(node, &block)
      @nodes.fetch(node).each(&block)
+   rescue KeyError
+     raise WorkflowError, "Task #{node} is marked as a dependency but not defined"
    end
  end
end
data/lib/pallets/serializers/base.rb CHANGED
@@ -8,6 +8,18 @@ module Pallets
      def load(data)
        raise NotImplementedError
      end
+
+     alias_method :dump_job, :dump
+     alias_method :load_job, :load
+
+     # Context hashes only need their values (de)serialized
+     def dump_context(data)
+       data.map { |k, v| [k.to_s, dump(v)] }.to_h
+     end
+
+     def load_context(data)
+       data.map { |k, v| [k, load(v)] }.to_h
+     end
    end
  end
end
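
Contexts live in Redis hashes (`HGETALL`/`HMSET`), so keys stay plain strings and only the values pass through the serializer, which is what lets contexts carry basic Ruby types (#24). A sketch of the round trip using the JSON serializer defined below:

```ruby
serializer = Pallets::Serializers::Json.new

dumped = serializer.dump_context('count' => 1, 'tags' => %w[a b])
# => { "count" => "1", "tags" => "[\"a\",\"b\"]" }

serializer.load_context(dumped)
# => { "count" => 1, "tags" => ["a", "b"] }
```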
data/lib/pallets/serializers/json.rb CHANGED
@@ -2,13 +2,14 @@ require 'json'
 
  module Pallets
    module Serializers
-     class Json
+     class Json < Base
        def dump(data)
-         JSON.generate(data)
+         # TODO: Remove option after dropping support for Ruby 2.3
+         JSON.generate(data, quirks_mode: true)
        end
 
        def load(data)
-         JSON.parse(data)
+         JSON.parse(data, quirks_mode: true)
        end
      end
    end
data/lib/pallets/serializers/msgpack.rb CHANGED
@@ -2,7 +2,7 @@ require 'msgpack'
 
  module Pallets
    module Serializers
-     class Msgpack
+     class Msgpack < Base
        def dump(data)
          MessagePack.pack(data)
        end
data/lib/pallets/util.rb CHANGED
@@ -2,8 +2,10 @@ module Pallets
  module Util
    extend self
 
-   def camelize(str)
-     str.to_s.gsub(/(?:^|_)([a-z])/) { $1.upcase }
+   def generate_id(str)
+     initials = str.gsub(/[^A-Z]+([A-Z])/, '\1')[0,3]
+     random = SecureRandom.hex(5)
+     "#{initials}#{random}"
    end
 
    def constantize(str)
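
`camelize` is gone because task names are class names already; `generate_id` takes its place, squeezing up to three initials out of a CamelCase name and appending five random bytes as hex. `Workflow#id` and the new job JIDs (shown later in this diff) prefix and upcase the result. Tracing it by hand, with one possible random draw:

```ruby
require 'pallets'

# 'DoGroceries'.gsub(/[^A-Z]+([A-Z])/, '\1') collapses each lowercase run
# that precedes a capital, leaving "DGroceries"; [0,3] keeps "DGr"
Pallets::Util.generate_id('DoGroceries') # => e.g. "DGr1f2e3d4c5b"
# Workflow#id then wraps it as "P#{...}".upcase => "PDGR1F2E3D4C5B"
```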
data/lib/pallets/version.rb CHANGED
@@ -1,3 +1,3 @@
  module Pallets
-   VERSION = "0.3.0"
+   VERSION = "0.4.0"
  end
data/lib/pallets/worker.rb CHANGED
@@ -71,16 +71,22 @@ module Pallets
 
      Pallets.logger.info "Started", extract_metadata(job_hash)
 
-     context = Context[backend.get_context(job_hash['workflow_id'])]
+     context = Context[
+       serializer.load_context(backend.get_context(job_hash['wfid']))
+     ]
 
-     task_class = Pallets::Util.constantize(job_hash["class_name"])
+     task_class = Pallets::Util.constantize(job_hash["task_class"])
      task = task_class.new(context)
      begin
-       task.run
+       task_result = task.run
      rescue => ex
        handle_job_error(ex, job, job_hash)
      else
-       handle_job_success(context, job, job_hash)
+       if task_result == false
+         handle_job_return_false(job, job_hash)
+       else
+         handle_job_success(context, job, job_hash)
+       end
      end
    end
 
@@ -90,9 +96,10 @@
      failures = job_hash.fetch('failures', 0) + 1
      new_job = serializer.dump(job_hash.merge(
        'failures' => failures,
-       'failed_at' => Time.now.to_f,
+       'given_up_at' => Time.now.to_f,
        'error_class' => ex.class.name,
-       'error_message' => ex.message
+       'error_message' => ex.message,
+       'reason' => 'error'
      ))
      if failures < job_hash['max_failures']
        retry_at = Time.now.to_f + backoff_in_seconds(failures)
@@ -103,22 +110,32 @@
      end
    end
 
+   def handle_job_return_false(job, job_hash)
+     new_job = serializer.dump(job_hash.merge(
+       'given_up_at' => Time.now.to_f,
+       'reason' => 'returned_false'
+     ))
+     backend.give_up(new_job, job)
+     Pallets.logger.info "Gave up after returning false", extract_metadata(job_hash)
+   end
+
    def handle_job_success(context, job, job_hash)
-     backend.save(job_hash['workflow_id'], job, context.buffer)
+     backend.save(job_hash['wfid'], job, serializer.dump_context(context.buffer))
      Pallets.logger.info "Done", extract_metadata(job_hash)
    end
 
    def extract_metadata(job_hash)
      {
        wid: id,
-       wfid: job_hash['workflow_id'],
-       wf: job_hash['workflow_class_name'],
-       tsk: job_hash['class_name']
+       wfid: job_hash['wfid'],
+       jid: job_hash['jid'],
+       wf: job_hash['workflow_class'],
+       tsk: job_hash['task_class']
      }
    end
 
    def backoff_in_seconds(count)
-     count ** 4 + 6
+     count ** 4 + rand(6..10)
    end
 
    def backend
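
The backoff change (#32) keeps the quartic curve from 0.3.0 but swaps the constant `+ 6` for `rand(6..10)`, adding jitter so simultaneous failures don't all come back at the same instant. With the default `max_failures` of 3, the schedule works out roughly as sketched here:

```ruby
def backoff_in_seconds(count)
  count ** 4 + rand(6..10)
end

backoff_in_seconds(1) # => somewhere in 7..11 seconds
backoff_in_seconds(2) # => somewhere in 22..26 seconds
# a third failure reaches max_failures, so the job is given up instead
```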
data/lib/pallets/workflow.rb CHANGED
@@ -4,42 +4,40 @@ module Pallets
 
    attr_reader :context
 
-   def initialize(context = {})
+   def initialize(context_hash = {})
      @id = nil
-     @context = context
+     # Passed in context hash needs to be buffered
+     @context = Context.new.merge!(context_hash)
    end
 
    def run
      raise WorkflowError, "#{self.class.name} has no tasks. Workflows "\
        "must contain at least one task" if self.class.graph.empty?
 
-     backend.run_workflow(id, jobs_with_order, context)
+     backend.run_workflow(id, jobs_with_order, serializer.dump_context(context.buffer))
      id
    end
 
    def id
-     @id ||= begin
-       initials = self.class.name.gsub(/[^A-Z]+([A-Z])/, '\1')[0,3]
-       random = SecureRandom.hex(5)
-       "P#{initials}#{random}".upcase
-     end
+     @id ||= "P#{Pallets::Util.generate_id(self.class.name)}".upcase
    end
 
    private
 
    def jobs_with_order
-     self.class.graph.sorted_with_order.map do |task_name, order|
-       job = serializer.dump(job_hash.merge(self.class.task_config[task_name]))
+     self.class.graph.sorted_with_order.map do |task_class, order|
+       job = serializer.dump(construct_job(task_class))
        [order, job]
      end
    end
 
-   def job_hash
-     {
-       'workflow_id' => id,
-       'workflow_class_name' => self.class.name,
-       'created_at' => Time.now.to_f
-     }
+   def construct_job(task_class)
+     {}.tap do |job|
+       job['wfid'] = id
+       job['jid'] = "J#{Pallets::Util.generate_id(task_class)}".upcase
+       job['workflow_class'] = self.class.name
+       job['created_at'] = Time.now.to_f
+     end.merge(self.class.task_config[task_class])
    end
 
    def backend
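
Taken together with the worker changes, `construct_job` shows the full shape of a 0.4.0 job before serialization: renamed keys (#28), the new `jid` (#30), plus the per-task config merged in. For the README's `Foo` task it would look roughly like this (ids and timestamp invented for illustration):

```ruby
{
  'wfid'           => 'PMWO1A2B3C4D5E', # "P" + initials of MyWorkflow + random hex
  'jid'            => 'JFOO9F8E7D6C5B', # "J" + initials of Foo + random hex
  'workflow_class' => 'MyWorkflow',
  'created_at'     => 1554652800.0,
  'task_class'     => 'Foo',
  'max_failures'   => 3                 # default from Pallets.configuration
}
```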
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: pallets
  version: !ruby/object:Gem::Version
-   version: 0.3.0
+   version: 0.4.0
  platform: ruby
  authors:
  - Andrei Horak
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2019-02-08 00:00:00.000000000 Z
+ date: 2019-04-07 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: redis
@@ -106,7 +106,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
      version: '0'
  requirements: []
  rubyforge_project:
- rubygems_version: 2.7.6
+ rubygems_version: 2.5.2.3
  signing_key:
  specification_version: 4
  summary: Toy workflow engine, written in Ruby