sidekiq-hierarchy 1.0.0 → 1.1.0
- checksums.yaml +4 -4
- data/CHANGELOG.md +7 -0
- data/README.md +25 -0
- data/lib/sidekiq/hierarchy.rb +16 -8
- data/lib/sidekiq/hierarchy/client/middleware.rb +1 -1
- data/lib/sidekiq/hierarchy/job.rb +9 -21
- data/lib/sidekiq/hierarchy/redis_connection.rb +43 -0
- data/lib/sidekiq/hierarchy/version.rb +1 -1
- data/lib/sidekiq/hierarchy/web.rb +13 -7
- data/lib/sidekiq/hierarchy/workflow_set.rb +9 -7
- data/sidekiq-hierarchy.gemspec +1 -0
- data/web/views/_job_progress_bar.erb +5 -5
- data/web/views/_workflow_progress_bar.erb +4 -4
- data/web/views/_workflow_tree.erb +1 -1
- data/web/views/_workflow_tree_node.erb +1 -1
- data/web/views/job.erb +4 -4
- data/web/views/status.erb +4 -4
- data/web/views/workflow.erb +4 -4
- data/web/views/workflow_set.erb +1 -1
- metadata +17 -2
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 7bbc58dd27887b22cd4028b50d0baa627155c604
+  data.tar.gz: a19dd469928472862f286c9408471b898f30f839
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: cadcdc2e2ab393ecae67ea16f8cc7a609df3e00f30e43b1d144f8dd1706f141646156708afd17ddfb37b54b75235987c4cdf45f55226e29ec36c1cdb18d248a0
+  data.tar.gz: 37e3e7654896e53377e4b462f76282dafabaa8967fb22827c4d1c0df08fc9bd8b023cf555c40933c238ea56a905bdea7cfc1c29480382f035b7b8636edb39839
data/CHANGELOG.md
CHANGED
@@ -2,6 +2,13 @@
 All notable changes to this project will be documented in this file.
 This project adheres to [Semantic Versioning](http://semver.org/).
 
+## [1.1.0] - 2015-12-04
+### Changed
+- Use Sinatra template helpers to get template caching for views
+
+### Added
+- Support using a separate Redis connection/pool for workflow storage
+
 ## [1.0.0] - 2015-11-19
 ### Changed
 - Nothing: bumping for first production release
data/README.md
CHANGED
@@ -32,6 +32,7 @@ Disclaimer: Sidekiq-hierarchy supports Sidekiq 3.x, and thus MRI 2.0+ and JRuby;
 - [Advanced Options](#advanced-options)
   - [Additional Job Info](#additional-job-info)
   - [CompleteSet and FailedSet](#completeset-and-failedset)
+  - [Separate Redis Storage](#separate-redis-storage)
 - [More Examples](#more-examples)
   - [Fail-fast workflow cancellation](#fail-fast-workflow-cancellation)
 - [Workflow Metrics Dashboard](#workflow-metrics-dashboard)
@@ -303,6 +304,30 @@ Two pruning strategies are employed, running on every workflow insertion: one wh
 - `timeout`: `:dead_timeout_in_seconds` setting, also used by Sidekiq to prune dead jobs (default 6 months)
 - `max_workflows`: the first of `:dead_max_workflows` and `:dead_max_jobs`, whichever is set; the latter is used internally by Sidekiq to prune dead jobs (`:dead_max_jobs` default 10,000)
 
+### Separate Redis Storage
+
+Depending on the size of your workflows, the default of storing all information in Sidekiq's Redis instance may not be right for you. Sidekiq-hierarchy makes an effort to use as little overhead as possible, about 200 bytes per job on average. Depending on factors like the length of your worker class names, the additional job info you choose to store, and the number of children each job has, you may see more or less space usage; test on your own data to be sure.
+
+Because this data is usually less critical and more disposable than your Sidekiq queues or other Redis information, Sidekiq-hierarchy offers the option of using a separate Redis instance/cluster to store its metadata. This has three big advantages over the default of `Sidekiq.redis`:
+
+- it prevents memory pressure on your primary Redis instance,
+- it permits use of a less robust, smaller, and/or cheaper Redis server for hierarchy data,
+- and most importantly, it allows sharing of the Redis instance between services, letting you track workflows across services (provided that network integration is set up).
+
+Sidekiq-hierarchy accepts either a raw Redis connection or a ConnectionPool, though a ConnectionPool with appropriate size and timeout is highly recommended (see [mperham/connection_pool](https://github.com/mperham/connection_pool) for details). In either case, configuration can be performed at initialization:
+
+```ruby
+# with a bare Redis connection
+alt_redis = Redis.new(db: 1)
+Sidekiq::Hierarchy.redis = alt_redis
+
+# with a Redis connection pool
+conn_pool = ConnectionPool.new(size: 10, timeout: 2) { Redis.new(host: 'data-redis-master') }
+Sidekiq::Hierarchy.redis = conn_pool
+```
+
+Using the same Redis server with multiple services that talk to one another via async jobs is a quick and dirty way to get a map of your SOA, as long as you are aware of its limitations (connections not initiated through Sidekiq are not tracked).
+
 ## More Examples
 
 These are just a few ways in which Sidekiq-hierarchy could help you:
data/lib/sidekiq/hierarchy.rb
CHANGED
@@ -1,6 +1,8 @@
 require 'sidekiq'
 require 'sidekiq/hierarchy/version'
 
+require 'sidekiq/hierarchy/redis_connection'
+
 require 'sidekiq/hierarchy/job'
 require 'sidekiq/hierarchy/workflow'
 require 'sidekiq/hierarchy/workflow_set'
@@ -16,6 +18,12 @@ module Sidekiq
   module Hierarchy
     class << self
+      ### Global redis store -- overrides default Sidekiq redis
+
+      def redis=(conn)
+        RedisConnection.redis = conn
+      end
+
       ### Per-thread context tracking
 
       # Checks if tracking is enabled based on whether the workflow is known
@@ -47,18 +55,18 @@ module Sidekiq
 
       ### Workflow execution updates
 
-      def record_job_enqueued(job
+      def record_job_enqueued(job)
         return unless !!job['workflow']
         if current_jid.nil?
           # this is a root-level job, i.e., start of a workflow
-          queued_job =
+          queued_job = Job.create(job['jid'], job)
           queued_job.enqueue! # initial status: enqueued
         elsif current_jid == job['jid']
           # this is a job requeuing itself, ignore it
         else
           # this is an intermediate job, having both parent and children
-          current_job =
-          queued_job =
+          current_job = Job.find(current_jid)
+          queued_job = Job.create(job['jid'], job)
           current_job.add_child(queued_job)
           queued_job.enqueue! # initial status: enqueued
         end
@@ -66,22 +74,22 @@ module Sidekiq
 
       def record_job_running
         return unless enabled? && current_jid
-
+        Job.find(current_jid).run!
       end
 
       def record_job_complete
         return unless enabled? && current_jid
-
+        Job.find(current_jid).complete!
       end
 
       def record_job_requeued
         return unless enabled? && current_jid
-
+        Job.find(current_jid).requeue!
      end
 
       def record_job_failed
         return unless enabled? && current_jid
-
+        Job.find(current_jid).fail!
       end
 
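The branching in `record_job_enqueued` above can be condensed into a small, standalone sketch (`classify` is a hypothetical helper for illustration, not part of the gem): a push with no current jid starts a workflow at the root, a job re-pushing its own jid is ignored, and anything else becomes a child of the currently running job.

```ruby
# Sketch of the three cases record_job_enqueued distinguishes.
def classify(current_jid, pushed_jid)
  if current_jid.nil?
    :root     # root-level job, i.e., start of a workflow
  elsif current_jid == pushed_jid
    :requeue  # job requeuing itself; ignored
  else
    :child    # intermediate job, having both parent and children
  end
end

classify(nil, 'aaa')    # => :root
classify('aaa', 'aaa')  # => :requeue
classify('aaa', 'bbb')  # => :child
```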
data/lib/sidekiq/hierarchy/client/middleware.rb
CHANGED
@@ -15,7 +15,7 @@ module Sidekiq
       def call(worker_class, msg, queue, redis_pool=nil)
         msg['workflow'] = Sidekiq::Hierarchy.current_workflow.jid if Sidekiq::Hierarchy.current_workflow
         # if block returns nil/false, job was cancelled before queueing by middleware
-        yield.tap { |job| Sidekiq::Hierarchy.record_job_enqueued(job
+        yield.tap { |job| Sidekiq::Hierarchy.record_job_enqueued(job) if job }
       end
     end
   end
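The `yield.tap { ... if job }` line above hinges on a subtle contract: downstream middleware returns the job hash, or nil/false if the push was cancelled. A minimal standalone sketch (`record` and `push` are hypothetical stand-ins, not the gem's API) shows how `tap` passes the result through unchanged while recording only jobs that were actually enqueued:

```ruby
recorded = []
record = ->(job) { recorded << job['jid'] }

# mirror of: yield.tap { |job| record_job_enqueued(job) if job }
push = lambda do |&blk|
  blk.call.tap { |job| record.call(job) if job }
end

push.call { { 'jid' => 'abc123' } }  # enqueued, so recorded
push.call { nil }                    # cancelled upstream; nothing recorded
recorded                             # => ["abc123"]
```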
data/lib/sidekiq/hierarchy/job.rb
CHANGED
@@ -1,6 +1,8 @@
 module Sidekiq
   module Hierarchy
     class Job
+      include RedisConnection
+
       # Job hash keys
       INFO_FIELD = 'i'.freeze
       PARENT_FIELD = 'p'.freeze
@@ -25,16 +27,15 @@ module Sidekiq
 
       attr_reader :jid
 
-      def initialize(jid
+      def initialize(jid)
         @jid = jid
-        @redis_pool = redis_pool
       end
 
       class << self
         alias_method :find, :new
 
-        def create(jid, job_hash
-          new(jid
+        def create(jid, job_hash)
+          new(jid).tap do |job|
             job[INFO_FIELD] = Sidekiq.dump_json(filtered_job_hash(job_hash))
           end
         end
@@ -54,9 +55,7 @@ module Sidekiq
       end
 
       def exists?
-        redis
-          conn.exists(redis_job_hkey)
-        end
+        redis { |conn| conn.exists(redis_job_hkey) }
       end
 
       def ==(other_job)
@@ -66,9 +65,7 @@ module Sidekiq
 
       # Magic getter backed by redis hash
       def [](key)
-        redis
-          conn.hget(redis_job_hkey, key)
-        end
+        redis { |conn| conn.hget(redis_job_hkey, key) }
       end
 
       # Magic setter backed by redis hash
@@ -91,13 +88,13 @@ module Sidekiq
 
       def parent
         if parent_jid = self[PARENT_FIELD]
-          self.class.find(parent_jid
+          self.class.find(parent_jid)
         end
       end
 
       def children
         redis do |conn|
-          conn.lrange(redis_children_lkey, 0, -1).map { |jid| self.class.find(jid
+          conn.lrange(redis_children_lkey, 0, -1).map { |jid| self.class.find(jid) }
         end
       end
 
@@ -276,15 +273,6 @@ module Sidekiq
       def redis_children_lkey
         "#{redis_job_hkey}:children"
       end
-
-      def redis(&blk)
-        if @redis_pool
-          @redis_pool.with(&blk)
-        else
-          Sidekiq.redis(&blk)
-        end
-      end
-      private :redis
     end
   end
 end
data/lib/sidekiq/hierarchy/redis_connection.rb
ADDED
@@ -0,0 +1,43 @@
+module Sidekiq
+  module Hierarchy
+    module RedisConnection
+      # A translation class turning a Redis object into a ConnectionPool-alike
+      class ConnectionProxy
+        attr_reader :redis
+
+        def initialize(redis_conn)
+          raise 'connection must be an instance of Redis' unless redis_conn.is_a?(::Redis)
+          @redis = redis_conn
+        end
+
+        def with(&blk)
+          blk.call(redis)
+        end
+      end
+
+      class << self
+        attr_reader :redis
+
+        # Set global redis
+        def redis=(conn)
+          @redis = if conn.nil?
+                     nil
+                   elsif conn.is_a?(::ConnectionPool)
+                     conn
+                   else
+                     ConnectionProxy.new(conn)
+                   end
+        end
+      end
+
+      # Use global redis if set, with a fallback to Sidekiq's redis pool
+      def redis(&blk)
+        if RedisConnection.redis
+          RedisConnection.redis.with(&blk)
+        else
+          Sidekiq.redis(&blk)
+        end
+      end
+    end
+  end
+end
data/lib/sidekiq/hierarchy/web.rb
CHANGED
@@ -7,8 +7,13 @@ module Sidekiq
   module Hierarchy
     module Web
       module Helpers
-
-
+        # Override find_template logic to process arrays of view directories
+        # warning: this may be incompatible with other overrides of find_template,
+        # though that really shouldn't happen if they match the method contract
+        def find_template(views, name, engine, &block)
+          Array(views).each do |view_dir|
+            super(view_dir, name, engine, &block)
+          end
         end
 
         def job_url(job=nil)
@@ -54,10 +59,11 @@ module Sidekiq
       PER_PAGE = 20
 
       def self.registered(app)
+        app.set :views, [*app.views, VIEW_PATH]
         app.helpers Helpers
 
         app.not_found do
-          erb
+          erb :not_found
         end
 
         app.get '/hierarchy/?' do
@@ -69,7 +75,7 @@ module Sidekiq
         @complete = @complete_set.each.take(PER_PAGE)
         @failed = @failed_set.each.take(PER_PAGE)
 
-        erb
+        erb :status
       end
 
       app.delete '/hierarchy/?' do
@@ -81,7 +87,7 @@ module Sidekiq
         @status = status.to_sym
         if @workflow_set = WorkflowSet.for_status(@status)
           @workflows = @workflow_set.each.take(PER_PAGE)
-          erb
+          erb :workflow_set
         else
           halt 404
         end
@@ -108,7 +114,7 @@ module Sidekiq
       app.get %r{\A/hierarchy/workflows/(\h{24})\z} do |workflow_jid|
         @workflow = Workflow.find_by_jid(workflow_jid)
         if @workflow.exists?
-          erb
+          erb :workflow
         else
           halt 404
         end
@@ -134,7 +140,7 @@ module Sidekiq
         @job = Job.find(jid)
         @workflow = @job.workflow
         if @job.exists? && @workflow.exists?
-          erb
+          erb :job
         else
           halt 404
         end
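The `find_template` override above lets the `views` setting be an array: the app's own view directory is searched first, then the gem's `VIEW_PATH` as a fallback. A rough standalone sketch of that first-match-wins search order (`lookup_template` is a hypothetical helper for illustration, not Sinatra's actual API, which yields candidate paths to a block):

```ruby
require 'tmpdir'

# Search each directory in order for the named template; first match wins.
def lookup_template(view_dirs, name)
  Array(view_dirs).each do |dir|
    path = File.join(dir, "#{name}.erb")
    return path if File.exist?(path)
  end
  nil
end

Dir.mktmpdir do |app_views|
  Dir.mktmpdir do |gem_views|
    # the app's views dir is listed first, so it shadows the gem's templates
    File.write(File.join(gem_views, 'status.erb'), '<h3>Sidekiq Hierarchy</h3>')
    lookup_template([app_views, gem_views], 'status')  # falls through to gem_views
  end
end
```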
data/lib/sidekiq/hierarchy/workflow_set.rb
CHANGED
@@ -5,6 +5,8 @@ module Sidekiq
 
     # A sorted set of Workflows that permits enumeration
     class WorkflowSet
+      include RedisConnection
+
       PAGE_SIZE = 100
 
       def self.for_status(status)
@@ -28,15 +30,15 @@ module Sidekiq
       end
 
       def size
-
+        redis { |conn| conn.zcard(redis_zkey) }
       end
 
       def add(workflow)
-
+        redis { |conn| conn.zadd(redis_zkey, Time.now.to_f, workflow.jid) }
       end
 
       def contains?(workflow)
-        !!
+        !!redis { |conn| conn.zscore(redis_zkey, workflow.jid) }
       end
 
       # Remove a workflow from the set if it is present. This operation can
@@ -45,7 +47,7 @@ module Sidekiq
       # memory leaks.
       def remove(workflow)
         raise 'Workflow still exists' if workflow.exists?
-
+        redis { |conn| conn.zrem(redis_zkey, workflow.jid) }
       end
 
       # Move a workflow to this set from its current one
@@ -53,7 +55,7 @@ module Sidekiq
       # so there is a potential race condition in which a workflow could end up in
       # multiple sets. the effect of this is minimal, so we'll fix it later.
       def move(workflow, from_set=nil)
-
+        redis do |conn|
           conn.multi do
             conn.zrem(from_set.redis_zkey, workflow.jid) if from_set
             conn.zadd(redis_zkey, Time.now.to_f, workflow.jid)
@@ -67,7 +69,7 @@ module Sidekiq
         elements = []
         last_max_score = Time.now.to_f
         loop do
-          elements =
+          elements = redis do |conn|
             conn.zrevrangebyscore(redis_zkey, last_max_score, '-inf', limit: [0, PAGE_SIZE], with_scores: true)
                 .drop_while { |elt| elements.include?(elt) }
           end
@@ -100,7 +102,7 @@ module Sidekiq
       end
 
       def prune
-
+        redis do |conn|
           conn.multi do
             conn.zrangebyscore(redis_zkey, '-inf', Time.now.to_f - self.class.timeout) # old workflows
             conn.zrevrange(redis_zkey, self.class.max_workflows, -1) # excess workflows
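The enumeration loop in the `@@ -67,7 +69,7 @@` hunk pages through the sorted set by score: each `ZREVRANGEBYSCORE` page starts at the previous page's minimum score, and `drop_while` discards entries already seen at the boundary. A pure-Ruby sketch of that pagination pattern, simulating the sorted set with a Hash (an illustration of the technique, not the gem's code):

```ruby
PAGE_SIZE = 2

# Stand-in for conn.zrevrangebyscore(key, max, '-inf', limit: [0, n],
# with_scores: true): members at or below max_score, highest score first.
def zrevrangebyscore(zset, max_score, limit)
  zset.select { |_jid, score| score <= max_score }
      .sort_by { |_jid, score| -score }
      .first(limit)
end

zset = { 'jid1' => 4.0, 'jid2' => 3.0, 'jid3' => 2.0, 'jid4' => 1.0 }

seen = []
elements = []
last_max_score = Float::INFINITY
loop do
  # each page begins at the last page's minimum score; boundary entries
  # repeat, so leading already-seen pairs are dropped
  elements = zrevrangebyscore(zset, last_max_score, PAGE_SIZE)
               .drop_while { |elt| elements.include?(elt) }
  break if elements.empty?
  seen.concat(elements.map(&:first))
  last_max_score = elements.last.last # min score of this page
end

seen # => ["jid1", "jid2", "jid3", "jid4"]
```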
data/sidekiq-hierarchy.gemspec
CHANGED
@@ -20,6 +20,7 @@ Gem::Specification.new do |spec|
   spec.require_paths = ["lib"]
 
   spec.add_dependency 'sidekiq', '~> 3.3'
+  spec.add_dependency 'connection_pool', '~> 2.0'
 
   spec.add_development_dependency 'bundler', '~> 1.10'
   spec.add_development_dependency 'rake', '~> 10.0'
data/web/views/_job_progress_bar.erb
CHANGED
@@ -3,26 +3,26 @@
 <% run_at = job.run_at %>
 
 <% if status == :enqueued %>
-  <%= erb
+  <%= erb :_progress_bar, locals: {bars: [[:enqueued, 1.0]]} %>
 
 <% elsif status == :requeued %>
-  <%= erb
+  <%= erb :_progress_bar, locals: {bars: [[:requeued, 1.0]]} %>
 
 <% elsif status == :running %>
   <% runtime = Time.now - enqueued_at %>
   <% enqueued_pct = (run_at - enqueued_at) / runtime %>
   <% run_pct = 1.0 - enqueued_pct %>
-  <%= erb
+  <%= erb :_progress_bar, locals: {bars: [[:enqueued, enqueued_pct], [:running, run_pct]], active: true} %>
 
 <% elsif status == :complete %>
   <% runtime = job.complete_at - enqueued_at %>
   <% enqueued_pct = (run_at - enqueued_at) / runtime %>
   <% run_pct = 1.0 - enqueued_pct %>
-  <%= erb
+  <%= erb :_progress_bar, locals: {bars: [[:enqueued, enqueued_pct], [:complete, run_pct]]} %>
 
 <% elsif status == :failed %>
   <% runtime = job.failed_at - enqueued_at %>
   <% enqueued_pct = (run_at - enqueued_at) / runtime %>
   <% failed_pct = 1.0 - enqueued_pct %>
-  <%= erb
+  <%= erb :_progress_bar, locals: {bars: [[:enqueued, enqueued_pct], [:failed, failed_pct]]} %>
 <% end %>
data/web/views/_workflow_progress_bar.erb
CHANGED
@@ -2,23 +2,23 @@
 <% run_at = workflow.run_at %>
 
 <% if workflow.root.enqueued? %>
-  <%= erb
+  <%= erb :_progress_bar, locals: {bars: [[:enqueued, 1.0]]} %>
 
 <% elsif workflow.running? %>
   <% runtime = Time.now - enqueued_at %>
   <% enqueued_pct = (run_at - enqueued_at) / runtime %>
   <% run_pct = 1.0 - enqueued_pct %>
-  <%= erb
+  <%= erb :_progress_bar, locals: {bars: [[:enqueued, enqueued_pct], [:running, run_pct]], active: true} %>
 
 <% elsif complete_at = workflow.complete_at %>
   <% runtime = complete_at - enqueued_at %>
   <% enqueued_pct = (run_at - enqueued_at) / runtime %>
   <% run_pct = 1.0 - enqueued_pct %>
-  <%= erb
+  <%= erb :_progress_bar, locals: {bars: [[:enqueued, enqueued_pct], [:complete, run_pct]]} %>
 
 <% elsif failed_at = workflow.failed_at %>
   <% runtime = failed_at - enqueued_at %>
   <% enqueued_pct = (run_at - enqueued_at) / runtime %>
   <% failed_pct = 1.0 - enqueued_pct %>
-  <%= erb
+  <%= erb :_progress_bar, locals: {bars: [[:enqueued, enqueued_pct], [:failed, failed_pct]]} %>
 <% end %>
data/web/views/job.erb
CHANGED
@@ -3,10 +3,10 @@
 </a>
 <h4>Job <%= @job.jid %></h4>
 
-<%= erb
+<%= erb :_job_progress_bar, locals: {job: @job} %>
 
-<%= erb
-<%= erb
+<%= erb :_job_table, locals: {jobs: [@job]} %>
+<%= erb :_job_timings, locals: {job: @job} %>
 
 <h4>Job Tree</h4>
-<%= erb
+<%= erb :_workflow_tree, locals: {root: @job} %>
data/web/views/status.erb
CHANGED
@@ -1,7 +1,7 @@
 <h3>Sidekiq Hierarchy</h3>
 
-<%= erb
-<%= erb
+<%= erb :_summary_bar %>
+<%= erb :_search_bar %>
 
 <div class="panel panel-info">
   <div class="panel-heading">
@@ -77,7 +77,7 @@
   </table>
 </div>
 
-<%= erb
+<%= erb :_workflow_set_clear, locals: {status: :complete} %>
 </div>
 </div>
 
@@ -118,6 +118,6 @@
   </table>
 </div>
 
-<%= erb
+<%= erb :_workflow_set_clear, locals: {status: :failed} %>
 </div>
 </div>
data/web/views/workflow.erb
CHANGED
@@ -1,6 +1,6 @@
 <h3>Workflow <%= @workflow.jid %></h3>
 
-<%= erb
+<%= erb :_workflow_progress_bar, locals: {workflow: @workflow} %>
 
 <div class="table_container">
   <table class="table table-condensed table-white">
@@ -26,13 +26,13 @@
   </table>
 </div>
 
-<%= erb
+<%= erb :_workflow_timings, locals: {workflow: @workflow} %>
 
 <h4>Job Tree</h4>
-<%= erb
+<%= erb :_workflow_tree, locals: {root: @workflow.root} %>
 
 <h4>Jobs</h4>
-<%= erb
+<%= erb :_job_table, locals: {jobs: @workflow.jobs} %>
 
 <% unless @workflow.running? %>
 <form action="<%= workflow_url(@workflow) %>" method="post">
data/web/views/workflow_set.erb
CHANGED
@@ -1,5 +1,5 @@
 <h3><%= @status.to_s.capitalize %> Workflows</h3>
 
-<%= erb
+<%= erb :_workflow_table, locals: {workflows: @workflows} %>
 
 Displaying <%= @workflows.size %> newest of <%= @workflow_set.size %>
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: sidekiq-hierarchy
 version: !ruby/object:Gem::Version
-  version: 1.0.0
+  version: 1.1.0
 platform: ruby
 authors:
 - Anuj Das
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2015-
+date: 2015-12-04 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement
@@ -24,6 +24,20 @@ dependencies:
     - - ~>
     - !ruby/object:Gem::Version
       version: '3.3'
+- !ruby/object:Gem::Dependency
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ~>
+    - !ruby/object:Gem::Version
+      version: '2.0'
+  name: connection_pool
+  prerelease: false
+  type: :runtime
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ~>
+    - !ruby/object:Gem::Version
+      version: '2.0'
 - !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement
     requirements:
@@ -173,6 +187,7 @@ files:
 - lib/sidekiq/hierarchy/observers/job_update.rb
 - lib/sidekiq/hierarchy/observers/workflow_update.rb
 - lib/sidekiq/hierarchy/rack/middleware.rb
+- lib/sidekiq/hierarchy/redis_connection.rb
 - lib/sidekiq/hierarchy/server/middleware.rb
 - lib/sidekiq/hierarchy/version.rb
 - lib/sidekiq/hierarchy/web.rb