redact 0.1

Files changed (7)
  1. checksums.yaml +7 -0
  2. data/COPYING +10 -0
  3. data/README +94 -0
  4. data/bin/redact-monitor +79 -0
  5. data/lib/redact.rb +316 -0
  6. data/views/index.erb +100 -0
  7. metadata +84 -0
checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: 9b4920a30f30707dd2e3dc66e8d42f49e7876aa5
+   data.tar.gz: 8cecbebad2961e3b3b6d80b209a7a9379397f344
+ SHA512:
+   metadata.gz: e7009e06e5775b20378c19241b29cccc5f9e551987fded43187b0e48d129b76abc48065939d0b67372ab43e5dc479b70ff563d0f95236a0b30832360de65dc9a
+   data.tar.gz: 460b2a1791d4f36455cd90cba768cf5e97babbb0ec5c6f857a45edd16da9e0ccefdb3085a69099087f212b638ca2228befc71e98e614dc45da2215f765d8b329
data/COPYING ADDED
@@ -0,0 +1,10 @@
+ Redact is copyright (c) 2014 William Morgan <wmorgan@masanjin.net>
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
+
+ * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
+ * Neither the name of Redact nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
data/README ADDED
@@ -0,0 +1,94 @@
+ Redact is a dependency-based work planner for Redis. It allows you to express
+ your work as a set of tasks that are dependent on other tasks, and to execute
+ runs across this graph, in such a way that all dependencies for a task are
+ guaranteed to be satisfied before the task itself is executed.
+
+ It does everything you'd expect from a production planner system:
+ * You can use an arbitrary number of worker processes.
+ * You can have arbitrary concurrent runs through the task graph.
+ * An application exception causes the task to be retried a fixed number of times.
+ * Tasks lost as part of an application crash or Ruby segfault are recoverable.
+
+ For debugging purposes, you can use Redact#enqueued_tasks, #in_progress_tasks,
+ and #done_tasks to iterate over all tasks, past and present. Note that the
+ output from these methods may change rapidly, and that calls are not guaranteed
+ to be consistent with each other. The gem provides a simple webserver,
+ redact-monitor, for visualizing the current state of the planner.
+
+ == Synopsis
+
+   ############ starter.rb #############
+   require 'redis'
+   require 'redact'
+
+   r = Redis.new
+   p = Redact.new r, namespace: "food/"
+
+   ## set up tasks and dependencies
+   p.add_task :eat, :cook, :lay_table
+   p.add_task :cook, :chop
+   p.add_task :chop, :wash
+   p.add_task :wash, :buy
+   p.add_task :lay_table, :clean_table
+
+   ## publish the graph
+   p.publish_graph!
+
+   ## schedule a run
+   p.do! :eat, "dinner"
+
+   ############ worker.rb #############
+   require 'redis'
+   require 'redact'
+
+   r = Redis.new
+   p = Redact.new r, namespace: "food/"
+
+   p.each do |task, run_id|
+     puts "#{task} #{run_id}"
+     sleep 1 # do work
+   end
+
+ This prints something like:
+
+   buy dinner
+   wash dinner
+   chop dinner
+   cook dinner
+   clean_table dinner
+   lay_table dinner
+   eat dinner
+
+ You can run multiple copies of worker.rb concurrently to see parallel magic.
+
+ == Using Redact
+
+ First, some terminology: each node in the graph is a "task". Tasks depend on
+ other tasks. To perform work, you select one of these tasks to be executed;
+ this is the "target". An execution of a target across the graph is a "run";
+ each run has a unique run_id, which you supply. In terms of Ruby objects,
+ run_ids are represented as strings, and tasks are always represented as
+ symbols.
+
+ Adding circular dependencies will result in an error. Reusing a previous
+ run_id results in undefined behavior. Otherwise, workers will receive tasks
+ in dependency order, and only those tasks which are necessary for the
+ execution of a target will be executed.
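The dependency ordering and cycle detection described above come from Ruby's stdlib TSort, which lib/redact.rb builds on internally. A minimal standalone sketch (DepGraph here is an illustrative stand-in for the gem's internal TSortHash, using the synopsis's dinner graph):

```ruby
require 'tsort'

# A Hash subclass whose keys are tasks and whose values are dependency lists.
class DepGraph < Hash
  include TSort
  alias tsort_each_node each_key
  def tsort_each_child node, &block
    (self[node] || []).each(&block)
  end
end

g = DepGraph.new
g[:eat]       = [:cook, :lay_table]
g[:cook]      = [:chop]
g[:chop]      = [:wash]
g[:wash]      = [:buy]
g[:lay_table] = [:clean_table]

# Dependencies sort before dependents, so :eat comes last.
order = g.tsort

# A circular dependency (here, :buy depending on :eat) is detected by TSort.
g[:buy] = [:eat]
cyclic = begin
  g.tsort
  false
rescue TSort::Cyclic
  true
end
```

This is the same mechanism add_task uses (via strongly_connected_components) to reject cycles at graph-construction time.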
+
+ A run may have key/value parameters. These parameters are supplied to every
+ task that is executed as part of that run. They cannot be modified during
+ the run.
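Because run parameters are stored and read back as JSON (see #do! and #each in lib/redact.rb), only JSON-expressible values survive, and symbol keys come back as strings. A quick sketch of the implication:

```ruby
require 'json'

# Run params go through a JSON round-trip in Redis, so symbol keys
# come back as string keys on the worker side.
params = { cuisine: "thai", guests: 4 }
restored = JSON.parse(params.to_json)
# restored == { "cuisine" => "thai", "guests" => 4 }
```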
+
+ You may only have one graph per namespace. However, this graph may have an
+ arbitrary number of tasks, and any task may be used as a target.
+
+ You must publish the graph (with #publish_graph!) before workers can receive
+ it. Worker processes reload the graph before processing every task, so you
+ can update the graph without restarting worker processes. (Of course,
+ workers must know how to execute any new tasks you add.)
+
+ == Error recovery
+
+ Workers that crash due to non-application exceptions (Ruby crashes, bugs in
+ Redact itself) will leave their tasks in the in-progress list. You can use
+ Redact#in_progress_tasks to iterate over these items; excessively old items
+ are probably the result of a process crash.
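A sketch of how stale-item detection might look. The sample hashes and the one-hour threshold are hypothetical, standing in for real #in_progress_tasks summaries (which carry a :ts timestamp, per lib/redact.rb):

```ruby
# Hypothetical sample data shaped like Redact#in_progress_tasks output.
STALE_AFTER = 60 * 60 # one hour; pick a threshold longer than your slowest task
tasks = [
  { task: "chop", run_id: "dinner", ts: Time.now - 30 },   # fresh
  { task: "wash", run_id: "lunch",  ts: Time.now - 7200 }, # likely a crashed worker
]
stale = tasks.select { |t| Time.now - t[:ts] > STALE_AFTER }
```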
+
+ == Bug reports
+
+ Please file bugs here: https://github.com/wmorgan/redact/issues
+ Please send comments to: wmorgan-redact-readme@masanjin.net.
data/bin/redact-monitor ADDED
@@ -0,0 +1,79 @@
+ #!/usr/bin/env ruby
+
+ require 'trollop'
+ require 'redis'
+ require 'redact'
+ require 'sinatra/base'
+ require 'ostruct'
+
+ class Server < Sinatra::Base
+   def initialize redact
+     @redact = redact
+     super() # bare `super` would pass +redact+ to Sinatra as a Rack app
+   end
+
+   get '/' do
+     enqueued = @redact.enqueued_tasks.map { |t| OpenStruct.new t }
+     in_progress = @redact.in_progress_tasks.map { |t| OpenStruct.new t }
+     done = @redact.done_tasks.map { |t| OpenStruct.new t }
+
+     (enqueued + in_progress + done).each do |x|
+       x.ago = pretty_time_diff((Time.now - x.ts).abs)
+       x.time_in_queue = x.time_waiting ? pretty_time_diff(x.time_waiting.to_i) : ""
+       x.time_in_progress = x.time_processing ? pretty_time_diff(x.time_processing.to_i) : ""
+       x.params = (x.params || {}).map { |k, v| "<b>#{k}</b>: #{v}<br/>" }.join
+       x.state_happiness = case x.state
+         when "done"; "success"
+         when "error"; "danger"
+         when "skipped"; "warning"
+         when "in_progress"; x.tries > 0 ? "warning" : ""
+       end
+     end
+
+     erb :index, locals: { enqueued: enqueued, in_progress: in_progress, done: done }
+   end
+
+   def pretty_time_diff diff
+     if diff < 60; sprintf("%ds", diff)
+     elsif diff < 60*60; sprintf("%dm", diff / 60)
+     elsif diff < 60*60*24; sprintf("%dh", diff / 60 / 60)
+     else sprintf("%dd", diff / 60 / 60 / 24)
+     end
+   end
+
+   get "/favicon.ico" do
+   end
+ end
+ Server.set :root, File.expand_path(File.join(File.dirname(__FILE__), ".."))
+
+ opts = Trollop::options do
+   opt :host, "Server host", default: "localhost"
+   opt :port, "Server port", default: 3000
+   opt :rack_handler, "Rack handler", default: "webrick"
+   opt :redis_url, "Redis url", default: "redis://localhost:6379/"
+   opt :namespace, "Redact namespace", type: :string
+ end
+
+ redis = Redis.new url: opts[:redis_url]
+ begin
+   redis.ping
+ rescue Redis::CannotConnectError => e
+   Trollop::die "Can't reach redis host: #{e.message}"
+ end
+
+ redact = Redact.new redis, namespace: opts[:namespace], blocking: false
+ server = Server.new redact
+
+ app = Rack::Builder.new do
+   use Rack::CommonLogger, $stdout
+   use Rack::ShowExceptions
+   use Rack::Lint
+   run server
+ end.to_app
+
+ handler = Rack::Handler.get opts[:rack_handler].downcase
+
+ ## OK HERE WE GO!!!
+ Signal.trap('INT') { handler.shutdown }
+ handler.run app, :Port => opts[:port], :Host => opts[:host]
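For reference, the pretty_time_diff helper above buckets ages into seconds, minutes, hours, and days; a standalone copy with sample values:

```ruby
# Standalone copy of redact-monitor's pretty_time_diff helper.
# Integer division means values truncate toward the coarser unit.
def pretty_time_diff diff
  if diff < 60; sprintf("%ds", diff)
  elsif diff < 60*60; sprintf("%dm", diff / 60)
  elsif diff < 60*60*24; sprintf("%dh", diff / 60 / 60)
  else sprintf("%dd", diff / 60 / 60 / 24)
  end
end

pretty_time_diff 45       # => "45s"
pretty_time_diff 90       # => "1m"
pretty_time_diff 7200     # => "2h"
pretty_time_diff 200_000  # => "2d"
```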
data/lib/redact.rb ADDED
@@ -0,0 +1,316 @@
+ require 'tsort'
+ require 'socket'
+ require 'json'
+ require 'redis'
+
+ ## A distributed, dependency-aware job scheduler for Redis. Like distributed
+ ## make---you define the dependencies between different parts of your job, and
+ ## Redact handles the scheduling.
+ class Redact
+   include Enumerable
+
+   class TSortHash < Hash
+     include TSort
+     alias tsort_each_node each_key
+     def tsort_each_child node, &block
+       deps = self[node] || []
+       deps.each(&block)
+     end
+   end
+
+   ## Options:
+   ## * +namespace+: prefix for Redis keys, e.g. "redact/"
+   def initialize redis, opts={}
+     @namespace = opts[:namespace]
+     @redis = redis
+     @dag = TSortHash.new
+
+     @queue = [@namespace, "q"].join
+     @processing_list = [@namespace, "processing"].join
+     @done_list = [@namespace, "done"].join
+     @dag_key = [@namespace, "dag"].join
+     @metadata_prefix = [@namespace, "metadata"].join
+     @params_key = [@namespace, "params"].join
+   end
+
+   ## Drop all data and reset the planner.
+   def reset!
+     keys = [@queue, @processing_list, @done_list, @dag_key, @params_key]
+     keys += @redis.keys("#@metadata_prefix:*") # metadata keys are colon-joined; see metadata_key_for
+     keys.each { |k| @redis.del k }
+   end
+
+   class CyclicDependencyError < StandardError; end
+
+   ## Add a task with dependencies. +what+ is the name of a task (either a
+   ## symbol or a string). +deps+ are any tasks that are dependencies of
+   ## +what+. +deps+ may refer to tasks not already added by #add_task; these
+   ## will be automatically added without dependencies.
+   ##
+   ## Raises a CyclicDependencyError exception if adding these dependencies
+   ## would result in a cyclic dependency.
+   def add_task what, *deps
+     deps = deps.flatten # be nice and allow arrays to be passed in
+     raise ArgumentError, "expecting dependencies to be zero or more task ids" unless deps.all? { |x| x.is_a?(Symbol) }
+     @dag[what] = deps
+
+     @dag.strongly_connected_components.each do |x|
+       raise CyclicDependencyError, "cyclic dependency #{x.inspect}" if x.size != 1
+     end
+   end
+
+   ## Publish the dependency graph. Must be called at least once before #do!.
+   def publish_graph!
+     @redis.set @dag_key, @dag.to_json
+   end
+
+   ## Schedules +target+ for completion among worker processes listening with
+   ## #each. Returns immediately.
+   ##
+   ## Targets scheduled with #do! have their tasks dispatched in generally FIFO
+   ## order; i.e., work for earlier targets will generally be scheduled before
+   ## work for later targets. Of course, the actual completion order of targets
+   ## depends on the completion order of dependent tasks, the time required for
+   ## these tasks, etc.
+   ##
+   ## You must call #publish_graph! at least once before calling this.
+   ##
+   ## +run_id+ is the unique identifier for this run. Don't reuse these.
+   ##
+   ## +run_params+ are parameters that will be passed to all tasks in this run.
+   ## This value will go through JSON round-trips, so it should only contain
+   ## types that are expressible in JSON.
+   def do! target, run_id, run_params=nil
+     raise ArgumentError, "you haven't called publish_graph!" unless @redis.exists(@dag_key)
+
+     dag = load_dag
+     target = target.to_s
+     raise ArgumentError, "#{target.inspect} is not a recognized task" unless dag.member?(target)
+
+     @redis.hset @params_key, run_id, run_params.to_json if run_params
+
+     dag.each_strongly_connected_component_from(target) do |tasks|
+       task = tasks.first # all single-element arrays by this point
+       next unless dag[task].nil? || dag[task].empty? # only push tasks without dependencies
+       enqueue_task! task, target, run_id, true
+     end
+   end
+
+   ## Returns the total number of outstanding tasks in the queue. Note that
+   ## this counts only tasks whose dependencies are satisfied (i.e. only those
+   ## that are currently ready to be performed). Queue size may fluctuate in
+   ## both directions as targets are built.
+   def size; @redis.llen @queue end
+
+   ## Returns the total number of outstanding tasks currently being processed.
+   def processing_list_size; @redis.llen @processing_list end
+
+   ## Returns the total number of completed tasks we have information about.
+   def done_list_size; @redis.llen @done_list end
+
+   ## Returns information about the set of tasks currently in the queue. The
+   ## return value is an array of hashes, each of which includes, among other
+   ## things, these keys:
+   ##   +task+: the name of the task
+   ##   +run_id+: the run_id of the task
+   ##   +target+: the target of the task
+   ##   +ts+: the timestamp of queue insertion
+   def enqueued_tasks start_idx=0, end_idx=-1
+     @redis.lrange(@queue, start_idx, end_idx).map { |t| task_summary_for t }
+   end
+
+   ## Returns information about the set of tasks currently being processed by
+   ## worker processes. The return value is an array of hashes with the keys
+   ## from #enqueued_tasks, plus:
+   ##   +worker_id+: the worker_id of the worker processing this task
+   ##   +time_waiting+: the approximate number of seconds this task was enqueued for
+   ##   +ts+: the timestamp at the start of processing
+   def in_progress_tasks start_idx=0, end_idx=-1
+     @redis.lrange(@processing_list, start_idx, end_idx).map { |t| task_summary_for t }
+   end
+
+   ## Returns information about the set of tasks that have been completed. The
+   ## return value is an array of hashes with the keys from #in_progress_tasks,
+   ## plus:
+   ##   +ts+: the timestamp at the end of processing
+   ##   +state+: one of "done", "skipped", or "error"
+   ##   +error+, +backtrace+: debugging information for tasks in state "error"
+   ##   +time_processing+: the approximate number of seconds this task was processed for
+   def done_tasks start_idx=0, end_idx=-1
+     @redis.lrange(@done_list, start_idx, end_idx).map { |t| task_summary_for t }
+   end
+
+   ## Yields tasks from the queue that are ready for execution. Callers should
+   ## then perform the work for those tasks. Any exception thrown will result
+   ## in the task being reinserted into the queue and retried at a later point
+   ## (possibly by another process), unless the retry maximum for that task has
+   ## been exceeded.
+   ##
+   ## This method reloads the task graph as necessary, so live updates of the
+   ## graph are possible without restarting worker processes.
+   ##
+   ## +opts+ are:
+   ## * +blocking+: if true, #each will block until items are available (and will never return)
+   ## * +retries+: how many times an individual job should be retried before resulting in an error state. Default is 2 (so 3 tries total).
+   ## * +worker_id+: the id of this worker process, for debugging. (If nil, will use a reasonably intelligent default.)
+   def each opts={}
+     worker_id = opts[:worker_id] || [Socket.gethostname, $$, $0].join("-")
+     retries = opts[:retries] || 2
+     blocking = opts[:blocking]
+
+     while true
+       ## get the token of the next task to perform
+       meth = blocking ? :brpoplpush : :rpoplpush
+       token = @redis.send meth, @queue, @processing_list
+       break unless token # no more tokens and we're in non-blocking mode
+
+       ## decompose the token
+       task, target, run_id, insertion_time = parse_token token
+
+       ## record that we've seen this
+       set_metadata! task, run_id, worker_id: worker_id, time_waiting: (Time.now - insertion_time).to_i
+
+       ## load the target state. abort if we don't need to do anything
+       target_state = get_state target, run_id
+       if (target_state == :error) || (target_state == :done)
+         #log "skipping #{task}##{run_id} because #{target}##{run_id} is in state #{target_state}"
+         set_metadata! task, run_id, state: :skipped
+         commit! token
+         next
+       end
+
+       ## get any run params
+       params = @redis.hget @params_key, run_id
+       params = JSON.parse(params) if params
+
+       ## ok, let's finally try to perform the task
+       begin
+         #log "performing #{task}##{run_id}"
+
+         ## the task is now in progress
+         set_metadata! task, run_id, state: :in_progress
+
+         ## do it
+         startt = Time.now
+         yield task.to_sym, run_id, params
+         elapsed = Time.now - startt
+
+         ## update total running time
+         total_time_processing = elapsed + (get_metadata(task, run_id)[:time_processing] || 0).to_f
+         set_metadata! task, run_id, time_processing: total_time_processing
+
+         set_metadata! task, run_id, state: :done
+         enqueue_next_tasks! task, target, run_id
+         commit! token
+       rescue Exception => e
+         num_tries = inc_num_tries! task, run_id
+         if num_tries > retries # we fail
+           set_metadata! target, run_id, state: :error
+           set_metadata! task, run_id, state: :error, error: "(#{e.class}) #{e.message}", backtrace: e.backtrace
+           commit! token
+         else # we'll retry
+           uncommit! token
+         end
+
+         raise
+       end
+     end
+   end
+
+   private
+
+   def load_dag
+     dag = JSON.parse @redis.get(@dag_key)
+     dag.inject(TSortHash.new) { |h, (k, v)| h[k] = v; h } # rebuild as a TSortHash
+   end
+
+   ## tasks are popped from the right, so at_the_end means lpush, otherwise
+   ## rpush.
+   def enqueue_task! task, target, run_id, at_the_end=false
+     set_metadata! task, run_id, state: :in_queue
+     token = make_token task, target, run_id
+     at_the_end ? @redis.lpush(@queue, token) : @redis.rpush(@queue, token)
+   end
+
+   ## move from processing list to done list
+   def commit! token
+     @redis.multi do
+       @redis.lrem @processing_list, 1, token
+       @redis.lpush @done_list, token
+     end
+   end
+
+   ## move from processing list back to queue
+   def uncommit! token
+     @redis.multi do # rewind the rpoplpush
+       @redis.rpush @queue, token
+       @redis.lrem @processing_list, 1, token
+     end
+   end
+
+   ## build a summary hash for a token, for debugging
+   def task_summary_for token
+     task, target, run_id, ts = parse_token token
+     params = @redis.hget @params_key, run_id
+     params = JSON.parse(params) if params
+     get_metadata(task, run_id).merge task: task, run_id: run_id, target: target, insertion_time: ts, params: params
+   end
+
+   ## enqueue all tasks for +target+ that are unblocked by virtue of having
+   ## completed +task+
+   def enqueue_next_tasks! task, target, run_id
+     ## gimme dag
+     dag = load_dag
+
+     ## find all tasks that we block
+     blocked = dag.inject([]) { |a, (k, v)| a << k if v.member?(task.to_s); a }
+
+     ## find all tasks in the path to target
+     in_path_to_target = [] # sigh... ancient interfaces
+     dag.each_strongly_connected_component_from(target) { |x| in_path_to_target << x.first }
+
+     (blocked & in_path_to_target).each do |btask|
+       deps = dag[btask]
+       dep_states = deps.map { |t| get_state t, run_id }
+       if dep_states.all? { |s| s == :done } # we've unblocked it!
+         #log "unblocked task #{btask}##{run_id}"
+         enqueue_task! btask, target, run_id
+       end
+     end
+   end
+
+   def make_token task, target, run_id
+     [task, target, run_id, Time.now.to_i].to_json
+   end
+
+   def parse_token token
+     task, target, run_id, ts = JSON.parse token
+     [task, target, run_id, Time.at(ts)]
+   end
+
+   def metadata_key_for task, run_id; [@metadata_prefix, task, run_id].join(":") end
+
+   def get_state task, run_id
+     key = metadata_key_for task, run_id
+     (@redis.hget(key, "state") || "unstarted").to_sym
+   end
+
+   def inc_num_tries! task, run_id
+     key = metadata_key_for task, run_id
+     @redis.hincrby key, "tries", 1
+   end
+
+   def set_metadata! task, run_id, metadata
+     metadata.each { |k, v| metadata[k] = v.to_json unless v.is_a?(String) || v.is_a?(Symbol) }
+     metadata[:ts] = Time.now.to_i
+     key = metadata_key_for task, run_id
+     @redis.mapped_hmset key, metadata
+   end
+
+   def get_metadata task, run_id
+     key = metadata_key_for task, run_id
+     md = @redis.hgetall(key).inject({}) { |h, (k, v)| h[k.to_sym] = v; h }
+     md[:ts] = Time.at md[:ts].to_i
+     md[:tries] = md[:tries].to_i
+     md
+   end
+ end
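The queue tokens used throughout lib/redact.rb are plain JSON arrays. A standalone sketch of the make_token/parse_token round-trip (note that symbols become strings through JSON, which is why #each yields task.to_sym):

```ruby
require 'json'

# Standalone copy of the token helpers: a token is a JSON array of
# [task, target, run_id, unix timestamp].
def make_token task, target, run_id
  [task, target, run_id, Time.now.to_i].to_json
end

def parse_token token
  task, target, run_id, ts = JSON.parse token
  [task, target, run_id, Time.at(ts)]
end

task, target, run_id, ts = parse_token(make_token(:wash, :eat, "dinner"))
# task == "wash" (a String, no longer a Symbol); ts is a Time
```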
data/views/index.erb ADDED
@@ -0,0 +1,100 @@
+ <html>
+ <head>
+   <title>Redact status</title>
+   <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap.min.css">
+   <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap-theme.min.css">
+   <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.2.0/js/bootstrap.min.js"></script>
+ </head>
+
+ <body>
+ <h1>In Progress (<%= in_progress.size %>)</h1>
+
+ <div class="table-responsive">
+   <table class="table table-striped table-hover table-condensed">
+     <thead>
+       <tr>
+         <th>Task / Target / Worker</th>
+         <th>Run id</th>
+         <th>Age</th>
+         <th>State</th>
+         <th>Time in queue</th>
+         <th>Params</th>
+       </tr>
+     </thead>
+     <tbody>
+       <% in_progress.each do |t| %>
+         <tr class="<%= t.state_happiness %>">
+           <td><%= t.task %><br/><%= t.target %><br/><%= t.worker_id %></td>
+           <td><%= t.run_id %></td>
+           <td><%= t.ago %></td>
+           <td><%= t.state %></td>
+           <td><%= t.time_in_queue %></td>
+           <td><%= t.params %></td>
+         </tr>
+       <% end %>
+     </tbody>
+   </table>
+ </div>
+
+ <h1>Enqueued (<%= enqueued.size %>)</h1>
40
+
41
+ <div class="table-responsive">
42
+ <table class="table table-striped table-hover table-condensed">
43
+ <thead>
44
+ <tr>
45
+ <th>Task / Target</th>
46
+ <th>Run id</th>
47
+ <th>Age</th>
48
+ <th>State</th>
49
+ <th>Params</th>
50
+ <th>Tries</th>
51
+ </tr>
52
+ </thead>
53
+ <tbody>
54
+ <% enqueued.reverse.each do |t| %>
55
+ <tr class="<%= t.state_happiness %>">
56
+ <td><%= t.task %><br/><%= t.target %></td>
57
+ <td><%= t.run_id %></td>
58
+ <td><%= t.ago %></td>
59
+ <td><%= t.state %></td>
60
+ <td><%= t.params %></td>
61
+ <td><%= t.tries %></td>
62
+ </tr>
63
+ <% end %>
64
+ </tbody>
65
+ </table>
66
+ </div>
67
+
68
+ <h1>Recently completed (<%= done.size %>)</h1>
69
+
70
+ <div class="table-responsive">
71
+ <table class="table table-striped table-hover table-condensed">
72
+ <thead>
73
+ <tr>
74
+ <th>Task / Target / Worker</th>
75
+ <th>Run id</th>
76
+ <th>Age</th>
77
+ <th>State</th>
78
+ <th>Time in queue</th>
79
+ <th>Time in progress</th>
80
+ <th>Params</th>
81
+ </tr>
82
+ </thead>
83
+ <tbody>
84
+ <% done.each do |t| %>
85
+ <tr class="<%= t.state_happiness %>">
86
+ <td><%= t.task %><br/><%= t.target %><br/><%= t.worker_id %></td>
87
+ <td><%= t.run_id %></td>
88
+ <td><%= t.ago %></td>
89
+ <td><%= t.state %></td>
90
+ <td><%= t.time_in_queue %></td>
91
+ <td><%= t.time_in_progress %></td>
92
+ <td><%= t.params %></td>
93
+ </tr>
94
+ <% end %>
95
+ </tbody>
96
+ </table>
97
+ </div>
98
+
99
+ </body>
100
+ </html>
metadata ADDED
@@ -0,0 +1,84 @@
+ --- !ruby/object:Gem::Specification
+ name: redact
+ version: !ruby/object:Gem::Version
+   version: '0.1'
+ platform: ruby
+ authors:
+ - William Morgan
+ autorequire:
+ bindir: bin
+ cert_chain: []
+ date: 2014-10-06 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: trollop
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ~>
+       - !ruby/object:Gem::Version
+         version: '2.0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ~>
+       - !ruby/object:Gem::Version
+         version: '2.0'
+ - !ruby/object:Gem::Dependency
+   name: sinatra
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ~>
+       - !ruby/object:Gem::Version
+         version: 1.4.5
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ~>
+       - !ruby/object:Gem::Version
+         version: 1.4.5
+ description: |
+   Redact is a dependency-based work planner for Redis. It allows you to express
+   your work as a set of tasks that are dependent on other tasks, and to execute
+   runs across this graph, in such a way that all dependencies for a task are
+   guaranteed to be satisfied before the task itself is executed.
+ email: wmorgan-redact@masanjin.net
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - README
+ - COPYING
+ - lib/redact.rb
+ - bin/redact-monitor
+ - views/index.erb
+ homepage: http://github.com/wmorgan/redact
+ licenses:
+ - COPYING
+ metadata: {}
+ post_install_message:
+ rdoc_options:
+ - -c
+ - utf8
+ - --main
+ - README
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - '>='
+     - !ruby/object:Gem::Version
+       version: '0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - '>='
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubyforge_project:
+ rubygems_version: 2.0.14
+ signing_key:
+ specification_version: 4
+ summary: A dependency-based work planner for Redis.
+ test_files: []