job_reactor 0.5.0.beta2
- data/README.markdown +88 -0
- data/lib/job_reactor/distributor/client.rb +40 -0
- data/lib/job_reactor/distributor/server.rb +31 -0
- data/lib/job_reactor/distributor.rb +92 -0
- data/lib/job_reactor/job_reactor/config.rb +31 -0
- data/lib/job_reactor/job_reactor/exceptions.rb +23 -0
- data/lib/job_reactor/job_reactor/job_parser.rb +39 -0
- data/lib/job_reactor/job_reactor/storages.rb +25 -0
- data/lib/job_reactor/job_reactor.rb +208 -0
- data/lib/job_reactor/logger.rb +57 -0
- data/lib/job_reactor/node/client.rb +54 -0
- data/lib/job_reactor/node/server.rb +38 -0
- data/lib/job_reactor/node.rb +222 -0
- data/lib/job_reactor/storages/memory_storage.rb +39 -0
- data/lib/job_reactor/storages/redis_storage.rb +76 -0
- data/lib/job_reactor.rb +44 -0
- metadata +85 -0
data/README.markdown
ADDED
@@ -0,0 +1,88 @@
JobReactor
==========

JobReactor is a library for creating and processing background jobs.
It is an asynchronous client-server distributed system based on [EventMachine][0].

Quick start
===========
Coming soon, see examples.

Main features
=============
1. Client-server architecture
-----------------------------
You can run as many distributors and working nodes as you need. You are free to choose the strategy.
If you have many background tasks from each part of your application, you can use, for example, 3 distributors (one in each process) and 10 working nodes.
If you don't have many jobs, you can leave only one node connected to 3 distributors.

2. High scalability
-------------------
Nodes and distributors are connected via TCP, so you can run them on any machine you can connect to.
Nodes may use different storages or the same one, so you can store vitally important jobs in a relational database and
simple insignificant jobs in memory.
And more: your nodes may create jobs for other nodes and communicate with each other. See page [advanced usage].

3. Full job control
-------------------
You can add callbacks and errbacks to the job, which will be called on the node.
You can also add a 'success feedback' and an 'error feedback', which will be called in your main application.
When the job is done on a remote node, your application will receive the result inside the corresponding 'feedback'.
If an error occurs in the job, you can see it in the errbacks and do what you want.
Inside the job you can get information such as when it started, which node executed it, etc.
You can also add arguments to the job on-the-fly, which will be used in the subsequent callbacks and errbacks. See [advanced usage].

4. Reliability
--------------
You can run additional nodes and stop any nodes on-the-fly.
The distributor is smart enough to send jobs to another node if one is stopped or crashed.
If no nodes are connected to the distributor, it will keep jobs in memory and send them when nodes start.
If a node is stopped or crashed, it will retry stored jobs after it starts.

5. EventMachine available
-------------------------
Remember, your jobs will run inside the EventMachine reactor! You can easily use the power of the asynchronous nature of EventMachine.
Use asynchronous [http requests], [websockets], [etc.], [etc.], and [etc]. See page [advanced usage].

6. Deferred and periodic jobs
-----------------------------
You can use deferred jobs, which will run 'after' some time or 'run_at' a given time.
You can create periodic jobs, which will run every given time period, and cancel them on condition.

7. No polling
-------------
There is no storage polling. Absolutely. When a node receives a job (no matter whether instant, periodic, or deferred), an EventMachine timer is created
which will start the job at the right time.

8. Job retrying
---------------
If a job fails, it will be retried. You can choose a global retrying strategy or manage separate jobs.

9. Predefined nodes
-------------------
You can specify a node for jobs, so they will be executed in that node's environment. And you can specify which node is forbidden for a job.
If no nodes are specified, the distributor will try to send the job to the first free node.

10. Node based priorities
-------------------------
There are no priorities like in Delayed::Job or Stalker. But there are flexible node-based priorities.
You can specify the node which should execute the job. You can reserve several nodes for high priority jobs.


The main parts of JobReactor are:
---------------------------------
JobReactor module for creating jobs.
Distributor module for 'distributing' jobs between working nodes.
Node object for job processing.
#TODO

How it works
------------
#TODO

Links:
------
[0]: http://rubyeventmachine.com/
data/lib/job_reactor/distributor/client.rb
ADDED
@@ -0,0 +1,40 @@
# Distributor-side connection to a working node.
# The connection is locked while data is in flight and unlocked
# when the node replies 'ok'.
module JobReactor
  module Distributor
    class Client < EM::Connection
      def initialize(name)
        @name = name
      end

      def name
        @name
      end

      def lock
        @lock = true
      end

      def unlock
        @lock = false
      end

      def locked?
        @lock
      end

      def available?
        !locked?
      end

      def receive_data(data)
        self.unlock if data == 'ok'
      end

      def unbind
        JR::Logger.log "#{@name} disconnected"
        close_connection
        JobReactor::Distributor.connections.delete(self)
      end
    end
  end
end
data/lib/job_reactor/distributor/server.rb
ADDED
@@ -0,0 +1,31 @@
# Accepts node handshakes and success/error feedback messages.
module JobReactor
  module Distributor
    class Server < EM::Connection

      def post_init
        JR::Logger.log 'Begin node handshake'
      end

      def receive_data(data)
        data = Marshal.load(data)
        if data[:node_info]
          node_info = data[:node_info]
          JR::Logger.log "Receive data from node: #{data[:node_info]}"
          JobReactor::Distributor.nodes << node_info
          connection = EM.connect(*node_info[:server], Client, node_info[:name])
          JobReactor::Distributor.connections << connection
        elsif data[:success]
          JR.run_succ_feedback(data[:success])
          send_data('ok')
        elsif data[:error]
          JR.run_err_feedback(data[:error])
          send_data('ok')
        end

        data
      end

    end
  end
end
data/lib/job_reactor/distributor.rb
ADDED
@@ -0,0 +1,92 @@
require 'job_reactor/distributor/client'
require 'job_reactor/distributor/server'

module JobReactor
  module Distributor
    extend self

    def host
      @@host
    end

    def port
      @@port
    end

    def nodes
      @@nodes ||= []
    end

    # Contains the connections pool
    def connections
      @@connections ||= []
    end

    # Starts the distributor on the given host and port
    #
    def start(host, port)
      @@host = host
      @@port = port
      EM.start_server(host, port, JobReactor::Distributor::Server, [host, port])
      JR::Logger.log "Distributor listens #{host}:#{port}"
      #EM.add_periodic_timer(5) do
      #  JR::Logger.log('Available nodes: ' << JR::Distributor.connections.map(&:name).join(' '))
      #end
    end

    # Tries to find an available node connection.
    # If there is one, the distributor sends the marshalled data to it.
    # If get_connection returns nil, the distributor tries again on the next tick.
    #
    def send_data_to_node(hash)
      connection = get_connection(hash)
      if connection
        data = Marshal.dump(hash)
        connection.send_data(data)
        connection.lock
      else
        EM.next_tick do
          send_data_to_node(hash)
        end
      end
    end

    private

    # Looks for an available connection.
    # If the job hash specifies a node, checks whether that node is available.
    # If it is not available, returns nil when :always_use_specified_node is true,
    # otherwise falls back to any other free node.
    # If the job has no specified node, returns any available connection or nil
    # (in which case send_data_to_node retries).
    #
    def get_connection(hash)
      check_node_pool
      if hash['node']
        node_connection = connections.select { |con| con.name == hash['node'] && con.name != hash['not_node'] }.first
        JR::Logger.log("WARNING: Node #{hash['node']} is not available") unless node_connection
        if node_connection && node_connection.available?
          node_connection
        else
          JR.config[:always_use_specified_node] ? nil : connections.select { |con| con.available? && con.name != hash['not_node'] }.first
        end
      else
        connections.select { |con| con.available? && con.name != hash['not_node'] }.first
      end
    end

    # Checks the node pool. If it is empty, the distributor raises
    # JobReactor::NodePoolIsEmpty after JR.config[:when_node_pull_is_empty_will_raise_exception_after] seconds.
    # Otherwise the distributor would only fail when the number of timers reaches
    # EM.get_max_timers (100000 by default on most systems).
    # Failing earlier may be useful for error detection.
    #
    def check_node_pool
      if connections.size == 0
        JR::Logger.log 'Warning: Node pool is empty'
        EM::Timer.new(JR.config[:when_node_pull_is_empty_will_raise_exception_after]) do
          if connections.size == 0
            raise JobReactor::NodePoolIsEmpty
          end
        end
      end
    end

  end
end
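The selection rules in `get_connection` can be checked in isolation with plain structs standing in for EM connections. This is a sketch of the unspecified-node branch only; `Conn` and `pick` are illustrative names, not part of the gem:

```ruby
# Stand-ins for distributor-side node connections.
Conn = Struct.new(:name, :available) do
  def available?
    available
  end
end

connections = [
  Conn.new('node_1', false), # busy (locked)
  Conn.new('node_2', true),
  Conn.new('node_3', true)
]

# The unspecified-node branch: first available connection
# whose name is not the job's 'not_node'.
def pick(connections, not_node)
  connections.select { |c| c.available? && c.name != not_node }.first
end

p pick(connections, 'node_2').name # → node_3
p pick(connections, nil).name      # → node_2
```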
data/lib/job_reactor/job_reactor/config.rb
ADDED
@@ -0,0 +1,31 @@
# Names are informative
# TODO
module JobReactor
  def self.config
    @@config ||= {}
  end
end

JR = JobReactor

JR.config[:job_directory] = 'reactor_jobs'
JR.config[:max_attempt] = 10
JR.config[:retry_multiplier] = 5
JR.config[:retry_jobs_at_start] = true
JR.config[:merge_job_itself_to_args] = true
JR.config[:log_job_processing] = true
JR.config[:always_use_specified_node] = false # will send the job to another node if the specified node is not available
JR.config[:remove_done_jobs] = true
JR.config[:remove_cancelled_jobs] = true
JR.config[:when_node_pull_is_empty_will_raise_exception_after] = 3600

JR.config[:redis_host] = 'localhost'
JR.config[:redis_port] = 6379

#TODO next releases with rails support
#JR.config[:active_record_adapter] = 'mysql2'
#JR.config[:active_record_database] = 'em'
#JR.config[:active_record_user] = ''
#JR.config[:active_record_password] = ''
#JR.config[:active_record_table_name] = 'reactor_jobs'
#JR.config[:use_custom_active_record_connection] = true
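Since `JR.config` is a plain hash, an application can override any of the defaults above before starting a node or distributor. A minimal sketch of such an override (the file path and values are assumptions, not part of the gem):

```ruby
# config/initializers/job_reactor.rb (hypothetical path)
require 'job_reactor'

JR.config[:job_directory] = 'app/reactor_jobs' # where job definitions live
JR.config[:max_attempt]   = 5                  # fewer retries than the default 10
JR.config[:redis_host]    = 'redis.internal'   # only used by the redis storage
JR.config[:always_use_specified_node] = true   # never fall back to another node
```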
data/lib/job_reactor/job_reactor/exceptions.rb
ADDED
@@ -0,0 +1,23 @@
module JobReactor

  # The purpose of the exceptions is in their names
  # TODO

  class NoJobsDefined < RuntimeError
  end
  class NoSuchJob < RuntimeError
  end
  class CancelJob < RuntimeError
  end
  class NodePoolIsEmpty < RuntimeError
  end
  class NoSuchNode < RuntimeError
  end
  class LostConnection < RuntimeError
  end
  class SchedulePeriodicJob < RuntimeError
  end

end
data/lib/job_reactor/job_reactor/job_parser.rb
ADDED
@@ -0,0 +1,39 @@
# Parses jobs defined in the JR.config[:job_directory].
# Builds a hash of the following structure:
# {"job_1" => {
#     job: Proc,
#     callbacks: [
#       ["first_callback", Proc],
#       ["second_callback", Proc]
#     ],
#     errbacks: [
#       ["first_errback", Proc],
#       ["second_errback", Proc]
#     ]
#   },
#  "job_2" => {
#     job: Proc,
#     callbacks: [[]],
#     errbacks: [[]]
#   }
# }
# Names of callbacks and errbacks are optional and may be used just for description.

module JobReactor
  extend self

  def job(name, &block)
    JR.jobs.merge!(name => { job: block })
  end

  def job_callback(name, callback_name = 'noname', &block)
    JR.jobs[name].merge!(callbacks: []) unless JR.jobs[name][:callbacks]
    JR.jobs[name][:callbacks] << [callback_name, block]
  end

  def job_errback(name, errback_name = 'noname', &block)
    JR.jobs[name].merge!(errbacks: []) unless JR.jobs[name][:errbacks]
    JR.jobs[name][:errbacks] << [errback_name, block]
  end

end
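A job definition file in `JR.config[:job_directory]` uses exactly this DSL. The sketch below inlines a tiny stand-in for the DSL so it runs standalone; in a real app you would just `require 'job_reactor'` and drop the `JR.job` calls into a file under the job directory. The job name and args are illustrative:

```ruby
# Inline stand-in for the job_parser DSL (the real gem stores jobs
# in JR.jobs the same way; this stub exists only so the example runs alone).
module JobReactor
  extend self

  def self.jobs
    @jobs ||= {}
  end

  def job(name, &block)
    jobs.merge!(name => { job: block })
  end

  def job_callback(name, callback_name = 'noname', &block)
    jobs[name][:callbacks] ||= []
    jobs[name][:callbacks] << [callback_name, block]
  end
end

JR = JobReactor

# What a file like reactor_jobs/email_jobs.rb could contain:
JR.job 'send_email' do |args|
  "sent to #{args[:to]}"
end

JR.job_callback 'send_email', 'log_delivery' do |args|
  "logged #{args[:to]}"
end

puts JR.jobs['send_email'][:job].call(to: 'bob@example.com')
# → sent to bob@example.com
```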
data/lib/job_reactor/job_reactor/storages.rb
ADDED
@@ -0,0 +1,25 @@
# Storages implement simple functionality.
# There are four methods which should be implemented:
# save(hash),
# load(hash),
# destroy(hash),
# jobs_for(name).
# The last one is used when a node is restarting, to retry saved jobs.
# A storage need not be thread safe, because each node manages its own jobs and doesn't know anything about the others.

# Defines storages for lazy loading

# TODO 'NEXT RELEASE'
# require 'active_record'
# class JobReactor::ActiveRecordStorage < ::ActiveRecord::Base; end

module JobReactor::MemoryStorage; end
module JobReactor::RedisStorage; end

module JobReactor
  STORAGES = {
    #'active_record_storage' => JobReactor::ActiveRecordStorage,
    'memory_storage' => JobReactor::MemoryStorage,
    'redis_storage' => JobReactor::RedisStorage
  }
end
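The four-method contract is small enough to sketch end to end. This standalone example (essentially what memory_storage.rb does, with a hypothetical `TinyStorage` name) also shows the callback-passing style the node relies on, e.g. `storage.save(hash) { |h| schedule(h) }`:

```ruby
# A minimal storage satisfying the save/load/destroy/jobs_for contract.
# Stand-alone sketch; the real gem registers storages in JobReactor::STORAGES.
module TinyStorage
  extend self

  def store
    @store ||= {}
  end

  # Assigns an id on first save, then yields the saved hash.
  def save(hash, &block)
    hash['id'] ||= Time.now.to_f.to_s
    store[hash['id']] = hash
    block.call(hash) if block
  end

  # Yields a copy, so callers can't mutate the stored hash by accident.
  def load(hash, &block)
    block.call(store[hash['id']].dup) if block
  end

  def destroy(hash)
    store.delete(hash['id'])
  end

  def jobs_for(name, &block) # nothing survives a restart in memory
    nil
  end
end

job = { 'name' => 'demo', 'status' => 'new' }
TinyStorage.save(job) { |h| puts h['status'] } # → new
TinyStorage.load(job) { |h| h['status'] = 'copy only' }
TinyStorage.destroy(job)
puts TinyStorage.store.size # → 0
```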
data/lib/job_reactor/job_reactor.rb
ADDED
@@ -0,0 +1,208 @@
# The core.
# Gives an API to parse jobs, send them to a node using the distributor, and make them for the node.

require 'job_reactor/job_reactor/config'
require 'job_reactor/job_reactor/job_parser'
require 'job_reactor/job_reactor/exceptions'
require 'job_reactor/job_reactor/storages'

module JobReactor

  # Yes, we monkeypatched a Ruby core class.
  # Now all hashes have EM::Deferrable callbacks and errbacks.
  # It is just for simplicity.
  # It's nicer to use 'job = {}' instead of 'job = JobHash.new'.
  # We are ready to discuss it and change it.
  #
  Hash.send(:include, EM::Deferrable)

  class << self

    # Accessor to jobs.
    #
    def jobs
      @@jobs ||= { }
    end

    # Ready flag.
    # @@ready is true when the block is called inside the EM reactor.
    #
    def ready!
      @@ready = true
    end

    def ready?
      (@@ready ||= false) && EM.reactor_running?
    end

    # Requires the storage.
    # Creates and starts a node.
    #
    def start_node(opts)
      parse_jobs
      require_storage!(opts)
      node = Node.new(opts)
      node.start
    end

    def start_distributor(host, port)
      JR::Distributor.start(host, port)
    end

    def succ_feedbacks
      @@callbacks ||= { }
    end

    def err_feedbacks
      @@errbacks ||= { }
    end

    # This is the only method the user calls inside the application (except the start-up methods, of course).
    # You have to specify the job_name and, optionally, its args and opts.
    # The method sets the initial arguments and sends the job to the distributor, which will send it to a node.
    # Options are :after and :run_at (for deferred jobs), :period (for periodic jobs), and :node to specify the preferred node to launch the job.
    # Use the :always_use_specified_node option to be sure that the job will be launched on the specified node.
    # The job itself is a hash with the following keys:
    # name, args, make_after, last_error, run_at, failed_at, attempt, period, node, not_node, status, distributor, on_success, on_error.
    # TODO examples.
    #
    def enqueue(name, args = { }, opts = { }, success_proc = nil, error_proc = nil)
      hash = { 'name' => name, 'args' => args, 'attempt' => 0, 'status' => 'new' }

      hash.merge!('period' => opts[:period]) if opts[:period]
      opts[:after] = (opts[:run_at] - Time.now) if opts[:run_at]
      hash.merge!('make_after' => (opts[:after] || 0))

      hash.merge!('node' => opts[:node]) if opts[:node]
      hash.merge!('not_node' => opts[:not_node]) if opts[:not_node]

      hash.merge!('distributor' => "#{JR::Distributor.host}:#{JR::Distributor.port}")

      add_succ_feedbacks!(hash, success_proc) if success_proc
      add_err_feedbacks!(hash, error_proc) if error_proc

      JR::Distributor.send_data_to_node(hash)
    end


    # This method is used by the node (Node#schedule).
    # It makes a job from the hash by calling the callback and errback methods.
    #
    # The strategy is the following:
    # The first and last callbacks (add_start_callback, add_last_callback) are informational.
    # The second is the proc specified in the JR.job method.
    # The third and following are the procs specified in job_callbacks.
    #
    # Then the errbacks are attached.
    # They are called when an error occurs in the callbacks.
    # The last errback raises the exception again, to return the job back to the node workflow.
    # See Node#do_job to better understand how this works.
    #
    def make(hash) # a new job is a Hash
      raise NoSuchJob unless jr_job = JR.jobs[hash['name']]

      job = hash
      add_start_callback(job) if JR.config[:log_job_processing]
      job.callback(&jr_job[:job])

      jr_job[:callbacks].each do |callback|
        job.callback(&callback[1])
      end if jr_job[:callbacks]

      add_last_callback(job) if JR.config[:log_job_processing]

      add_start_errback(job) if JR.config[:log_job_processing]

      jr_job[:errbacks].each do |errback|
        job.errback(&errback[1])
      end if jr_job[:errbacks]

      add_complete_errback(job) if JR.config[:log_job_processing]

      job
    end

    # Runs the success feedback with the job args
    #
    def run_succ_feedback(data)
      proc = data[:do_not_delete] ? succ_feedbacks[data[:callback_id]] : succ_feedbacks.delete(data[:callback_id])
      proc.call(data[:args]) if proc
    end

    # Runs the error feedback with the job args
    # The exception class is in args[:error]
    #
    def run_err_feedback(data)
      proc = err_feedbacks.delete(data[:errback_id])
      proc.call(data[:args]) if proc
    end

    private

    # Requires the storage and changes opts[:storage] to the constant
    #
    def require_storage!(opts)
      require "job_reactor/storages/#{opts[:storage]}"
      opts[:storage] = STORAGES[opts[:storage]]
    end

    # Loads all *.rb files in the :job_directory folder
    # See job_reactor/job_parser to understand how the job hash is built
    #
    def parse_jobs
      JR.config[:job_directory] += '/**/*.rb'
      Dir[JR.config[:job_directory]].each { |file| load file }
    end

    # Adds a success feedback which will be launched when the node reports success
    #
    def add_succ_feedbacks!(hash, callback)
      distributor = "#{JR::Distributor.host}:#{JR::Distributor.port}"
      feedback_id = "#{distributor}_#{Time.now.utc.to_f}"
      succ_feedbacks.merge!(feedback_id => callback)
      hash.merge!('on_success' => feedback_id)
    end

    # Adds an error feedback which will be launched when the node reports an error
    #
    def add_err_feedbacks!(hash, errback)
      distributor = "#{JR::Distributor.host}:#{JR::Distributor.port}"
      feedback_id = "#{distributor}_#{Time.now.utc.to_f}"
      err_feedbacks.merge!(feedback_id => errback)
      hash.merge!('on_error' => feedback_id)
    end

    # Logs the beginning.
    #
    def add_start_callback(job)
      job.callback do
        JR::Logger.log_event(:start, job)
      end
    end

    # Logs the completion
    #
    def add_last_callback(job)
      job.callback do
        JR::Logger.log_event(:complete, job)
      end
    end

    # Logs the beginning of the error cycle.
    #
    def add_start_errback(job)
      job.errback do
        JR::Logger.log_event(:error, job)
      end
    end

    # Logs the end of the error cycle
    #
    def add_complete_errback(job)
      job.errback do
        JR::Logger.log_event(:error_complete, job)
      end
    end

  end
end
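The callback strategy in `make` can be illustrated without EventMachine. The sketch below uses a tiny stand-in for EM::Deferrable (just ordered callback/errback lists, here called `MiniDeferrable`) to show how a job hash accumulates the informational callback, the job proc, and user callbacks, and fires them in order on success:

```ruby
# Minimal stand-in for EM::Deferrable, for illustration only:
# the real gem mixes EM::Deferrable itself into Hash.
module MiniDeferrable
  def callback(&blk)
    (@callbacks ||= []) << blk
  end

  def errback(&blk)
    (@errbacks ||= []) << blk
  end

  def succeed(*args)
    (@callbacks || []).each { |cb| cb.call(*args) }
  end

  def fail(*args)
    (@errbacks || []).each { |eb| eb.call(*args) }
  end
end

Hash.send(:include, MiniDeferrable)

log = []
job = { 'name' => 'demo' }                          # a job is just a Hash
job.callback { |args| log << :start }               # informational callback
job.callback { |args| log << [:job, args[:n] * 2] } # the job proc itself
job.callback { |args| log << :complete }            # informational callback

job.succeed(n: 21)                                  # what Node#do_job does
p log # → [:start, [:job, 42], :complete]
```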
data/lib/job_reactor/logger.rb
ADDED
@@ -0,0 +1,57 @@
module JobReactor
  module Logger
    ################
    # To set the output stream
    @@logger_method = :puts

    def self.stdout=(value)
      if value.is_a?(Symbol) && value == :rails_logger
        @@stdout = Rails.logger
        @@logger_method = :info
      else
        @@stdout = value
        @@logger_method = :puts
      end
    end

    def self.stdout
      @@stdout ||= $stdout
    end

    #################
    # Is checked in dev_log

    @@development = false

    def self.development=(value)
      @@development = !!value
    end

    #################

    # Logs a message to the output stream
    #
    def self.log(msg)
      stdout.public_send(@@logger_method, '-' * 100)
      stdout.public_send(@@logger_method, msg)
    end

    # Builds a string for the job event and logs it
    #
    def self.log_event(event, job)
      log("Log: #{event} #{job['name']}")
    end

    # Logs only if JR::Logger.development is set to true
    #
    def self.dev_log(msg)
      log(msg) if development?
    end

    # Is JR::Logger.development set to true?
    #
    def self.development?
      @@development
    end
  end
end
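The stream switching above (`stdout=` plus `public_send` of either `:puts` or `:info`) can be exercised on its own. This sketch inlines the same pattern, minus the Rails branch, with a hypothetical `MiniLogger` name and a StringIO as the target stream:

```ruby
require 'stringio'

# Same pattern as JR::Logger: a swappable stream plus a swappable method name.
module MiniLogger
  @@stdout = $stdout
  @@logger_method = :puts

  def self.stdout=(value)
    @@stdout = value
  end

  def self.log(msg)
    @@stdout.public_send(@@logger_method, '-' * 20)
    @@stdout.public_send(@@logger_method, msg)
  end
end

buffer = StringIO.new
MiniLogger.stdout = buffer    # same idea as JR::Logger.stdout = some_io
MiniLogger.log('node ready')
puts buffer.string.lines.last # → node ready
```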
data/lib/job_reactor/node/client.rb
ADDED
@@ -0,0 +1,54 @@
module JobReactor
  class Node
    class Client < EM::Connection

      def initialize(node, distributor)
        @node = node
        @distributor = distributor
      end

      def post_init
        JR::Logger.log("Searching for distributor: #{@distributor.join(' ')} ...")
      end

      def lock
        @lock = true
      end

      def unlock
        @lock = false
      end

      def locked?
        @lock
      end

      def available?
        !locked?
      end

      def receive_data(data)
        self.unlock if data == 'ok'
      end

      # Sends node credentials to the distributor.
      #
      def connection_completed
        JR::Logger.log('Begin distributor handshake')
        data = { node_info: { name: @node.config[:name], server: @node.config[:server] } }
        data = Marshal.dump(data)
        send_data(data)
      end

      # Tries to reconnect after one second.
      #
      def unbind
        EM::Timer.new(1) do
          @node.connect_to(@distributor)
        end
      end

    end

  end
end
data/lib/job_reactor/node/server.rb
ADDED
@@ -0,0 +1,38 @@
module JobReactor
  class Node
    class Server < EM::Connection

      # Needs to know the storage to call the save method on it
      # Needs to know the node name to send it to the distributor
      #
      def initialize(node, storage)
        @storage = storage
        @node = node
      end

      # Ok, the node is connected and ready to work
      #
      def post_init
        JR::Logger.log("#{@node.name} ready to work")
      end

      # This is the place where the job life cycle begins.
      # This method:
      # - receives data from the distributor;
      # - saves it in the storage;
      # - returns 'ok' to unlock the node connection;
      # - and schedules the job.
      #
      def receive_data(data)
        hash = Marshal.load(data)
        JR::Logger.log("#{@node.name} received job: #{hash}")
        hash.merge!('node' => @node.name)
        @storage.save(hash) do |hash|
          @node.schedule(hash)
        end
        send_data('ok')
      end

    end
  end
end
@@ -0,0 +1,222 @@
|
|
1
|
+
require 'job_reactor/node/server'
|
2
|
+
require 'job_reactor/node/client'
|
3
|
+
module JobReactor
|
4
|
+
class Node
|
5
|
+
|
6
|
+
def initialize(opts)
|
7
|
+
@config = { storage: opts[:storage], name: opts[:name], server: opts[:server], distributors: opts[:distributors]}
|
8
|
+
end
|
9
|
+
|
10
|
+
def config
|
11
|
+
@config
|
12
|
+
end
|
13
|
+
|
14
|
+
# Config accessors.
|
15
|
+
#
|
16
|
+
[:storage, :name, :server, :distributors].each do |method|
|
17
|
+
define_method(method) do
|
18
|
+
config[method]
|
19
|
+
end
|
20
|
+
end
|
21
|
+
|
22
|
+
# Store distributor connection instances.
|
23
|
+
#
|
24
|
+
def connections
|
25
|
+
@connections ||= {}
|
26
|
+
end
|
27
|
+
|
28
|
+
# Retrying jobs if any,
|
29
|
+
# starts server and tries to connect to distributors.
|
30
|
+
#
|
31
|
+
def start
|
32
|
+
retry_jobs if JR.config[:retry_jobs_at_start]
|
33
|
+
EM.start_server(*self.config[:server], Server, self, self.storage)
|
34
|
+
self.config[:distributors].each do |distributor|
|
35
|
+
connect_to(distributor)
|
36
|
+
end
|
37
|
+
end
|
38
|
+
|
39
|
+
# Connects to distributor.
|
40
|
+
# This method is public, because it is called by client when connection interrupt.
|
41
|
+
#
|
42
|
+
def connect_to(distributor)
|
43
|
+
if connections[distributor]
|
44
|
+
JR::Logger.log 'Searching for distributors ...'
|
45
|
+
connections[distributor].reconnect(*distributor)
|
46
|
+
else
|
47
|
+
connections.merge!(distributor => EM.connect(*distributor, Client, self, distributor))
|
48
|
+
end
|
49
|
+
end
|
50
|
+
|
51
|
+
# The method is called by node server.
|
52
|
+
# It makes a job and run do_job.
|
53
|
+
#
|
54
|
+
def schedule(hash)
|
55
|
+
EM::Timer.new(hash['make_after']) do #Of course, we can start job immediately (unless it is 'after' job), but we let EM take care about it. Maybe there is another job is ready to start
|
56
|
+
self.storage.load(hash) do |hash|
|
57
|
+
if job = JR.make(hash) #If we decide fail silently. See JR.make
|
58
|
+
do_job(job)
|
59
|
+
else
|
60
|
+
#TODO Do nothing or raise exception ????
|
61
|
+
end
|
62
|
+
end
|
63
|
+
end
|
64
|
+
end
|
65
|
+
|
66
|
+
private
|
67
|
+
|
68
|
+
# Calls succeed on deferrable object.
|
69
|
+
# When job (or it's callbacks) fails, errbacks are launched.
|
70
|
+
# If errbacks fails job will be relaunched.
|
71
|
+
#
|
72
|
+
# You can see custom exception 'CancelJob''.
|
73
|
+
# You can use it to change normal execution.
|
74
|
+
#
|
75
|
+
def do_job(job)
|
76
|
+
job['run_at'] = Time.now
|
77
|
+
job['status'] = 'in progress'
|
78
|
+
storage.save(job) do |job|
|
79
|
+
begin
|
80
|
+
args = job['args'].merge(JR.config[:merge_job_itself_to_args] ? {:job_itself => job.dup} : {})
|
81
|
+
job.succeed(args)
|
82
|
+
job['args'] = args
|
83
|
+
job_completed(job)
|
84
|
+
rescue JobReactor::CancelJob
|
85
|
+
cancel_job(job)
|
86
|
+
rescue Exception => e
|
87
|
+
rescue_job(e, job)
|
88
|
+
end
|
89
|
+
end
|
90
|
+
end
|
91
|
+
|
92
|
+
# Reports success to distributor if should do it.
|
93
|
+
# If job is 'periodic' job schedule it again.
|
94
|
+
# Sets status completed or removes job from storage.
|
95
|
+
#
|
96
|
+
def job_completed(job)
|
97
|
+
report_success(job) if job['on_success']
|
98
|
+
if job['period'] && job['period'].to_i > 0
|
99
|
+
job['status'] = 'queued'
|
100
|
+
job['make_after'] = job['period']
|
101
|
+
job['args'].delete(:job_itself)
|
102
|
+
storage.save(job) { |job| schedule(job) }
|
103
|
+
else
|
104
|
+
if JR.config[:remove_done_jobs]
|
105
|
+
storage.destroy(job)
|
106
|
+
else
|
107
|
+
job['status'] = 'complete'
|
108
|
+
storage.save(job)
|
109
|
+
end
|
110
|
+
end
|
111
|
+
end
|
112
|
+
|
113
|
+
#Lanches job errbacks
|
114
|
+
#
|
115
|
+
def rescue_job(e, job)
|
116
|
+
begin
|
117
|
+
job['failed_at'] = Time.now #Save error info
|
118
|
+
job['last_error'] = e
|
119
|
+
job['status'] = 'error'
|
120
|
+
self.storage.save(job) do |job|
|
121
|
+
begin
|
122
|
+
args = job['args'].merge!(:error => e).merge(JR.config[:merge_job_itself_to_args] ? { :job_itself => job.dup } : { })
|
123
|
+
job.fail(args) #Fire errbacks. You can access error in you errbacks (args[:error])
|
124
|
+
job['args'] = args
|
125
|
+
complete_rescue(job)
|
126
|
+
rescue JobReactor::CancelJob
|
127
```ruby
            cancel_job(job) # If the job was cancelled, destroy it or set its status to 'cancelled'
          rescue Exception => e # Rescue exceptions raised in errbacks
            job['args'].merge!(:errback_error => e)
            complete_rescue(job)
          end
        end
      end
    end

    # Tries again or reports the error
    #
    def complete_rescue(job)
      if job['attempt'].to_i < JobReactor.config[:max_attempt] - 1
        try_again(job)
      else
        report_error(job) if job['on_error']
      end
    end

    # Cancels the job: removes it or sets its status to 'cancelled'
    #
    def cancel_job(job)
      report_error(job) if job['on_error']
      if JR.config[:remove_cancelled_jobs]
        storage.destroy(job)
      else
        job['status'] = 'cancelled'
        storage.save(job)
      end
    end

    # try_again has a special condition for periodic jobs:
    # they are rescheduled after their period.
    #
    def try_again(job)
      job['attempt'] += 1
      if job['period'] && job['period'] > 0
        job['make_after'] = job['period']
      else
        job['make_after'] = job['attempt'] * JobReactor.config[:retry_multiplier]
      end
      job['args'].delete(:job_itself)
      self.storage.save(job) do |job|
        self.schedule(job)
      end
    end

    # Retries stored jobs.
    # Runs only once, when the node starts.
    #
    def retry_jobs
      storage.jobs_for(name) do |job_to_retry|
        if job_to_retry
          job_to_retry['args'].merge!(:retrying => true)
          try_again(job_to_retry)
        end
      end
    end

    # Reports success to the distributor and sends the job's args
    #
    def report_success(job)
      host, port = job['distributor'].split(':')
      port = port.to_i
      distributor = self.connections[[host, port]]
      data = {:success => { callback_id: job['on_success'], args: job['args'] }}
      data[:success].merge!(do_not_delete: true) if job['period'] && job['period'].to_i > 0
      data = Marshal.dump(data)
      send_data_to_distributor(distributor, data)
    end

    # Reports the error to the distributor and sends the job's args
    # (the exception has already been merged into them).
    #
    def report_error(job)
      host, port = job['distributor'].split(':')
      port = port.to_i
      distributor = self.connections[[host, port]]
      data = {:error => { errback_id: job['on_error'], args: job['args'] }}
      data = Marshal.dump(data)
      send_data_to_distributor(distributor, data)
    end

    # Sends data to the distributor, retrying on the next reactor tick
    # while the connection is locked
    #
    def send_data_to_distributor(distributor, data)
      if distributor.locked?
        EM.next_tick do
          send_data_to_distributor(distributor, data)
        end
      else
        distributor.send_data(data)
        distributor.lock
      end
    end

  end
end
```
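The retry schedule used by `try_again` can be sketched in isolation: a periodic job keeps its period as the delay, while a one-off job backs off linearly with the attempt count. A minimal sketch, with `RETRY_MULTIPLIER` standing in for `JobReactor.config[:retry_multiplier]` (value assumed for illustration):

```ruby
# Standalone sketch of try_again's delay calculation.
# RETRY_MULTIPLIER is an assumed stand-in for JobReactor.config[:retry_multiplier].
RETRY_MULTIPLIER = 2

def make_after_for(job)
  if job['period'] && job['period'] > 0
    job['period']                        # periodic job: reschedule after its period
  else
    job['attempt'] * RETRY_MULTIPLIER    # one-off job: linear backoff
  end
end

puts make_after_for('attempt' => 3, 'period' => 0)   # => 6
puts make_after_for('attempt' => 3, 'period' => 60)  # => 60
```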
data/lib/job_reactor/storages/memory_storage.rb
ADDED
@@ -0,0 +1,39 @@

```ruby
# In-memory storage backend (non-persistent)
module JobReactor
  module MemoryStorage

    @@storage = { }

    class << self
      def storage
        @@storage
      end

      def load(hash, &block)
        hash = storage[hash['id']]
        hash_copy = { }
        hash.each { |k, v| hash_copy.merge!(k => v) }
        block.call(hash_copy) if block_given?
      end

      def save(hash, &block)
        unless hash['id']
          id = Time.now.to_f.to_s
          hash.merge!('id' => id)
        end
        storage.merge!(hash['id'] => hash)

        block.call(hash) if block_given?
      end

      def destroy(hash)
        storage.delete(hash['id'])
      end

      def jobs_for(name, &block) # No persistence, so nothing to retry
        nil
      end
    end

  end
end
```
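The storage contract used by the node (`save`/`load`/`destroy`, each with an optional callback) can be exercised without the rest of the library. A minimal stand-in mirroring the semantics above (`TinyStorage` is a hypothetical name): `save` assigns a float-timestamp id when one is missing, and `load` yields a shallow copy of the stored hash:

```ruby
# Minimal stand-in for the MemoryStorage contract; not the library's own class.
module TinyStorage
  @storage = {}

  class << self
    attr_reader :storage

    def save(hash)
      hash['id'] ||= Time.now.to_f.to_s  # assign a float-timestamp id when missing
      @storage[hash['id']] = hash
      yield hash if block_given?
    end

    def load(hash)
      yield @storage[hash['id']].dup if block_given?  # yield a shallow copy
    end

    def destroy(hash)
      @storage.delete(hash['id'])
    end
  end
end

TinyStorage.save('name' => 'send_email', 'attempt' => 0) do |job|
  TinyStorage.load(job) { |copy| puts copy['name'] }  # prints send_email
end
```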
data/lib/job_reactor/storages/redis_storage.rb
ADDED
@@ -0,0 +1,76 @@

```ruby
# Redis storage backend (persistent)
require 'em-redis'

module JobReactor
  module RedisStorage
    @@storage = EM::Protocols::Redis.connect(host: JobReactor.config[:redis_host], port: JobReactor.config[:redis_port])
    ATTRS = %w(id name args last_error run_at failed_at attempt period make_after status distributor on_success on_error)

    class << self

      def storage
        @@storage
      end

      def load(hash, &block)
        key = "#{hash['node']}_#{hash['id']}"
        hash_copy = {'node' => hash['node']} # need a new object, because the old one has been 'failed'

        storage.hmget(key, *ATTRS) do |record|
          ATTRS.each_with_index do |attr, i|
            hash_copy[attr] = record[i]
          end
          ['attempt', 'period', 'make_after'].each do |attr|
            hash_copy[attr] = hash_copy[attr].to_i
          end
          hash_copy['args'] = Marshal.load(hash_copy['args'])

          block.call(hash_copy) if block_given?
        end
      end


      def save(hash, &block)
        hash.merge!('id' => Time.now.to_f.to_s) unless hash['id']
        key = "#{hash['node']}_#{hash['id']}"
        args, hash['args'] = hash['args'], Marshal.dump(hash['args'])

        storage.hmset(key, *ATTRS.map { |attr| [attr, hash[attr]] }.flatten) do
          hash['args'] = args

          block.call(hash) if block_given?
        end
      end

      def destroy(hash)
        storage.del("#{hash['node']}_#{hash['id']}")
      end

      def destroy_all_jobs_for(name)
        pattern = "*#{name}_*"
        storage.del(*storage.keys(pattern))
      end

      def jobs_for(name, &block)
        pattern = "*#{name}_*"
        storage.keys(pattern) do |keys|
          keys.each do |key|
            hash = {}
            storage.hget(key, 'id') do |id|
              hash['id'] = id
              hash['node'] = name
              self.load(hash) do |hash|
                if hash['status'] != 'complete' && hash['status'] != 'cancelled' && hash['attempt'].to_i < JobReactor.config[:max_attempt]
                  block.call(hash)
                end
              end
            end
          end
        end
      end

    end
  end
end
```
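Two details of the Redis backend are easy to check in plain Ruby: records are keyed as `"#{node}_#{id}"`, and a job hash is flattened into the alternating field/value list that `hmset` expects. A sketch with a trimmed `ATTRS` list (helper names are hypothetical, not the library's API):

```ruby
# Sketch of the Redis key scheme and HMSET argument flattening; ATTRS trimmed
# for illustration, helper names hypothetical.
ATTRS = %w(id name attempt status)

def redis_key(job)
  "#{job['node']}_#{job['id']}"  # records are keyed as "<node>_<id>"
end

def hmset_args(job)
  # [["id", "1.5"], ["name", ...]] flattened to the field/value list HMSET takes
  ATTRS.map { |attr| [attr, job[attr]] }.flatten
end

job = {'node' => 'mailer_node', 'id' => '1.5', 'name' => 'send_email',
       'attempt' => 0, 'status' => 'new'}
puts redis_key(job)  # => mailer_node_1.5
p hmset_args(job)    # => ["id", "1.5", "name", "send_email", "attempt", 0, "status", "new"]
```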
data/lib/job_reactor.rb
ADDED
@@ -0,0 +1,44 @@

```ruby
require 'eventmachine'
require 'job_reactor/job_reactor'
require 'job_reactor/logger'
require 'job_reactor/node'
require 'job_reactor/distributor'


# JobReactor initialization.
# Parses jobs, runs the EventMachine reactor, and calls the given block inside the reactor.
# The ::run method runs EM in a Thread so it does not block the application.
# The ::wait_em_and_run method is for applications that already run EventMachine
# at startup (the Thin server, for example).
# The ::run! method is for using JobReactor as a standalone application (advanced usage,
# for example when you want to run a node and a distributor in one process).
#
module JobReactor
  extend self

  def run(&block)
    Thread.new do
      if EM.reactor_running?
        block.call if block_given?
        JR.ready!
      else
        EM.run do
          block.call if block_given?
          JR.ready!
        end
      end
    end
  end

  def run!(&block)
    if EM.reactor_running?
      block.call if block_given?
      JR.ready!
    else
      EM.run do
        block.call if block_given?
        JR.ready!
      end
    end
  end

end
```
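The `run` / `run!` split above reduces to one difference: `run` starts the reactor in a background thread so the host application keeps executing, while `run!` occupies the current thread. A sketch with the EM loop replaced by a stub (all names here are illustrative, not the library's API):

```ruby
# Stub standing in for EM.run; the real method starts the EventMachine loop.
def start_reactor_stub(&block)
  block.call if block_given?
end

def run(&block)
  Thread.new { start_reactor_stub(&block) }  # background thread, caller continues
end

def run!(&block)
  start_reactor_stub(&block)                 # blocks the current thread
end

ready = []
run { ready << :background }.join
run! { ready << :foreground }
p ready  # => [:background, :foreground]
```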
metadata
ADDED
@@ -0,0 +1,85 @@

```yaml
--- !ruby/object:Gem::Specification
name: job_reactor
version: !ruby/object:Gem::Version
  version: 0.5.0.beta2
  prerelease: 6
platform: ruby
authors:
- Anton Mishchuk
- Andrey Rozhkovskiy
autorequire:
bindir: bin
cert_chain: []
date: 2012-06-01 00:00:00.000000000 Z
dependencies:
- !ruby/object:Gem::Dependency
  name: eventmachine
  requirement: &83843190 !ruby/object:Gem::Requirement
    none: false
    requirements:
    - - ! '>='
      - !ruby/object:Gem::Version
        version: '0'
  type: :runtime
  prerelease: false
  version_requirements: *83843190
- !ruby/object:Gem::Dependency
  name: em-redis
  requirement: &83842960 !ruby/object:Gem::Requirement
    none: false
    requirements:
    - - ! '>='
      - !ruby/object:Gem::Version
        version: '0'
  type: :runtime
  prerelease: false
  version_requirements: *83842960
description: ! " JobReactor is a library for creating and processing background
  jobs.\n It is client-server distributed system based on EventMachine.\n"
email: anton.mishchuk@gmial.com
executables: []
extensions: []
extra_rdoc_files: []
files:
- lib/job_reactor.rb
- lib/job_reactor/job_reactor.rb
- lib/job_reactor/node.rb
- lib/job_reactor/storages/redis_storage.rb
- lib/job_reactor/storages/memory_storage.rb
- lib/job_reactor/distributor/client.rb
- lib/job_reactor/distributor/server.rb
- lib/job_reactor/node/client.rb
- lib/job_reactor/node/server.rb
- lib/job_reactor/job_reactor/job_parser.rb
- lib/job_reactor/job_reactor/storages.rb
- lib/job_reactor/job_reactor/exceptions.rb
- lib/job_reactor/job_reactor/config.rb
- lib/job_reactor/distributor.rb
- lib/job_reactor/logger.rb
- README.markdown
homepage: http://github.com/antonmi/job_reactor
licenses: []
post_install_message:
rdoc_options: []
require_paths:
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  none: false
  requirements:
  - - ! '>='
    - !ruby/object:Gem::Version
      version: '0'
required_rubygems_version: !ruby/object:Gem::Requirement
  none: false
  requirements:
  - - ! '>'
    - !ruby/object:Gem::Version
      version: 1.3.1
requirements: []
rubyforge_project:
rubygems_version: 1.8.6
signing_key:
specification_version: 3
summary: Simple, powerful and high scalable job queueing and background workers system
  based on EventMachine
test_files: []
```
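To depend on this prerelease version from the metadata above, a Gemfile must pin the version explicitly (a sketch; prerelease gems are not selected by a bare `gem` line):

```ruby
source 'https://rubygems.org'

gem 'job_reactor', '0.5.0.beta2'
# eventmachine and em-redis are installed automatically as runtime dependencies
```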