job_reactor 0.5.0.beta4 → 0.5.0

data/README.markdown CHANGED
@@ -5,22 +5,33 @@ Now we are in beta (need to complete documentation and fix some bugs)
 
  JobReactor is a library for creating, scheduling and processing background jobs.
  It is an asynchronous client-server distributed system based on [EventMachine][0].
- Inspired by [Resque][1], [Stalker][2], [DelayedJob][3], and etc.
+ Inspired by [Resque][1], [Beanstalkd][2] ([Stalker][3]), [DelayedJob][4], etc.
 
  JobReactor has no 'rails' integration for the time being.
- But it is very close. We need test the system with different servers (clusters) and automatize initialization and restart processes.
+ But it is very close. We need to test the system with different servers (clusters) and automate the initialization and restart processes.
  Collaborators, you are welcome!
 
- So, read 'features' part and try JobReactor. You can do a lot with it.
+ So, read the 'features' section and try JobReactor. You can do a lot with it.
+
+ Note
+ ====
+ JobReactor is based on [EventMachine][0]. Jobs are launched in the EM reactor loop in one thread.
+ There are advantages and disadvantages. The main benefit is fast scheduling, saving and loading.
+ The weak point is the processing of heavy background jobs where each job takes minutes or hours.
+ Such jobs will block the reactor and break normal processing.
+
+ If you can't divide 'THE BIG JOB' into 'small pieces' you shouldn't use JobReactor. See alternatives such as [DelayedJob][4] or [Resque][1].
+
+ __JobReactor is the right solution if you have thousands, millions, and, we hope :), billions of relatively small jobs.__
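To make the 'small pieces' advice concrete, here is a minimal sketch: instead of one long-running import, the application enqueues many small jobs and each job handles only its own batch. The `import_users` job name, the `user_ids` list and the `import_user` helper are hypothetical, not part of the gem.

```ruby
# application side: enqueue many small jobs instead of one huge one
user_ids.each_slice(100) do |batch|
  JR.enqueue('import_users', {ids: batch})
end

# reactor_jobs/import_users.rb: each run only touches its own small batch,
# so the EventMachine reactor loop is never blocked for long
job 'import_users' do |args|
  args[:ids].each { |id| import_user(id) }  # import_user is an assumed app helper
end
```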
 
  Quick start
  ===========
  Use `gem install job_reactor --pre` to try it.
 
- You need to install [Redis][6] if you want to persist your jobs.
+ You need to install [Redis][5] if you want to persist your jobs.
  ``$ sudo apt-get install redis-server``
 
- In you main application:
+ In your main application:
  `application.rb`
  ``` ruby
  require 'job_reactor'
@@ -47,9 +58,14 @@ And the last file - 'the worker code':
  `worker.rb`
  ``` ruby
  require 'job_reactor'
- JR.config[:job_directory] = 'reactor_jobs' #this default config so you can omit this line
+ JR.config[:job_directory] = 'reactor_jobs' # this is the default, so you can omit this line
  JR.run! do
- JR.start_node({:storage => 'memory_storage', :name => 'worker_1', :server => ['localhost', 5001], :distributors => [['localhost', 5000]] })
+ JR.start_node({
+   :storage => 'memory_storage',
+   :name => 'worker_1',
+   :server => ['localhost', 5001],
+   :distributors => [['localhost', 5000]]
+ })
  end
  ```
  Run 'application.rb' in one terminal window and 'worker.rb' in another.
@@ -71,74 +87,193 @@ simple insignificant jobs in memory.
  And more: your nodes may create jobs for other nodes and communicate with each other. See page [advanced usage].
  3. Full job control
  -------------------
- You can add callback and errbacks to the job which will be called on the node.
+ You can add 'callbacks' and 'errbacks' to the job, which will be called on the node.
  You can also add a 'success feedback' and an 'error feedback' which will be called in your main application.
  When a job is done on a remote node, your application receives the result inside the corresponding 'feedback'.
- If error occur in the job you can see it in errbacks and do what you want.
- Inside the job you can get information about when it starts, which node execute job and etc.
- You also can add some arguments to the job on-the-fly which will be used in the subsequent callbacks and errbacks. See [advance usage].
- 4. Reliability
+ If an error occurs in the job you can see it in the 'errbacks' and then in the 'error feedback' and handle it as you like.
+ 4. Reflection and modification
+ ------------------------------
+ Inside the job you can get information about when it started, when it failed, which node executed it, and so on.
+ You can also add arguments to the job on-the-fly; they will be used in the subsequent callbacks and errbacks.
+ These arguments can then be sent back to the distributor.
+ 5. Reliability
  --------------
  You can run additional nodes and stop any node on-the-fly.
  The distributor is smart enough to send jobs to another node if one is stopped or has crashed.
  If no nodes are connected to the distributor, it keeps jobs in memory and sends them when a node starts.
  If a node is stopped or crashed it will retry stored jobs after it starts.
- 5. EventMachine available
+ 6. EventMachine available
  -------------------------
  Remember, your jobs will run inside the EventMachine reactor! You can easily use the power of EventMachine's asynchronous nature.
- Use asynchronous [em-http-request][4], [em-websocket][5], [etc.], [etc.], and [etc]. See page [advance usage].
- 6. Deferred and periodic jobs
+ Use asynchronous libraries such as [em-http-request][6], [em-websocket][7], and so on.
+ 7. Thread safe
+ --------------
+ The EventMachine reactor loop runs in one thread, so the job code executed on a given node is thread safe.
+ 8. Deferred and periodic jobs
  -----------------------------
  You can use deferred jobs which run 'after' some time or at a given time ('run_at').
  You can create periodic jobs which run every given time period and cancel them on a condition (see the sketch after this list).
- 7. No polling
+ 9. No polling
  -------------
  There is no storage polling. Absolutely. When a node receives a job (no matter whether instant, periodic or deferred) an EventMachine timer is created
  which starts the job at the right time.
- 8. Job retrying
+ 10. Job retrying
  --------------
  If a job fails it will be retried. You can choose a global retrying strategy or manage individual jobs.
- 9. Predefined nodes
+ 11. Predefined nodes
  -------------------
  You can specify a node for a job, so it will be executed in that node's environment. And you can specify which node is forbidden for the job.
  If no nodes are specified the distributor will try to send the job to the first free node.
- 10. Node based priorities
+ 12. Node based priorities
  -----------------------
- There are no priorities like in Delayed::Job or Stalker. Bud there are flexible node-based priorities.
+ There are no priorities like in Delayed::Job or Stalker. But there are flexible node-based priorities.
  You can specify the node which should execute the job and the node that is forbidden for a given job. You can reserve several nodes for high priority jobs.
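A sketch of the 'cancel on condition' behaviour mentioned in point 8 above. Judging from the node code further down in this diff (the `rescue JobReactor::CancelJob` branches), raising `JobReactor::CancelJob` inside a job appears to be the way for it to cancel itself; the job name and the `queue_empty?` / `process_next_item` helpers below are hypothetical.

```ruby
# application side: run the job every 60 seconds
JR.enqueue('poll_queue', {}, {period: 60})

# reactor_jobs/poll_queue.rb
job 'poll_queue' do |args|
  # once there is nothing left to do, stop the periodic job;
  # the node rescues JobReactor::CancelJob and cancels it
  raise JobReactor::CancelJob if queue_empty?  # queue_empty? is an assumed helper
  process_next_item                            # assumed helper
end
```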
 
+ How it works
+ ============
+ 1. You run JobReactor::Distributor in your application initializer
+ -------------------------------------------------------------------
+ ``` ruby
+ JR.run do
+   JR.start_distributor('localhost', 5000)
+ end
+ ```
+ This code runs the EventMachine reactor loop in a new thread and calls the given block.
+ JR.start_distributor starts an EventMachine TCP server on the given host and port.
+ Now JobReactor is ready to work.
+
+ 2. You run JobReactor::Node in a different process or on a different machine
+ -----------------------------------------------------------------------------
 
+ ``` ruby
+ JR.run! do
+   JR.start_node({
+     :storage => 'redis_storage',
+     :name => 'redis_node1',
+     :server => ['localhost', 5001],
+     :distributors => [['localhost', 5000]]
+   })
+ end
+ ```
 
- The main parts of JobReactor are:
- ---------------------------------
- JobReactor module for creating jobs.
- Distributor module for 'distributing' jobs between working nodes.
- Node object for job processing.
- #TODO
+ This code runs the EventMachine reactor loop (in the main thread; this is the difference between `run` and `run!`)
+ and starts the Node inside the reactor.
+ When the node starts, it:
+ * parses the 'reactor jobs' files (it recursively parses all files in the JR.config[:job_directory] directory, 'reactor_jobs' by default) and creates a hash of job callbacks and errbacks (see [JobReactor jobs]);
+ * tries to retry stored jobs (if you use 'redis_storage' and `JR.config[:retry_jobs_at_start]` is true);
+ * starts its own TCP server;
+ * connects to the Distributor server and sends the information needed to establish the connection.
+ When the distributor receives the credentials it connects to the Node server. Now there is a full duplex connection between Distributor and Node.
+
+ 3. You enqueue the job in your application
+ ------------------------------------------
+
+ ```ruby
+ JR.enqueue('my_job', {arg1: 1, arg2: 2}, {after: 20}, success, error)
+ ```
 
+ The first argument is the name of the job, the second is the arguments hash for the job.
+ The third is the options hash. If you don't specify any options the job is an instant job and will be sent to any free node. You can use the following options:
+ * `after: seconds` - the node will try to run the job after `seconds` seconds;
+ * `run_at: time` - the node will try to run the job at the given time;
+ * `period: seconds` - the node will run the job periodically, every `seconds` seconds.
+ You can add `node: 'node_name'` and `not_node: 'node_name'` to the options. These specify the node on which the job should or shouldn't be run. For example:
 
+ ```ruby
+ JR.enqueue('my_job', {arg1: 1}, {period: 100, node: 'my_favourite_node', not_node: 'do_not_use_this_node'})
+ ```
 
+ The rule to use the specified node is not strict if `JR.config[:always_use_specified_node]` is false (the default).
+ This means the distributor will try to send the job to the given node first, but if that node is `locked` (maybe you have just sent another job to it and it is very busy) the distributor will look for another node.
+ The last two arguments are optional too. The first is the 'success feedback' and the last is the 'error feedback'. We use the term 'feedback' to distinguish them from 'callbacks' and 'errbacks': a 'feedback' is executed on the main application side, while 'callbacks' run on the node side. 'Feedbacks' are procs which are called when the node sends the message that the job is completed (successfully or not). The arguments of the 'feedback' are the arguments of the initial job plus everything added on the node side.
+ Example:
 
+ ```ruby
+ # in your 'job_file'
+ job 'my_job' do |args|
+   # do smth
+   args.merge!(result: 'Yay!')
+ end
+
+ # in your application
+ # success feedback
+ success = proc { |args| puts args }
+ # enqueue the job
+ JR.enqueue('my_job', {arg1: 1}, {}, success)
+ ```
 
+ The 'success' proc args will be `{arg1: 1, result: 'Yay!'}`.
+ The same goes for the 'error feedback'. Note that the error feedback is launched only after all attempts have failed on the node side.
+ See the config options `JR.config[:max_attempt] = 10` and `JR.config[:retry_multiplier]`.
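For completeness, a rough sketch of the error side of the same example. The `:error` key is what the node merges into the job's args when it fails (see the node code further down in this diff), so the feedback can inspect it; the message text itself is illustrative.

```ruby
# error feedback: runs in the application once the job has exhausted
# all attempts on the node side
error = proc { |args| warn "my_job failed: #{args[:error].inspect}" }

JR.enqueue('my_job', {arg1: 1}, {}, success, error)
```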
 
+ 4. You disconnect a node (stop it manually or the node fails)
+ --------------------------------------------------------------
+ * the distributor will send jobs to other nodes, if any are present;
+ * the distributor will keep enqueued jobs in memory if there is no connected node (or no specified node);
+ * when the node starts again, the distributor will send the stored jobs to it.
 
+ 5. You stop the main application.
+ ---------------------------------
+ * Nodes will continue to work, but you won't receive results from the nodes when you start the application again, because all feedbacks are stored in memory.
 
- How it works
- ------------
- #TODO
+ Job Storage
+ ===========
+ You can store your jobs in [Redis][5] storage (`'redis_storage'`) or in memory (`'memory_storage'`).
+ Only the first, of course, 'really' persists the jobs. You can use the latter if you don't want to install Redis, don't need to retry jobs, and need more speed (by the way, the difference in performance is not that great: Redis is very fast).
 
+ The default host and port for the Redis server are:
 
- License
- ---------
- The MIT License - Copyright (c) 2012 Anton Mishchuk
+ ```ruby
+ JR.config[:redis_host] = 'localhost'
+ JR.config[:redis_port] = 6379
+ ```
 
+ JobReactor works asynchronously with Redis using the [em-redis][8] library to increase speed.
+ Several nodes can use one Redis storage.
 
+ The information about a job is saved several times during processing. It includes:
+ * id - the unique job id;
+ * name - the job name which 'defines' the job;
+ * args - the serialized arguments for the job;
+ * run_at - the time when the job was launched;
+ * failed_at - the time when the job failed;
+ * last_error - the last error that occurred;
+ * period - the period (for periodic jobs);
+ * status - the job status ('new', 'in progress', 'queued', 'complete', 'error', 'failed', 'cancelled');
+ * attempt - the number of attempts;
+ * make_after - when to start the job again (in seconds after the last save);
+ * distributor - the host and port of the distributor server which sent the job (used for 'feedbacks');
+ * on_success - the unique id of the success feedback on the distributor side;
+ * on_error - the unique id of the error feedback on the distributor side.
 
+ By default JobReactor deletes all completed and cancelled jobs, but you can configure this.
+ The default options are:
+
+ ```ruby
+ JR.config[:remove_done_jobs] = true
+ JR.config[:remove_cancelled_jobs] = true
+ JR.config[:remove_failed_jobs] = false
+ JR.config[:retry_jobs_at_start] = true
+ ```
+
+ We provide a simple `JR::RedisMonitor` module to inspect the Redis storage from an irb console (or from your app).
+ See the methods:
+
+ ```ruby
+ JR::RedisMonitor.jobs_for(node_name)
+ JR::RedisMonitor.load(job_id)
+ JR::RedisMonitor.destroy(job_id)
+ JR::RedisMonitor.destroy_all_jobs_for(node_name)
+ ```
+
+ License
+ =======
+ The MIT License - Copyright (c) 2012 Anton Mishchuk
 
  [0]: http://rubyeventmachine.com
  [1]: https://github.com/defunkt/resque
- [2]: https://github.com/han/stalker
- [3]: https://github.com/tobi/delayed_job
- [4]: https://github.com/igrigorik/em-http-request
- [5]: https://github.com/igrigorik/em-websocket
- [6]: http://redis.io
+ [2]: http://kr.github.com/beanstalkd/
+ [3]: https://github.com/han/stalker
+ [4]: https://github.com/tobi/delayed_job
+ [5]: http://redis.io
+ [6]: https://github.com/igrigorik/em-http-request
+ [7]: https://github.com/igrigorik/em-websocket
+ [8]: https://github.com/madsimian/em-redis
@@ -52,11 +52,11 @@ module JobReactor
  end
 
  def succ_feedbacks
-   @@callbacks ||= { }
+   @@succ_feedbacks ||= { }
  end
 
  def err_feedbacks
-   @@errbacks ||= { }
+   @@err_feedbacks ||= { }
  end
 
  # Here is the only method user can call inside the application (excepts start-up methods, of course).
@@ -73,7 +73,7 @@ module JobReactor
  # (Do not use procs, objects with singleton methods, etc ... ).
  #
  # Example:
- # JR.enqueue 'job', {:arg1 => 'arg1', :arg2 => 'arg2'}
+ # JR.enqueue 'job', {arg1: 'arg1', arg2: 'arg2'}
  #
  # You can add the following options:
  # :run_at - run at given time;
@@ -83,8 +83,8 @@ module JobReactor
  # :not_node - to do not send job to the node;
  #
  # Example:
- # JR.enqueue 'job', {:arg1 => 'arg1'}, {:period => 100, :node => 'my_favorite_node'}
- # JR.enqueue 'job', {:arg1 => 'arg1'}, {:after => 10, :not_node => 'some_node'}
+ # JR.enqueue 'job', {arg1: 'arg1'}, {period: 100, node: 'my_favorite_node'}
+ # JR.enqueue 'job', {arg1: 'arg1'}, {after: 10, not_node: 'some_node'}
  #
  # You can add 'success feedback' and 'error feedback'. We use term 'feedback' to distinguish them from callbacks and errbacks which are executed on the node side.
  # These feedbacks are the procs. The first is 'success feedback', the second - 'error feedback'.
@@ -93,7 +93,7 @@ module JobReactor
  # Example:
  # success = proc { |args| result = args }
  # error = proc { |args| result = args }
- # JR.enqueue 'job', { :arg1 => 'arg1'}, {}, success, error
+ # JR.enqueue 'job', { arg1: 'arg1'}, {}, success, error
  #
  def enqueue(name, args = { }, opts = { }, success_proc = nil, error_proc = nil)
    hash = { 'name' => name, 'args' => args, 'attempt' => 0, 'status' => 'new' }
@@ -107,8 +107,8 @@ module JobReactor
 
    hash.merge!('distributor' => "#{JR::Distributor.host}:#{JR::Distributor.port}")
 
-   add_succ_feedbacks!(hash, success_proc) if success_proc
-   add_err_feedbacks!(hash, error_proc) if error_proc
+   add_succ_feedbacks!(hash, success_proc) if success_proc.is_a? Proc
+   add_err_feedbacks!(hash, error_proc) if error_proc.is_a? Proc
 
    JR::Distributor.send_data_to_node(hash)
  end
@@ -12,7 +12,7 @@ JR.config[:job_directory] = 'reactor_jobs'
  JR.config[:max_attempt] = 10
  JR.config[:retry_multiplier] = 5
  JR.config[:retry_jobs_at_start] = true
- JR.config[:merge_job_itself_to_args] = true
+ JR.config[:merge_job_itself_to_args] = false
  JR.config[:log_job_processing] = true
  JR.config[:always_use_specified_node] = false #will send job to another node if specified node is not available
  JR.config[:remove_done_jobs] = true
@@ -22,6 +22,8 @@ JR.config[:remove_failed_jobs] = false
  JR.config[:redis_host] = 'localhost'
  JR.config[:redis_port] = 6379
 
+ JR.config[:logger_method] = :puts
+
  #TODO next releases with rails support
  #JR.config[:active_record_adapter] = 'mysql2'
  #JR.config[:active_record_database] = 'em'
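These defaults can be overridden from your own code before the reactor is started, in the same way the worker example above sets `:job_directory`. A sketch (the values below are purely illustrative, not recommendations):

```ruby
require 'job_reactor'

JR.config[:max_attempt]        = 3      # give up after 3 attempts
JR.config[:retry_multiplier]   = 10     # wait longer between retries
JR.config[:remove_failed_jobs] = true   # don't keep failed jobs in storage

JR.run! do
  JR.start_node({
    :storage => 'redis_storage',
    :name => 'worker_1',
    :server => ['localhost', 5001],
    :distributors => [['localhost', 5000]]
  })
end
```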
@@ -2,7 +2,7 @@ module JobReactor
 
  class NoSuchJob < RuntimeError
  end
- class CancelJob < RuntimeError
+ class CancelJob < Exception
  end
 
  end
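The switch from RuntimeError to Exception matters for job code that uses a bare rescue. A minimal sketch (the job and the `do_risky_work` / `log_error` helpers are hypothetical):

```ruby
job 'guarded_job' do |args|
  begin
    raise JobReactor::CancelJob if args[:stop]  # cancel on condition
    do_risky_work(args)                         # assumed helper
  rescue => e
    # `rescue => e` only catches StandardError, so the CancelJob signal
    # (now an Exception) is not swallowed here; it propagates to the node,
    # which rescues it and cancels the job.
    log_error(e)                                # assumed helper
  end
end
```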
@@ -1,57 +1,30 @@
  module JobReactor
    module Logger
-     ################
-     # To set output stream
-     @@logger_method = :puts
+
+     # Sets the output stream
+     #
+     @@logger_method = JR.config[:logger_method]
 
      def self.stdout=(value)
-       if value.is_a?(Symbol) && value == :rails_logger
-         @@stdout = Rails.logger
-         @@logger_method = :info
-       else
-         @@stdout = value
-         @@logger_method = :puts
-       end
+       @@stdout = value
      end
 
      def self.stdout
        @@stdout ||= $stdout
      end
 
-     #################
-     # Is checked in dev_log
-
-     @@development = false
-
-     def self.development=(value)
-       @@development = !!value
-     end
-
-     #################
-
-     # Log message to output stream
-     #
+     # Logs message to output stream
+     #
      def self.log(msg)
        stdout.public_send(@@logger_method, '-'*100)
        stdout.public_send(@@logger_method, msg)
      end
 
-     # Build string for job event and log it
+     # Builds string for job event and log it
      #
      def self.log_event(event, job)
-       log("Log: #{event} #{job['name']}")
-     end
-
-     # Log if JR::Logger.development is set to true
-     #
-     def self.dev_log(msg)
-       log(msg) if development?
+       log("#{event} #{job['name']}")
      end
 
-     # Is JR::Logger.development set to true?
-     #
-     def self.development?
-       @@development
-     end
    end
  end
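Given the `stdout=` writer above, log output can be pointed at any object that responds to the configured logger method (`:puts` by default). A minimal sketch (the file name is arbitrary):

```ruby
require 'job_reactor'

# Send JobReactor's log lines to a file instead of $stdout.
JR::Logger.stdout = File.open('job_reactor.log', 'a')
```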
@@ -52,9 +52,13 @@ module JobReactor
  # It makes a job and run do_job.
  #
  def schedule(hash)
-   EM::Timer.new(hash['make_after']) do #Of course, we can start job immediately (unless it is 'after' job), but we let EM take care about it. Maybe there is another job is ready to start
-     self.storage.load(hash) do |hash|
-       do_job(JR.make(hash))
+   if hash['make_after'].to_i > 0
+     EM::Timer.new(hash['make_after']) do
+       self.storage.load(hash) { |hash| do_job(JR.make(hash)) }
+     end
+   else
+     EM.next_tick do
+       self.storage.load(hash) { |hash| do_job(JR.make(hash)) }
      end
    end
@@ -78,7 +82,7 @@ module JobReactor
    job.succeed(args)
    job['args'] = args
    job_completed(job)
- rescue JobReactor::CancelJob
+ rescue CancelJob
    cancel_job(job)
  rescue Exception => e
    rescue_job(e, job)
@@ -116,14 +120,14 @@ module JobReactor
    job['status'] = 'error'
    self.storage.save(job) do |job|
      begin
-       args = job['args'].merge!(:error => e).merge(JR.config[:merge_job_itself_to_args] ? { :job_itself => job.dup } : { })
+       args = job['args'].merge!(error: e).merge(JR.config[:merge_job_itself_to_args] ? { job_itself: job.dup } : { })
        job.fail(args) #Fire errbacks. You can access error in you errbacks (args[:error])
        job['args'] = args
        complete_rescue(job)
      rescue JobReactor::CancelJob
-       cancel_job(job) #If it was cancelled we destroy it or set status 'cancelled'
+       cancel_job(job, true) #If it was cancelled we destroy it or set status 'cancelled'
      rescue Exception => e #Recsue Exceptions in errbacks
-       job['args'].merge!(:errback_error => e) #So in args you now have :error and :errback_error
+       job['args'].merge!(errback_error: e) #So in args you now have :error and :errback_error
        complete_rescue(job)
      end
    end
@@ -148,8 +152,9 @@ module JobReactor
 
  # Cancels job. Remove or set 'cancelled status'
  #
- def cancel_job(job)
-   report_error(job) if job['on_error']
+ def cancel_job(job, error = false)
+   report_success(job) if !error && job['on_success']
+   report_error(job) if error && job['on_error']
    if JR.config[:remove_cancelled_jobs]
      storage.destroy(job)
    else
@@ -178,7 +183,7 @@ module JobReactor
  #
  def retry_jobs
    storage.jobs_for(name) do |job_to_retry|
-     job_to_retry['args'].merge!(:retrying => true)
+     job_to_retry['args'].merge!(retrying: true)
      try_again(job_to_retry) if job_to_retry
    end
  end
@@ -189,7 +194,7 @@ module JobReactor
    host, port = job['distributor'].split(':')
    port = port.to_i
    distributor = self.connections[[host, port]]
-   data = {:success => { callback_id: job['on_success'], args: job['args']}}
+   data = { :success => { callback_id: job['on_success'], args: job['args']} }
    data[:success].merge!(do_not_delete: true) if job['period'] && job['period'].to_i > 0
    data = Marshal.dump(data)
    send_data_to_distributor(distributor, data)
@@ -11,9 +11,11 @@ module JobReactor
 
  def load(hash, &block)
    hash = storage[hash['id']]
-   hash_copy = { }
-   hash.each { |k, v| hash_copy.merge!(k => v) }
-   block.call(hash_copy) if block_given?
+   if hash
+     hash_copy = { }
+     hash.each { |k, v| hash_copy.merge!(k => v) }
+     block.call(hash_copy) if block_given?
+   end
  end
 
  def save(hash, &block)
@@ -51,7 +51,7 @@ module JobReactor
  #
  def destroy_all_jobs_for(name)
    pattern = "*#{name}_*"
-   storage.del(*storage.keys(pattern))
+   storage.del(*storage.keys(pattern)) unless storage.keys(pattern).empty?
  end
 
  end
@@ -17,15 +17,17 @@ module JobReactor
  hash_copy = {'node' => hash['node']} #need new object, because old one has been 'failed'
 
  storage.hmget(key, *ATTRS) do |record|
-   ATTRS.each_with_index do |attr, i|
-     hash_copy[attr] = record[i]
-   end
-   ['attempt', 'period', 'make_after'].each do |attr|
-     hash_copy[attr] = hash_copy[attr].to_i
-   end
-   hash_copy['args'] = Marshal.load(hash_copy['args'])
+   unless record.compact.empty?
+     ATTRS.each_with_index do |attr, i|
+       hash_copy[attr] = record[i]
+     end
+     ['attempt', 'period', 'make_after'].each do |attr|
+       hash_copy[attr] = hash_copy[attr].to_i
+     end
+     hash_copy['args'] = Marshal.load(hash_copy['args'])
 
-     block.call(hash_copy) if block_given?
+     block.call(hash_copy) if block_given?
+   end
  end
  end
 
metadata CHANGED
@@ -1,8 +1,8 @@
  --- !ruby/object:Gem::Specification
  name: job_reactor
  version: !ruby/object:Gem::Version
-   version: 0.5.0.beta4
-   prerelease: 6
+   version: 0.5.0
+   prerelease:
  platform: ruby
  authors:
  - Anton Mishchuk
@@ -10,7 +10,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2012-06-05 00:00:00.000000000 Z
+ date: 2012-06-14 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: eventmachine
@@ -60,8 +60,9 @@ dependencies:
      - - ! '>='
        - !ruby/object:Gem::Version
          version: '0'
- description: ! " JobReactor is a library for creating and processing background
-   jobs.\n It is client-server distributed system based on EventMachine.\n"
+ description: ! " JobReactor is a library for creating, scheduling and processing
+   background jobs.\n It is asynchronous client-server distributed system based
+   on EventMachine.\n Inspired by DelayedJob, Resque, Beanstalkd, and etc.\n"
  email: anton.mishchuk@gmial.com
  executables: []
  extensions: []
@@ -99,9 +100,9 @@ required_ruby_version: !ruby/object:Gem::Requirement
  required_rubygems_version: !ruby/object:Gem::Requirement
    none: false
    requirements:
-   - - ! '>'
+   - - ! '>='
      - !ruby/object:Gem::Version
-       version: 1.3.1
+       version: '0'
  requirements: []
  rubyforge_project:
  rubygems_version: 1.8.24