rocketjob 0.9.1 → 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: b2d5bd2e258623a70cb2ac81bb559db1302d2c40
- data.tar.gz: 5e478b33ac8e404496ca112536dcd57e24d3b369
+ metadata.gz: 4f0d0679cdb6e1b6fb284b1dfd2c90b5b201ffe7
+ data.tar.gz: ea135d888befde2dd286242a8c5bb1c338a45128
  SHA512:
- metadata.gz: 6c39b919588cd3f58490294e92ef2233f397a867b4e3bf41073e16374238348f6a2ec892a2dfbcc73ed838ef62c3344a6a3d7ec9289e70ba4ee575cf5ab096a7
- data.tar.gz: 07603275a4346907ea05613012582cbf746571011d25bb4bb42d3bfa022932c8867030025a338e83aa21f8de9eb597e9cee83d9cb34e3eb9175054da8c2101d9
+ metadata.gz: c8640b98d22541f3739a4a61fd726981a3f349c0b57ecdaf88e006839e5ac98d3505144723df65a7f98505951a9f971c0d393669fa655e63b445221a64d8e485
+ data.tar.gz: 88665fc325e161c4cb3d4dbdf506250c3891742199c6e3d9c0b5f74064f245faf08b21213222ff0679fff24ee2cea9bef4b2af3ae7da1f146dac57e472800d30
data/README.md CHANGED
@@ -1,330 +1,20 @@
- # rocketjob[![Build Status](https://secure.travis-ci.org/rocketjob/rocketjob.png?branch=master)](http://travis-ci.org/rocketjob/rocketjob) ![](http://ruby-gem-downloads-badge.herokuapp.com/rocketjob?type=total)
+ # rocketjob [![Gem Version](https://badge.fury.io/rb/rocketjob.svg)](http://badge.fury.io/rb/rocketjob) [![Build Status](https://secure.travis-ci.org/rocketjob/rocketjob.png?branch=master)](http://travis-ci.org/rocketjob/rocketjob) ![](http://ruby-gem-downloads-badge.herokuapp.com/rocketjob?type=total)

- High volume, priority based, background job processing solution for Ruby.
+ High volume, priority based, distributed, background job processing solution for Ruby.

  ## Status

- Beta - Feedback on the API is welcome. API may change.
+ Production Ready

- Already in use in production internally processing large files with millions
+ Already in use in production processing large files with millions
  of records, as well as large jobs to walk though large databases.

- ## Why?
+ ## Documentation

- We have tried for years to make both `resque` and more recently `sidekiq`
- work for large high performance batch processing.
- Even `sidekiq-pro` was purchased and used in an attempt to process large batches.
+ * [Guide](http://rocketjob.io/)
+ * [API Reference](http://www.rubydoc.info/gems/rocketjob/)

- Unfortunately, after all the pain and suffering with the existing asynchronous
- worker solutions none of them have worked in our production environment without
- significant hand-holding and constant support. Mysteriously the odd record/job
- was disappearing when processing 100's of millions of jobs with no indication
- where those lost jobs went.
-
- In our environment we cannot lose even a single job or record, as all data is
- business critical. The existing batch processing solution do not supply any way
- to collect the output from batch processing and as a result every job has custom
- code to collect it's output. rocketjob has built in support to collect the results
- of any batch job.
-
- High availability and high throughput were being limited by how much we could get
- through `redis`. Being a single-threaded process it is constrained to a single
- CPU. Putting `redis` on a large multi-core box does not help since it will not
- use more than one CPU at a time.
- Additionally, `redis` is constrained to the amount of physical memory is available
- on the server.
- `redis` worked very well when processing was below around 100,000 jobs a day,
- when our workload suddenly increased to over 100,000,000 a day it could not keep
- up. Its single CPU would often hit 100% CPU utilization when running many `sidekiq-pro`
- servers. We also had to store actual job data in a separate MySQL database since
- it would not fit in memory on the `redis` server.
-
- `rocketjob` was created out of necessity due to constant support. End-users were
- constantly contacting the development team to ask on the status of "hung" or
- "in-complete" jobs, as part of our DevOps role.
-
- Another significant production support challenge is trying to get `resque` or `sidekiq`
- to process the batch jobs in a very specific order. Switching from queue-based
- to priority-based job processing means that all jobs are processed in the order of
- their priority and not what queues are defined on what servers and in what quantity.
- This approach has allowed us to significantly increase the CPU and IO utilization
- across all worker machines. The traditional queue based approach required constant
- tweaking in the production environment to try and balance workload without overwhelming
- any one server.
-
- End-users are now able to modify the priority of their various jobs at runtime
- so that they can get that business critical job out first, instead of having to
- wait for other jobs of the same type/priority to finish first.
-
- Since `rocketjob` uploads the entire file, or all data for processing it does not
- require jobs to store the data in other databases.
- Additionally, `rocketjob` supports encryption and compression of any data uploaded
- into Sliced Jobs to ensure PCI compliance and to prevent sensitive from being exposed
- either at rest in the data store, or in flight as it is being read or written to the
- backend data store.
- Often large files received for processing contain sensitive data that must not be exposed
- in the backend job store. Having this capability built-in ensures all our jobs
- are properly securing sensitive data.
-
- Since moving to `rocketjob` our production support has diminished and now we can
- focus on writing code again. :)
-
- ## Introduction
-
- `rocketjob` is a global "priority based queue" (https://en.wikipedia.org/wiki/Priority_queue)
- All jobs are placed in a single global queue and the job with the highest priority
- is processed first. Jobs with the same priority are processed on a first-in
- first-out (FIFO) basis.
-
- This differs from the traditional approach of separate queues for jobs which
- quickly becomes cumbersome when there are for example over a hundred different
- types of jobs.
-
- The global priority based queue ensures that the workers are utilized to their
- capacity without requiring constant manual intervention.
-
- `rocketjob` is designed to handle hundreds of millions of concurrent jobs
- that are often encountered in high volume batch processing environments.
- It is designed from the ground up to support large batch file processing.
- For example a single file that contains millions of records to be processed
- as quickly as possible without impacting other jobs with a higher priority.
-
- ## Management
-
- The companion project [rocketjob mission control](https://github.com/rocketjob/rocket_job_mission_control)
- contains the Rails Engine that can be loaded into your Rails project to add
- a web interface for viewing and managing `rocketjob` jobs.
-
- `rocketjob mission control` can also be run stand-alone in a shell Rails application.
-
- By separating `rocketjob mission control` into a separate gem means it does not
- have to be loaded where `rocketjob` jobs are defined or run.
-
- ## Jobs
-
- Simple single task jobs:
-
- Example job to run in a separate worker process
-
- ```ruby
- class MyJob < RocketJob::Job
- # Method to call asynchronously by the worker
- def perform(email_address, message)
- # For example send an email to the supplied address with the supplied message
- send_email(email_address, message)
- end
- end
- ```
-
- To queue the above job for processing:
-
- ```ruby
- MyJob.perform_later('jack@blah.com', 'lets meet')
- ```
-
- ## Directory Monitoring
-
- A common task with many batch processing systems is to look for the appearance of
- new files and kick off jobs to process them. `DirmonJob` is a job designed to do
- this task.
-
- `DirmonJob` runs every 5 minutes by default, looking for new files that have appeared
- based on configured entries called `DirmonEntry`. Ultimately these entries will be
- configurable via `rocketjob_mission_control`, the web management interface for `rocketjob`.
-
- Example, creating a `DirmonEntry`
-
- ```ruby
- RocketJob::DirmonEntry.new(
- path: 'path_to_monitor/*',
- job: 'Jobs::TestJob',
- arguments: [ { input: 'yes' } ],
- properties: { priority: 23, perform_method: :event },
- archive_directory: '/exports/archive'
- )
- ```
-
- The attributes of DirmonEntry:
-
- * path <String>
-
- Wildcard path to search for files in.
- For details on valid path values, see: http://ruby-doc.org/core-2.2.2/Dir.html#method-c-glob
-
- Example:
-
- * input_files/process1/*.csv*
- * input_files/process2/**/*
-
- * job <String>
-
- Name of the job to start
-
- * arguments <Array>
-
- Any user supplied arguments for the method invocation
- All keys must be UTF-8 strings. The values can be any valid BSON type:
-
- * Integer
- * Float
- * Time (UTC)
- * String (UTF-8)
- * Array
- * Hash
- * True
- * False
- * Symbol
- * nil
- * Regular Expression
-
- _Note_: Date is not supported, convert it to a UTC time
-
- * properties <Hash>
-
- Any job properties to set.
-
- Example, override the default job priority:
-
- ```ruby
- { priority: 45 }
- ```
-
- * archive_directory
-
- Archive directory to move the file to before the job is started. It is important to
- move the file before it is processed so that it is not picked up again for processing.
- If no archive_directory is supplied the file will be moved to a folder called '_archive'
- in the same folder as the file itself.
-
- If the `path` above is a relative path the relative path structure will be
- maintained when the file is moved to the archive path.
-
- * enabled <Boolean>
-
- Allow a monitoring entry to be disabled so that it is ignored by `DirmonJob`.
- This feature is useful for operations to temporarily stop processing files
- from a particular source, without having to completely delete the `DirmonEntry`.
- It can also be used to create a `DirmonEntry` without it becoming immediately
- active.
- ```
-
- ### Starting the directory monitor
-
- The directory monitor job only needs to be started once per installation by running
- the following code:
-
- ```ruby
- RocketJob::Jobs::DirmonJob.perform_later
- ```
-
- The polling interval to check for new files can be modified when starting the job
- for the first time by adding:
- ```ruby
- RocketJob::Jobs::DirmonJob.perform_later do |job|
- job.check_seconds = 180
- end
- ```
-
- The default priority for `DirmonJob` is 40, to increase it's priority:
- ```ruby
- RocketJob::Jobs::DirmonJob.perform_later do |job|
- job.check_seconds = 300
- job.priority = 25
- end
- ```
-
- Once `DirmonJob` has been started it's priority and check interval can be
- changed at any time as follows:
-
- ```ruby
- RocketJob::Jobs::DirmonJob.first.set(check_seconds: 180, priority: 20)
- ```
-
- The `DirmonJob` will automatically re-schedule a new instance of itself to run in
- the future after it completes a each scan/run. If successful the current job instance
- will destroy itself.
-
- In this way it avoids having a single Directory Monitor process that constantly
- sits there monitoring folders for changes. More importantly it avoids a "single
- point of failure" that is typical for earlier directory monitoring solutions.
- Every time `DirmonJob` runs and scans the paths for new files it could be running
- on a new worker. If any server/worker is removed or shutdown it will not stop
- `DirmonJob` since it will just run on another worker instance.
-
- There can only be one `DirmonJob` instance `queued` or `running` at a time. Any
- attempt to start a second instance will result in an exception.
-
- If an exception occurs while running `DirmonJob`, a failed job instance will remain
- in the job list for problem determination. The failed job cannot be restarted and
- should be destroyed if no longer needed.
-
- ## Rails Configuration
-
- MongoMapper will already configure itself in Rails environments. `rocketjob` can
- be configured to use a separate MongoDB instance from the Rails application as follows:
-
- For example, we may want `RocketJob::Job` to be stored in a Mongo Database that
- is replicated across data centers, whereas we may not want to replicate the
- `RocketJob::SlicedJob`** slices due to it's sheer volume.
-
- ```ruby
- config.before_initialize do
- # Share the common mongo configuration file
- config_file = root.join('config', 'mongo.yml')
- if config_file.file?
- config = YAML.load(ERB.new(config_file.read).result)
- if config["#{Rails.env}_rocketjob]
- options = (config['options']||{}).symbolize_keys
- options[:logger] = SemanticLogger::DebugAsTraceLogger.new('Mongo:rocketjob')
- RocketJob::Config.mongo_connection = Mongo::MongoClient.from_uri(config['uri'], options)
- end
- # It is also possible to store the jobs themselves in a separate MongoDB database
- if config["#{Rails.env}_rocketjob_work]
- options = (config['options']||{}).symbolize_keys
- options[:logger] = SemanticLogger::DebugAsTraceLogger.new('Mongo:rocketjob_work')
- RocketJob::Config.mongo_work_connection = Mongo::MongoClient.from_uri(config['uri'], options)
- end
- else
- puts "\nmongo.yml config file not found: #{config_file}"
- end
- end
- ```
-
- For an example config file, `config/mongo.yml`, see [mongo.yml](https://github.com/rocketjob/rocketjob/blob/master/test/config/mongo.yml)
-
- ## Standalone Configuration
-
- When running `rocketjob` in a standalone environment without Rails, the MongoDB
- connections will need to be setup as follows:
-
- ```ruby
- options = {
- pool_size: 50,
- pool_timeout: 5,
- logger: SemanticLogger::DebugAsTraceLogger.new('Mongo:Work'),
- }
-
- # For example when using a replica-set for high availability
- uri = 'mongodb://mongo1.site.com:27017,mongo2.site.com:27017/production_rocketjob'
- RocketJob::Config.mongo_connection = Mongo::MongoClient.from_uri(uri, options)
-
- # Use a separate database, or even server for `RocketJob::SlicedJob` slices
- uri = 'mongodb://mongo1.site.com:27017,mongo2.site.com:27017/production_rocketjob_slices'
- RocketJob::Config.mongo_work_connection = Mongo::MongoClient.from_uri(uri, options)
- ```
-
- ## Requirements
-
- MongoDB V2.6 or greater. V3 is recommended
-
- * V2.6 includes a feature to allow lookups using the `$or` clause to use an index
-
- ## Meta
-
- * Code: `git clone git://github.com/rocketjob/rocketjob.git`
- * Home: <https://github.com/rocketjob/rocketjob>
- * Bugs: <http://github.com/rocketjob/rocketjob/issues>
- * Gems: <http://rubygems.org/gems/rocketjob>
+ ## Versioning

  This project uses [Semantic Versioning](http://semver.org/).

data/lib/rocket_job/dirmon_entry.rb CHANGED
@@ -2,6 +2,10 @@ module RocketJob
  class DirmonEntry
  include MongoMapper::Document

+ # Name for this path entry used to identify this DirmonEntry
+ # in the user interface
+ key :name, String
+
  # Wildcard path to search for files in
  #
  # Example:
@@ -12,13 +16,13 @@ module RocketJob
  #
  # Note
  # - If there are no '*' in the path then an exact filename match is expected
- key :path, String
+ key :path, String

  # Job to start
  #
  # Example:
  # "ProcessItJob"
- key :job, String
+ key :job_name, String

  # Any user supplied arguments for the method invocation
  # All keys must be UTF-8 strings. The values can be any valid BSON type:
@@ -35,13 +39,13 @@ module RocketJob
  # Regular Expression
  #
  # Note: Date is not supported, convert it to a UTC time
- key :arguments, Array, default: []
+ key :arguments, Array, default: []

  # Any job properties to set
  #
  # Example, override the default job priority:
  # { priority: 45 }
- key :properties, Hash, default: {}
+ key :properties, Hash, default: {}

  # Archive directory to move files to when processed to prevent processing the
  # file again.
@@ -53,8 +57,41 @@ module RocketJob
  key :archive_directory, String

  # Allow a monitoring path to be temporarily disabled
- key :enabled, Boolean, default: true
+ key :enabled, Boolean, default: true
+
+ # Method to perform on the job, usually :perform
+ key :perform_method, Symbol, default: :perform
+
+ # Returns the Job to be queued
+ def job_class
+   job_name.nil? ? nil : job_name.constantize
+ end
+
+ validates_presence_of :path, :job_name
+
+ validates_each :job_name do |record, attr, value|
+   exists = false
+   begin
+     exists = value.nil? ? false : value.constantize.ancestors.include?(RocketJob::Job)
+   rescue NameError => exc
+   end
+   record.errors.add(attr, 'job_name must be defined and must be derived from RocketJob::Job') unless exists
+ end
+
+ validates_each :arguments do |record, attr, value|
+   if klass = record.job_class
+     count = klass.argument_count(record.perform_method)
+     record.errors.add(attr, "There must be #{count} argument(s)") if value.size != count
+   end
+ end
+
+ validates_each :properties do |record, attr, value|
+   if record.job_name && (methods = record.job_class.instance_methods)
+     value.each_pair do |key, value|
+       record.errors.add(attr, "Unknown property: #{key.inspect} with value: #{value}") unless methods.include?("#{key}=".to_sym)
+     end
+   end
+ end

- validates_presence_of :path, :job
  end
  end
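
Taken together, the `DirmonEntry` changes above rename the `job` key to `job_name`, add `name` and `perform_method` keys, expose a `job_class` helper, and validate entries against the target job class. A minimal sketch of creating an entry against these 1.0.0 fields (the `name` and `path` values are hypothetical placeholders; `Jobs::TestJob` is the job class used throughout the gem's own tests, and a configured MongoDB connection is assumed):

```ruby
require 'rocketjob'

# Sketch only: key names are taken from the diff above; the name and path
# values are hypothetical, and a MongoDB connection is assumed to be set up.
entry = RocketJob::DirmonEntry.new(
  name:              'Nightly test files',        # new in 1.0.0
  path:              'input_files/nightly/*.csv',
  job_name:          'Jobs::TestJob',             # was `job:` in 0.9.1
  perform_method:    :perform,                    # new in 1.0.0, defaults to :perform
  arguments:         [ { input: 'yes' } ],
  properties:        { priority: 23 },
  archive_directory: '/exports/archive'
)

entry.valid?    # runs the new job_name / arguments / properties validations
entry.job_class # constantizes job_name when the class is resolvable
```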
data/lib/rocket_job/job.rb CHANGED
@@ -215,6 +215,11 @@ module RocketJob
  where(state: 'paused').each { |job| job.resume! }
  end

+ # Returns the number of required arguments for this job
+ def self.argument_count(method=:perform)
+   instance_method(method).arity
+ end
+
  # Returns [true|false] whether to collect the results from running this batch
  def collect_output?
  collect_output == true
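
The new `Job.argument_count` helper is a thin wrapper around Ruby's `Method#arity`, which is what the `arguments` validation on `DirmonEntry` relies on. A plain-Ruby sketch of that reflection (the `ExampleJob` class is hypothetical and deliberately not tied to rocketjob):

```ruby
# Plain-Ruby illustration of the reflection behind argument_count (sketch only).
class ExampleJob
  def perform(record)
    # one required parameter
  end

  def sum(a, b)
    # two required parameters
    a + b
  end
end

ExampleJob.instance_method(:perform).arity # => 1
ExampleJob.instance_method(:sum).arity     # => 2
# Optional or splat parameters produce negative arity values.
```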
data/lib/rocket_job/jobs/dirmon_job.rb CHANGED
@@ -96,7 +96,7 @@ module RocketJob
  def check_file(entry, file_name, previous_size)
  size = File.size(file_name)
  if previous_size && (previous_size == size)
- logger.info("File stabilized: #{file_name}. Starting: #{entry.job}")
+ logger.info("File stabilized: #{file_name}. Starting: #{entry.job_name}")
  start_job(entry, file_name)
  nil
  else
@@ -111,8 +111,9 @@ module RocketJob

  # Starts the job for the supplied entry
  def start_job(entry, file_name)
- entry.job.constantize.perform_later(*entry.arguments) do |job|
- # Set properties, also allows :perform_method to be overridden
+ entry.job_class.perform_later(*entry.arguments) do |job|
+ job.perform_method = entry.perform_method
+ # Set properties
  entry.properties.each_pair { |k, v| job.send("#{k}=".to_sym, v) }

  upload_file(job, file_name, entry.archive_directory)
data/lib/rocket_job/version.rb CHANGED
@@ -1,4 +1,4 @@
  # encoding: UTF-8
  module RocketJob #:nodoc
- VERSION = "0.9.1"
+ VERSION = "1.0.0"
  end
data/test/dirmon_entry_test.rb ADDED
@@ -0,0 +1,70 @@
+ require_relative 'test_helper'
+ require_relative 'jobs/test_job'
+
+ # Unit Test for RocketJob::Job
+ class DirmonEntryTest < Minitest::Test
+ context RocketJob::DirmonEntry do
+ teardown do
+ @dirmon_entry.destroy if @dirmon_entry && @dirmon_entry.new_record?
+ end
+
+ context '.config' do
+ should 'support multiple databases' do
+ assert_equal 'test_rocketjob', RocketJob::DirmonEntry.collection.db.name
+ end
+ end
+
+ context '#validate' do
+ should 'existance' do
+ assert entry = RocketJob::DirmonEntry.new(job_name: 'Jobs::TestJob')
+ assert_equal false, entry.valid?
+ assert_equal [ "can't be blank" ], entry.errors[:path], entry.errors.inspect
+ end
+
+ should 'job_name' do
+ assert entry = RocketJob::DirmonEntry.new(path: '/abc/**')
+ assert_equal false, entry.valid?
+ assert_equal ["can't be blank", "job_name must be defined and must be derived from RocketJob::Job"], entry.errors[:job_name], entry.errors.inspect
+ end
+
+ should 'arguments' do
+ assert entry = RocketJob::DirmonEntry.new(
+ job_name: 'Jobs::TestJob',
+ path: '/abc/**'
+ )
+ assert_equal false, entry.valid?
+ assert_equal ["There must be 1 argument(s)"], entry.errors[:arguments], entry.errors.inspect
+ end
+
+ should 'arguments with perform_method' do
+ assert entry = RocketJob::DirmonEntry.new(
+ job_name: 'Jobs::TestJob',
+ path: '/abc/**',
+ perform_method: :sum
+ )
+ assert_equal false, entry.valid?
+ assert_equal ["There must be 2 argument(s)"], entry.errors[:arguments], entry.errors.inspect
+ end
+
+ should 'valid' do
+ assert entry = RocketJob::DirmonEntry.new(
+ job_name: 'Jobs::TestJob',
+ path: '/abc/**',
+ arguments: [1]
+ )
+ assert entry.valid?, entry.errors.inspect
+ end
+
+ should 'valid with perform_method' do
+ assert entry = RocketJob::DirmonEntry.new(
+ job_name: 'Jobs::TestJob',
+ path: '/abc/**',
+ perform_method: :sum,
+ arguments: [1,2]
+ )
+ assert entry.valid?, entry.errors.inspect
+ end
+ end
+
+ end
+ end
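
These new tests lean on a `Jobs::TestJob` defined in `test/jobs/test_job.rb`, which is not part of this diff. A hypothetical sketch of the shape those assertions imply, with a one-argument `perform` and a two-argument `sum`:

```ruby
# Hypothetical sketch: the real test/jobs/test_job.rb is not shown in this diff;
# only the argument counts are implied by the assertions above.
module Jobs
  class TestJob < RocketJob::Job
    def perform(record)
      # one required argument => entry.arguments must contain 1 element
    end

    def sum(a, b)
      # two required arguments => entry.arguments must contain 2 elements
      a + b
    end
  end
end
```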
data/test/dirmon_job_test.rb CHANGED
@@ -11,7 +11,7 @@ class DirmonJobTest < Minitest::Test
  @archive_directory = '/tmp/archive_directory'
  @entry = RocketJob::DirmonEntry.new(
  path: 'abc/*',
- job: 'Jobs::TestJob',
+ job_name: 'Jobs::TestJob',
  arguments: [ { input: 'yes' } ],
  properties: { priority: 23, perform_method: :event },
  archive_directory: @archive_directory
@@ -27,12 +27,6 @@ class DirmonJobTest < Minitest::Test
  FileUtils.remove_dir(@archive_directory, true) if Dir.exist?(@archive_directory)
  end

- context '.config' do
- should 'support multiple databases' do
- assert_equal 'test_rocketjob', RocketJob::DirmonEntry.collection.db.name
- end
- end
-
  context '#archive_file' do
  should 'archive absolute path file' do
  begin
@@ -123,7 +117,7 @@ class DirmonJobTest < Minitest::Test
  job = @dirmon_job.stub(:upload_file, -> j, fn, sp { assert_equal [file_name, @archive_directory], [fn, sp] }) do
  @dirmon_job.start_job(@entry, file_name)
  end
- assert_equal @entry.job, job.class.name
+ assert_equal @entry.job_name, job.class.name
  assert_equal 23, job.priority
  assert_equal [ {:input=>"yes", "before_event"=>true, "event"=>true, "after_event"=>true} ], job.arguments
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: rocketjob
  version: !ruby/object:Gem::Version
- version: 0.9.1
+ version: 1.0.0
  platform: ruby
  authors:
  - Reid Morrison
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2015-07-22 00:00:00.000000000 Z
+ date: 2015-07-30 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: aasm
@@ -108,8 +108,8 @@ dependencies:
  - - "~>"
  - !ruby/object:Gem::Version
  version: '2.0'
- description: Designed for batch processing from single records to millions of records
- in a single batch. Uses threading instead of process forking for greater throughtput.
+ description: High volume, priority based, distributed, background job processing solution
+ for Ruby
  email:
  - reidmo@gmail.com
  executables:
@@ -132,13 +132,14 @@ files:
  - lib/rocket_job/worker.rb
  - lib/rocketjob.rb
  - test/config/mongo.yml
+ - test/dirmon_entry_test.rb
  - test/dirmon_job_test.rb
  - test/job_test.rb
  - test/job_worker_test.rb
  - test/jobs/test_job.rb
  - test/test_helper.rb
  - test/worker_test.rb
- homepage: https://github.com/rocketjob/rocketjob
+ homepage: http://rocketjob.io
  licenses:
  - GPL-3.0
  metadata: {}
@@ -161,9 +162,10 @@ rubyforge_project:
  rubygems_version: 2.4.8
  signing_key:
  specification_version: 4
- summary: High volume, priority based, Enterprise Batch Processing solution for Ruby
+ summary: Background job processing system for Ruby
  test_files:
  - test/config/mongo.yml
+ - test/dirmon_entry_test.rb
  - test/dirmon_job_test.rb
  - test/job_test.rb
  - test/job_worker_test.rb