em-resque 1.0.0.beta1

data/HISTORY.md ADDED
@@ -0,0 +1,3 @@
1
+ ## 1.0.0 (2011-XX-XX)
2
+
3
+ * First release.
data/LICENSE ADDED
@@ -0,0 +1,20 @@
1
+ Copyright (c) Julius de Bruijn
2
+
3
+ Permission is hereby granted, free of charge, to any person obtaining
4
+ a copy of this software and associated documentation files (the
5
+ "Software"), to deal in the Software without restriction, including
6
+ without limitation the rights to use, copy, modify, merge, publish,
7
+ distribute, sublicense, and/or sell copies of the Software, and to
8
+ permit persons to whom the Software is furnished to do so, subject to
9
+ the following conditions:
10
+
11
+ The above copyright notice and this permission notice shall be
12
+ included in all copies or substantial portions of the Software.
13
+
14
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
15
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
16
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
17
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
18
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
19
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
20
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.markdown ADDED
@@ -0,0 +1,275 @@
1
+ EM::Resque
2
+ ==========
3
+
4
+ EM::Resque is an addition to [Resque][0] for asynchronous processing of the
5
+ background jobs created by Resque. It works like the original Resque worker,
6
+ but runs inside an [EventMachine][1] and uses the same process instead of
7
+ forking a new one for every job. It can run N workers inside one process by
8
+ packing them all into Ruby fibers. The library is meant for small but IO-heavy
9
+ jobs which don't use much CPU power on the running server.
10
+
11
+ Use cases
12
+ ---------
13
+
14
+ EM::Resque is good for processing background jobs which are doing lots of IO.
15
+ The evented nature of the reactor core is great when accessing third
16
+ party services with HTTP or doing lots of database-intensive work. When
17
+ combined with a connection pool to a SQL server, it lets you easily control the
18
+ number of connections while remaining extremely scalable.
19
+
20
+ Overview
21
+ --------
22
+
23
+ EM::Resque jobs are created and queued just like with the synchronous version.
24
+ When a job is queued, one of the workers in the fiber pool will pick it up
25
+ and process it.
26
+
27
+ IO performed inside a job must never block the other workers: database
28
+ operations and similar calls should be handled with EventMachine-aware
29
+ libraries to allow concurrent processing.
30
+
31
+ Resque jobs are Ruby classes (or modules) which respond to the
32
+ `perform` method. Here's an example:
33
+
34
+ ``` ruby
35
+ class Pinger
36
+   @queue = :ping_publisher
37
+
38
+   def self.perform(url)
39
+     # em-http-request performs the HTTP call without blocking the reactor
40
+     EventMachine::HttpRequest.new(url).get.response
41
+   end
42
+ end
43
+ ```
45
+
46
+ The `@queue` class instance variable determines which queue `Pinger`
47
+ jobs will be placed in. Queues are arbitrary and created on the fly -
48
+ you can name them whatever you want and have as many as you want.
49
+
50
+ To place a `Pinger` job on the `ping_publisher` queue, we might add this
51
+ to our application's pre-existing `Callback` class:
52
+
53
+ ``` ruby
54
+ class Callback
55
+   def async_ping_publisher
56
+     Resque.enqueue(Pinger, self.callback_url)
57
+   end
58
+ end
59
+ ```
60
+
61
+ Now when we call `callback.async_ping_publisher` in our
62
+ application, a job will be created and placed on the `ping_publisher`
63
+ queue.
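Behind the scenes, `Resque.enqueue` pushes a JSON payload onto a Redis list named after the queue; a minimal sketch of that payload format (the URL is a made-up example):

```ruby
require 'json'

# Each job is stored as a JSON hash holding the class name and the arguments.
payload = { 'class' => 'Pinger', 'args' => ['http://example.com/ping'] }
encoded = JSON.generate(payload)

# A worker later pops and decodes it, then calls Pinger.perform(*args).
decoded = JSON.parse(encoded)
```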
64
+
65
+ For more use cases, please refer to [the original Resque manual][0].
66
+
67
+ Let's start 100 async workers to work on `ping_publisher` jobs:
68
+
69
+ $ cd app_root
70
+ $ QUEUE=ping_publisher FIBERS=100 rake em_resque:work
71
+
72
+ This starts the EM::Resque process, loads 100 fibers with a worker inside
73
+ each fiber, and tells them to work off the `ping_publisher` queue. As soon as
74
+ a worker starts its first IO action it yields, waiting for the IO to complete
75
+ and allowing another worker to start a new job. The event loop resumes the
76
+ worker when data comes back from the IO action.
77
+
78
+ Each worker also reserves its current job so the other workers won't touch it.
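The yield/resume cycle described above is ordinary Fiber behaviour; a minimal sketch (not EM::Resque's actual scheduler, which em-synchrony drives):

```ruby
# A worker fiber "yields" while waiting on IO and is resumed with the result,
# letting other fibers run in the meantime.
worker = Fiber.new do
  data = Fiber.yield(:waiting_on_io) # pause here until the reactor has data
  "processed #{data}"
end

worker.resume                       # runs until the yield
result = worker.resume('response')  # hands the IO result back, fiber finishes
```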
79
+
80
+ Workers can be given multiple queues (a "queue list") and run on
81
+ multiple machines. In fact they can be run anywhere with network
82
+ access to the Redis server.
83
+
84
+ Jobs
85
+ ----
86
+
87
+ What should you run in the background with EM::Resque? Anything with lots of
88
+ IO that takes any time at all. The best use case is gathering data from and
89
+ sending pings to third-party services, which may or may not answer in decent time.
90
+
91
+ At SponsorPay we use EM::Resque to process the following types of jobs:
92
+
93
+ * Simple messaging between our frontend and backend software
94
+ * Pinging publishers and affiliate networks
95
+
96
+ We're handling a tremendous amount of traffic with a bit over 100 workers,
97
+ using far fewer database connections and much less memory and CPU power than
98
+ the synchronous, forking Resque or Delayed Job.
99
+
100
+ All the environment options from the original Resque also work in EM::Resque.
101
+ There are also a couple of additional variables.
102
+
103
+ ### The number of fibers
104
+
105
+ The number of fibers for the current process is set with the `FIBERS` variable.
106
+ One fiber equals one worker. They all poll the same queue and are terminated
107
+ when the main process terminates. The default value is 1.
108
+
109
+ $ QUEUE=ping_publisher FIBERS=50 rake em_resque:work
110
+
111
+ ### The number of green threads
112
+
113
+ EventMachine can defer long-running operations to a pool of background threads.
114
+ The `CONCURRENCY` variable sets the size of that pool; the default value is 20.
115
+
116
+ $ QUEUE=ping_publisher CONCURRENCY=20 rake em_resque:work
117
+
118
+ ### Signals
119
+
120
+ EM::Resque workers respond to a few different signals:
121
+
122
+ * `QUIT` / `TERM` / `INT` - Wait for workers to finish processing then exit
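A graceful-stop handler like this can be sketched in plain Ruby (a simplified stand-in for what the worker machine installs internally):

```ruby
shutdown_requested = false

# All three signals request the same graceful shutdown.
%w(QUIT TERM INT).each do |signal|
  trap(signal) { shutdown_requested = true }
end

# Simulate a shutdown request by signalling our own process.
Process.kill('TERM', Process.pid)
sleep 0.1 # give the trap handler a chance to run
```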
123
+
124
+ The Front End
125
+ -------------
126
+
127
+ EM::Resque uses the same frontend as Resque.
128
+
129
+ EM::Resque Dependencies
130
+ -----------------------
131
+
132
+ $ gem install bundler
133
+ $ bundle install
134
+
135
+ Installing EM::Resque
136
+ ---------------------
137
+
138
+ ### In a Rack app, as a gem
139
+
140
+ First install the gem.
141
+
142
+ $ gem install em-resque
143
+
144
+ Next include it in your application.
145
+
146
+ ``` ruby
147
+ require 'em-resque'
148
+ ```
149
+
150
+ Now start your application:
151
+
152
+ rackup config.ru
153
+
154
+ That's it! You can now create EM::Resque jobs from within your app.
155
+
156
+ To start a worker, create a Rakefile in your app's root (or add this
157
+ to an existing Rakefile):
158
+
159
+ ``` ruby
160
+ require 'your/app'
161
+ require 'em-resque/tasks'
162
+ ```
163
+
164
+ Now:
165
+
166
+ $ QUEUE=* FIBERS=50 rake em_resque:work
167
+
168
+ Alternatively, you can define a `resque:setup` hook in your Rakefile if you
169
+ don't want to load your app every time rake runs.
170
+
171
+ ### In a Rails 3 app, as a gem
172
+
173
+ *EM::Resque does not support Rails with Rake at the moment; this needs more work.*
174
+
175
+ To run EM::Resque with your Rails application, you need a dedicated script to
176
+ load all the needed libraries and start the workers.
177
+
178
+ *script/resque_async.rb*
179
+
180
+ ``` ruby
181
+ RAILS_ENV = ENV['RAILS_ENV'] || 'development_async'
182
+ RAILS_ROOT = Dir.pwd
183
+
184
+ require 'rubygems'
185
+ require 'yaml'
186
+ require 'uri'
187
+ require 'em-resque'
188
+ require 'em-resque/worker_machine'
189
+ require 'em-resque/task_helper'
190
+ require 'resque-retry'
191
+ require 'em-synchrony'
192
+ require 'em-synchrony/connection_pool'
193
+ require 'em-synchrony/mysql2'
194
+
195
+ Dir.glob(File.join(RAILS_ROOT, 'lib', 'async_worker', '**', '*.rb')).sort.each{|f| require File.expand_path(f)}
196
+
197
+ resque_config = YAML.load_file("#{RAILS_ROOT}/config/resque.yml")
198
+ proxy_config = YAML.load_file("#{RAILS_ROOT}/config/proxy.yml")
199
+ PROXY = proxy_config ? proxy_config[RAILS_ENV] : nil
200
+
201
+ EM::Resque.redis = resque_config[RAILS_ENV]
202
+ EM::Resque::WorkerMachine.new(TaskHelper.parse_opts_from_env).start
203
+ ```
204
+
205
+ You can start the script with the same environment variables as with the Rake
206
+ task.
207
+
208
+ We currently use our own minimal ORM backed by an EM-powered MySQL connection
209
+ pool to handle our models, but [em-synchrony][2] ships an em-activerecord
210
+ library which can be combined with the [async mysql2][3] driver to handle SQL
211
+ connections inside the EventMachine.
212
+
213
+ Configuration
214
+ -------------
215
+
216
+ You may want to change the Redis host and port Resque connects to, or
217
+ set various other options at startup.
218
+
219
+ EM::Resque has a `redis` setter which can be given a string or a Redis
220
+ object. This means if you're already using Redis in your app, EM::Resque
221
+ can re-use the existing connection. EM::Resque uses the non-blocking
222
+ em-redis client by default.
223
+
224
+ String: `EM::Resque.redis = 'localhost:6379'`
225
+
226
+ Redis: `EM::Resque.redis = $redis`
227
+
228
+ TODO: Better configuration instructions.
229
+
230
+ Namespaces
231
+ ----------
232
+
233
+ If you're running multiple, separate instances of Resque you may want
234
+ to namespace the keyspaces so they do not overlap. This is not unlike
235
+ the approach taken by many memcached clients.
236
+
237
+ This feature is provided by the [redis-namespace][rs] library, which
238
+ Resque uses by default to separate the keys it manages from other keys
239
+ in your Redis server.
240
+
241
+ Simply use the `EM::Resque.redis.namespace` accessor:
242
+
243
+ ``` ruby
244
+ EM::Resque.redis.namespace = "resque:SponsorPay"
245
+ ```
246
+
247
+ We recommend sticking this in your initializer somewhere after Redis
248
+ is configured.
249
+
250
+ Contributing
251
+ ------------
252
+
253
+ 1. Fork EM::Resque
254
+ 2. Create a topic branch - `git checkout -b my_branch`
255
+ 3. Push to your branch - `git push origin my_branch`
256
+ 4. Create a [Pull Request](http://help.github.com/pull-requests/) from your branch
257
+ 5. That's it!
258
+
259
+ Meta
260
+ ----
261
+
262
+ * Code: `git clone git://github.com/SponsorPay/em-resque.git`
263
+ * Home: <http://github.com/SponsorPay/em-resque>
264
+ * Bugs: <http://github.com/SponsorPay/em-resque/issues>
265
+ * Gems: <http://gemcutter.org/gems/em-resque>
266
+
267
+ Author
268
+ ------
269
+
270
+ Julius de Bruijn :: julius.bruijn@sponsorpay.com :: @pimeys
271
+
272
+ [0]: http://github.com/defunkt/resque
273
+ [1]: http://rubyeventmachine.com/
274
+ [2]: https://github.com/igrigorik/em-synchrony
275
+ [3]: https://github.com/brianmario/mysql2/blob/master/lib/mysql2/em.rb
data/Rakefile ADDED
@@ -0,0 +1,77 @@
1
+ #
2
+ # Setup
3
+ #
4
+
5
+ $LOAD_PATH.unshift 'lib'
6
+ require 'em-resque/tasks'
7
+
8
+ def command?(command)
9
+ system("type #{command} > /dev/null 2>&1")
10
+ end
11
+
12
+
13
+ #
14
+ # Tests
15
+ #
16
+
17
+ require 'rake/testtask'
18
+
19
+ task :default => :test
20
+
21
+ if command?(:rg)
22
+ desc "Run the test suite with rg"
23
+ task :test do
24
+ Dir['test/**/*_test.rb'].each do |f|
25
+ sh("rg #{f}")
26
+ end
27
+ end
28
+ else
29
+ Rake::TestTask.new do |test|
30
+ test.libs << "test"
31
+ test.test_files = FileList['test/**/*_test.rb']
32
+ end
33
+ end
34
+
35
+ if command? :kicker
36
+ desc "Launch Kicker (like autotest)"
37
+ task :kicker do
38
+ puts "Kicking... (ctrl+c to cancel)"
39
+ exec "kicker -e rake test lib examples"
40
+ end
41
+ end
42
+
43
+
44
+ #
45
+ # Install
46
+ #
47
+
48
+ task :install => [ 'redis:install', 'dtach:install' ]
49
+
50
+
51
+ #
52
+ # Documentation
53
+ #
54
+
55
+ begin
56
+ require 'sdoc_helpers'
57
+ rescue LoadError
58
+ end
59
+
60
+
61
+ #
62
+ # Publishing
63
+ #
64
+
65
+ desc "Push a new version to Gemcutter"
66
+ task :publish do
67
+ require 'em-resque/version'
68
+
69
+ sh "gem build em-resque.gemspec"
70
+ sh "gem push em-resque-#{EventMachine::Resque::Version}.gem"
71
+ sh "git tag v#{EventMachine::Resque::Version}"
72
+ sh "git push origin v#{EventMachine::Resque::Version}"
73
+ sh "git push origin master"
74
+ sh "git clean -fd"
75
+ exec "rake pages"
76
+ end
77
+
@@ -0,0 +1,14 @@
1
+ class TaskHelper
2
+ def self.parse_opts_from_env
3
+ integer_keys = %w(CONCURRENCY INTERVAL FIBERS)
4
+ string_keys = %w(QUEUE QUEUES PIDFILE)
5
+ bool_keys = %w(LOGGING VERBOSE VVERBOSE)
6
+
7
+ ENV.reduce({}) do |acc, (k, v)|
8
+ acc = acc.merge(k.downcase.to_sym => v.to_i) if integer_keys.any?{|ik| ik == k}
9
+ acc = acc.merge(k.downcase.to_sym => v.to_s) if string_keys.any?{|sk| sk == k}
10
+ acc = acc.merge(k.downcase.to_sym => v == '1' || v.downcase == 'true') if bool_keys.any?{|bk| bk == k}
11
+ acc
12
+ end
13
+ end
14
+ end
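The parser maps upper-case environment variable names to lower-case symbol keys; a condensed, standalone copy of the logic above, run against a sample hash instead of the real `ENV`:

```ruby
# Condensed version of TaskHelper.parse_opts_from_env with sample input.
integer_keys = %w(CONCURRENCY INTERVAL FIBERS)
string_keys  = %w(QUEUE QUEUES PIDFILE)
bool_keys    = %w(LOGGING VERBOSE VVERBOSE)

env = { 'FIBERS' => '50', 'QUEUE' => 'ping_publisher', 'VERBOSE' => 'true' }

opts = env.reduce({}) do |acc, (k, v)|
  acc[k.downcase.to_sym] = v.to_i if integer_keys.include?(k)
  acc[k.downcase.to_sym] = v.to_s if string_keys.include?(k)
  acc[k.downcase.to_sym] = (v == '1' || v.downcase == 'true') if bool_keys.include?(k)
  acc
end
# opts => {:fibers=>50, :queue=>"ping_publisher", :verbose=>true}
```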
@@ -0,0 +1,14 @@
1
+ require 'em-synchrony'
2
+ require 'em-resque/worker_machine'
3
+ require 'em-resque/task_helper'
4
+
5
+ namespace :em_resque do
6
+ task :setup
7
+
8
+ desc "Start an async Resque worker"
9
+ task :work => [ :setup ] do
10
+ require 'em-resque'
11
+
12
+ EM::Resque::WorkerMachine.new(TaskHelper.parse_opts_from_env).start
13
+ end
14
+ end
@@ -0,0 +1,5 @@
1
+ module EventMachine
2
+ module Resque
3
+ Version = VERSION = '1.0.0.beta1'
4
+ end
5
+ end
@@ -0,0 +1,36 @@
1
+ require 'resque'
2
+
3
+ # A non-forking version of Resque worker, which handles waiting with
4
+ # a non-blocking version of sleep.
5
+ class EventMachine::Resque::Worker < Resque::Worker
6
+ # Overwrite system sleep with the non-blocking version
7
+ def sleep(interval)
8
+ EM::Synchrony.sleep interval
9
+ end
10
+
11
+ # Be sure we're never forking
12
+ def fork
13
+ nil
14
+ end
15
+
16
+ # Simpler startup
17
+ def startup
18
+ register_worker
19
+ @cant_fork = true
20
+ $stdout.sync = true
21
+ end
22
+
23
+ # Tell Redis we've processed a job.
24
+ def processed!
25
+ Resque::Stat << "processed"
26
+ Resque::Stat << "processed:#{self}"
27
+ Resque::Stat << "processed_#{job['queue']}"
28
+ end
29
+
30
+ # The string representation is the same as the id for this worker instance.
31
+ # Can be used with Worker.find
32
+ def to_s
33
+ "#{super}:#{Fiber.current.object_id}"
34
+ end
35
+ alias_method :id, :to_s
36
+ end
@@ -0,0 +1,114 @@
1
+ require 'em-synchrony'
2
+ require 'em-resque'
3
+ require 'em-resque/worker'
4
+
5
+ module EventMachine
6
+ module Resque
7
+ # WorkerMachine is an EventMachine with Resque workers wrapped in Ruby
8
+ # fibers.
9
+ #
10
+ # An instance contains the workers and a system monitor running inside an
11
+ # EventMachine. The monitoring takes care of stopping the machine when all
12
+ # workers are shut down.
13
+
14
+ class WorkerMachine
15
+ # Initializes the machine, creates the fibers and workers, traps quit
16
+ # signals and prunes dead workers
17
+ #
18
+ # == Options
19
+ # concurrency:: The number of green threads inside the machine (default 20)
20
+ # interval:: Time in seconds how often the workers check for new work
21
+ # (default 5)
22
+ # fibers_count:: How many fibers (and workers) to be run inside the
23
+ # machine (default 1)
24
+ # queues:: Which queues to poll (default all)
25
+ # verbose:: Verbose log output (default false)
26
+ # vverbose:: Even more verbose log output (default false)
27
+ # pidfile:: The file to save the process id number
28
+ def initialize(opts = {})
29
+ @concurrency = opts[:concurrency] || 20
30
+ @interval = opts[:interval] || 5
31
+ @fibers_count = opts[:fibers] || 1
32
+ @queues = opts[:queue] || opts[:queues] || '*'
33
+ @verbose = opts[:logging] || opts[:verbose] || false
34
+ @very_verbose = opts[:vverbose] || false
35
+ @pidfile = opts[:pidfile]
36
+
37
+ raise(ArgumentError, "Should have at least one fiber") if @fibers_count.to_i < 1
38
+
39
+ build_workers
40
+ build_fibers
41
+ trap_signals
42
+ prune_dead_workers
43
+ end
44
+
45
+ # Start the machine and start polling queues.
46
+ def start
47
+ EM.synchrony do
48
+ @fibers.each(&:resume)
49
+ system_monitor.resume
50
+ end
51
+ end
52
+
53
+ # Stop the machine.
54
+ def stop
55
+ @workers.each(&:shutdown)
56
+ File.delete(@pidfile) if @pidfile
57
+ end
58
+
59
+ def fibers
60
+ @fibers || []
61
+ end
62
+
63
+ def workers
64
+ @workers || []
65
+ end
66
+
67
+ private
68
+
69
+ # Builds the workers to poll the given queues.
70
+ def build_workers
71
+ queues = @queues.to_s.split(',')
72
+
73
+ @workers = (1..@fibers_count.to_i).map do
74
+ worker = EM::Resque::Worker.new(*queues)
75
+ worker.verbose = @verbose
76
+ worker.very_verbose = @very_verbose
77
+
78
+ worker
79
+ end
80
+ end
81
+
82
+ # Builds the fibers to contain the built workers.
83
+ def build_fibers
84
+ @fibers = @workers.map do |worker|
85
+ Fiber.new do
86
+ worker.log "starting async worker #{worker}"
87
+ worker.work(@interval)
88
+ end
89
+ end
90
+ end
91
+
92
+ # Traps signals TERM, INT and QUIT to stop the machine.
93
+ def trap_signals
94
+ ['TERM', 'INT', 'QUIT'].each { |signal| trap(signal) { stop } }
95
+ end
96
+
97
+ # Deletes worker information from Redis if there's now processes for
98
+ # their pids.
99
+ def prune_dead_workers
100
+ @workers.first.prune_dead_workers if @workers.size > 0
101
+ end
102
+
103
+ # Shuts down the machine if all fibers are dead.
104
+ def system_monitor
105
+ Fiber.new do
106
+ loop do
107
+ EM.stop unless fibers.any?(&:alive?)
108
+ EM::Synchrony.sleep 1
109
+ end
110
+ end
111
+ end
112
+ end
113
+ end
114
+ end
data/lib/em-resque.rb ADDED
@@ -0,0 +1,26 @@
1
+ require 'resque'
2
+ require 'em-synchrony/em-redis'
3
+
4
+ module EM::Resque
5
+ extend Resque
6
+ def redis=(server)
7
+ case server
8
+ when String
9
+ if server =~ %r{redis://}
10
+ redis = EM::Protocols::Redis.connect(:url => server, :thread_safe => true)
11
+ else
12
+ server, namespace = server.split('/', 2)
13
+ host, port, db = server.split(':')
14
+ redis = EM::Protocols::Redis.new(:host => host, :port => port,
15
+ :thread_safe => true, :db => db)
16
+ end
17
+ namespace ||= :resque
18
+
19
+ @redis = Redis::Namespace.new(namespace, :redis => redis)
20
+ when Redis::Namespace
21
+ @redis = server
22
+ else
23
+ @redis = Redis::Namespace.new(:resque, :redis => server)
24
+ end
25
+ end
26
+ end
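The string branch above accepts both `redis://` URLs and `host:port:db/namespace` strings; the splitting logic for the latter can be sketched as:

```ruby
# The same split the setter performs on a 'host:port:db/namespace' string
# (the values here are made-up examples).
server = 'localhost:6379:2/myapp'
server, namespace = server.split('/', 2)
host, port, db = server.split(':')
namespace ||= :resque # fall back to the default namespace
# host => "localhost", port => "6379", db => "2", namespace => "myapp"
```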
@@ -0,0 +1,2 @@
1
+ $LOAD_PATH.unshift File.dirname(__FILE__) + '/../../lib'
2
+ require 'em-resque/tasks'
@@ -0,0 +1,21 @@
1
+ require 'test_helper'
2
+
3
+ context "Resque" do
4
+ setup do
5
+ EM::Resque.redis.flushall
6
+ end
7
+
8
+ test "can put jobs to a queue" do
9
+ assert EM::Resque.enqueue(TestJob, 420, 'foo')
10
+ end
11
+
12
+ test "can read jobs from a queue" do
13
+ EM::Resque.enqueue(TestJob, 420, 'foo')
14
+
15
+ job = EM::Resque.reserve(:jobs)
16
+
17
+ assert_equal TestJob, job.payload_class
18
+ assert_equal 420, job.args[0]
19
+ assert_equal 'foo', job.args[1]
20
+ end
21
+ end
@@ -0,0 +1,115 @@
1
+ # Redis configuration file example
2
+
3
+ # By default Redis does not run as a daemon. Use 'yes' if you need it.
4
+ # Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
5
+ daemonize yes
6
+
7
+ # When run as a daemon, Redis write a pid file in /var/run/redis.pid by default.
8
+ # You can specify a custom pid file location here.
9
+ pidfile ./test/redis-test-cluster.pid
10
+
11
+ # Accept connections on the specified port, default is 6379
12
+ port 9737
13
+
14
+ # If you want you can bind a single interface, if the bind option is not
15
+ # specified all the interfaces will listen for connections.
16
+ #
17
+ # bind 127.0.0.1
18
+
19
+ # Close the connection after a client is idle for N seconds (0 to disable)
20
+ timeout 300
21
+
22
+ # Save the DB on disk:
23
+ #
24
+ # save <seconds> <changes>
25
+ #
26
+ # Will save the DB if both the given number of seconds and the given
27
+ # number of write operations against the DB occurred.
28
+ #
29
+ # In the example below the behaviour will be to save:
30
+ # after 900 sec (15 min) if at least 1 key changed
31
+ # after 300 sec (5 min) if at least 10 keys changed
32
+ # after 60 sec if at least 10000 keys changed
33
+ save 900 1
34
+ save 300 10
35
+ save 60 10000
36
+
37
+ # The filename where to dump the DB
38
+ dbfilename dump-cluster.rdb
39
+
40
+ # For default save/load DB in/from the working directory
41
+ # Note that you must specify a directory not a file name.
42
+ dir ./test/
43
+
44
+ # Set server verbosity to 'debug'
45
+ # it can be one of:
46
+ # debug (a lot of information, useful for development/testing)
47
+ # notice (moderately verbose, what you want in production probably)
48
+ # warning (only very important / critical messages are logged)
49
+ loglevel debug
50
+
51
+ # Specify the log file name. Also 'stdout' can be used to force
52
+ # the demon to log on the standard output. Note that if you use standard
53
+ # output for logging but daemonize, logs will be sent to /dev/null
54
+ logfile stdout
55
+
56
+ # Set the number of databases. The default database is DB 0, you can select
57
+ # a different one on a per-connection basis using SELECT <dbid> where
58
+ # dbid is a number between 0 and 'databases'-1
59
+ databases 16
60
+
61
+ ################################# REPLICATION #################################
62
+
63
+ # Master-Slave replication. Use slaveof to make a Redis instance a copy of
64
+ # another Redis server. Note that the configuration is local to the slave
65
+ # so for example it is possible to configure the slave to save the DB with a
66
+ # different interval, or to listen to another port, and so on.
67
+
68
+ # slaveof <masterip> <masterport>
69
+
70
+ ################################## SECURITY ###################################
71
+
72
+ # Require clients to issue AUTH <PASSWORD> before processing any other
73
+ # commands. This might be useful in environments in which you do not trust
74
+ # others with access to the host running redis-server.
75
+ #
76
+ # This should stay commented out for backward compatibility and because most
77
+ # people do not need auth (e.g. they run their own servers).
78
+
79
+ # requirepass foobared
80
+
81
+ ################################### LIMITS ####################################
82
+
83
+ # Set the max number of connected clients at the same time. By default there
84
+ # is no limit, and it's up to the number of file descriptors the Redis process
85
+ # is able to open. The special value '0' means no limits.
86
+ # Once the limit is reached Redis will close all the new connections sending
87
+ # an error 'max number of clients reached'.
88
+
89
+ # maxclients 128
90
+
91
+ # Don't use more memory than the specified amount of bytes.
92
+ # When the memory limit is reached Redis will try to remove keys with an
93
+ # EXPIRE set. It will try to start freeing keys that are going to expire
94
+ # in little time and preserve keys with a longer time to live.
95
+ # Redis will also try to remove objects from free lists if possible.
96
+ #
97
+ # If all this fails, Redis will start to reply with errors to commands
98
+ # that will use more memory, like SET, LPUSH, and so on, and will continue
99
+ # to reply to most read-only commands like GET.
100
+ #
101
+ # WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
102
+ # 'state' server or cache, not as a real DB. When Redis is used as a real
103
+ # database the memory usage will grow over the weeks, it will be obvious if
104
+ # it is going to use too much memory in the long run, and you'll have the time
105
+ # to upgrade. With maxmemory after the limit is reached you'll start to get
106
+ # errors for write operations, and this may even lead to DB inconsistency.
107
+
108
+ # maxmemory <bytes>
109
+
110
+ ############################### ADVANCED CONFIG ###############################
111
+
112
+ # Glue small output buffers together in order to send small replies in a
113
+ # single TCP packet. Uses a bit more CPU but most of the times it is a win
114
+ # in terms of number of queries per second. Use 'yes' if unsure.
115
+ glueoutputbuf yes
@@ -0,0 +1,115 @@
1
+ # Redis configuration file example
2
+
3
+ # By default Redis does not run as a daemon. Use 'yes' if you need it.
4
+ # Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
5
+ daemonize yes
6
+
7
+ # When run as a daemon, Redis write a pid file in /var/run/redis.pid by default.
8
+ # You can specify a custom pid file location here.
9
+ pidfile ./test/redis-test.pid
10
+
11
+ # Accept connections on the specified port, default is 6379
12
+ port 9736
13
+
14
+ # If you want you can bind a single interface, if the bind option is not
15
+ # specified all the interfaces will listen for connections.
16
+ #
17
+ # bind 127.0.0.1
18
+
19
+ # Close the connection after a client is idle for N seconds (0 to disable)
20
+ timeout 300
21
+
22
+ # Save the DB on disk:
23
+ #
24
+ # save <seconds> <changes>
25
+ #
26
+ # Will save the DB if both the given number of seconds and the given
27
+ # number of write operations against the DB occurred.
28
+ #
29
+ # In the example below the behaviour will be to save:
30
+ # after 900 sec (15 min) if at least 1 key changed
31
+ # after 300 sec (5 min) if at least 10 keys changed
32
+ # after 60 sec if at least 10000 keys changed
33
+ save 900 1
34
+ save 300 10
35
+ save 60 10000
36
+
37
+ # The filename where to dump the DB
38
+ dbfilename dump.rdb
39
+
40
+ # For default save/load DB in/from the working directory
41
+ # Note that you must specify a directory not a file name.
42
+ dir ./test/
43
+
44
+ # Set server verbosity to 'debug'
45
+ # it can be one of:
46
+ # debug (a lot of information, useful for development/testing)
47
+ # notice (moderately verbose, what you want in production probably)
48
+ # warning (only very important / critical messages are logged)
49
+ loglevel debug
50
+
51
+ # Specify the log file name. Also 'stdout' can be used to force
52
+ # the demon to log on the standard output. Note that if you use standard
53
+ # output for logging but daemonize, logs will be sent to /dev/null
54
+ logfile stdout
55
+
56
+ # Set the number of databases. The default database is DB 0, you can select
57
+ # a different one on a per-connection basis using SELECT <dbid> where
58
+ # dbid is a number between 0 and 'databases'-1
59
+ databases 16
60
+
61
+ ################################# REPLICATION #################################
62
+
63
+ # Master-Slave replication. Use slaveof to make a Redis instance a copy of
64
+ # another Redis server. Note that the configuration is local to the slave
65
+ # so for example it is possible to configure the slave to save the DB with a
66
+ # different interval, or to listen to another port, and so on.
67
+
68
+ # slaveof <masterip> <masterport>
69
+
70
+ ################################## SECURITY ###################################
71
+
72
+ # Require clients to issue AUTH <PASSWORD> before processing any other
73
+ # commands. This might be useful in environments in which you do not trust
74
+ # others with access to the host running redis-server.
75
+ #
76
+ # This should stay commented out for backward compatibility and because most
77
+ # people do not need auth (e.g. they run their own servers).
78
+
79
+ # requirepass foobared
80
+
81
+ ################################### LIMITS ####################################
82
+
83
+ # Set the max number of connected clients at the same time. By default there
84
+ # is no limit, and it's up to the number of file descriptors the Redis process
85
+ # is able to open. The special value '0' means no limits.
86
+ # Once the limit is reached Redis will close all the new connections sending
87
+ # an error 'max number of clients reached'.
88
+
89
+ # maxclients 128
90
+
91
+ # Don't use more memory than the specified amount of bytes.
92
+ # When the memory limit is reached Redis will try to remove keys with an
93
+ # EXPIRE set. It will try to start freeing keys that are going to expire
94
+ # in little time and preserve keys with a longer time to live.
95
+ # Redis will also try to remove objects from free lists if possible.
96
+ #
97
+ # If all this fails, Redis will start to reply with errors to commands
98
+ # that will use more memory, like SET, LPUSH, and so on, and will continue
99
+ # to reply to most read-only commands like GET.
100
+ #
101
+ # WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
102
+ # 'state' server or cache, not as a real DB. When Redis is used as a real
103
+ # database the memory usage will grow over the weeks, it will be obvious if
104
+ # it is going to use too much memory in the long run, and you'll have the time
105
+ # to upgrade. With maxmemory after the limit is reached you'll start to get
106
+ # errors for write operations, and this may even lead to DB inconsistency.
107
+
108
+ # maxmemory <bytes>
109
+
110
+ ############################### ADVANCED CONFIG ###############################
111
+
112
+ # Glue small output buffers together in order to send small replies in a
113
+ # single TCP packet. Uses a bit more CPU but most of the times it is a win
114
+ # in terms of number of queries per second. Use 'yes' if unsure.
115
+ glueoutputbuf yes
@@ -0,0 +1,32 @@
1
+ require 'test_helper'
2
+ require 'em-resque/task_helper'
3
+
4
+ context "TaskHelper" do
5
+ setup do
6
+ ENV['CONCURRENCY'] = '20'
7
+ ENV['INTERVAL'] = '5'
8
+ ENV['FIBERS'] = '5'
9
+ ENV['QUEUE'] = 'foo'
10
+ ENV['QUEUES'] = 'foo, bar'
11
+ ENV['PIDFILE'] = '/foo/bar'
12
+ ENV['LOGGING'] = '1'
13
+ ENV['VERBOSE'] = 'true'
14
+ ENV['VVERBOSE'] = 'false'
15
+
16
+ @valid_opts = {
17
+ :concurrency => ENV['CONCURRENCY'].to_i,
18
+ :interval => ENV['INTERVAL'].to_i,
19
+ :fibers => ENV['FIBERS'].to_i,
20
+ :queue => ENV['QUEUE'],
21
+ :queues => ENV['QUEUES'],
22
+ :pidfile => ENV['PIDFILE'],
23
+ :logging => true,
24
+ :verbose => true,
25
+ :vverbose => false }
26
+ end
27
+
28
+ test "can parse all parameters correctly" do
29
+ opts = TaskHelper.parse_opts_from_env
30
+ @valid_opts.each {|k,v| assert_equal v, opts[k]}
31
+ end
32
+ end
@@ -0,0 +1,99 @@
+ require 'rubygems'
+ require 'bundler'
+ Bundler.setup(:default, :test)
+ Bundler.require(:default, :test)
+
+ dir = File.dirname(File.expand_path(__FILE__))
+ $LOAD_PATH.unshift dir + '/../lib'
+ $TESTING = true
+ require 'test/unit'
+
+ begin
+   require 'leftright'
+ rescue LoadError
+ end
+
+ #
+ # make sure we can run redis
+ #
+
+ if !system("which redis-server")
+   puts '', "** can't find `redis-server` in your path"
+   puts "** try running `sudo rake install`"
+   abort ''
+ end
+
+ ##
+ # test/spec/mini 3
+ # http://gist.github.com/25455
+ # chris@ozmm.org
+ #
+ def context(*args, &block)
+   return super unless (name = args.first) && block
+   require 'test/unit'
+   klass = Class.new(defined?(ActiveSupport::TestCase) ? ActiveSupport::TestCase : Test::Unit::TestCase) do
+     def self.test(name, &block)
+       define_method("test_#{name.gsub(/\W/, '_')}", &block) if block
+     end
+     def self.xtest(*args) end
+     def self.setup(&block) define_method(:setup, &block) end
+     def self.teardown(&block) define_method(:teardown, &block) end
+   end
+   (class << klass; self end).send(:define_method, :name) { name.gsub(/\W/, '_') }
+
+   klass.class_eval(&block)
+
+   # XXX: In 1.8.x, not all tests will run unless anonymous classes are kept in scope.
+   ($test_classes ||= []) << klass
+ end
+
+ ##
+ # Helper to perform job classes
+ #
+ module PerformJob
+   def perform_job(klass, *args)
+     resque_job = Resque::Job.new(:testqueue, 'class' => klass, 'args' => args)
+     resque_job.perform
+   end
+ end
+
+ #
+ # fixture classes
+ #
+
+ class TestJob
+   @queue = 'jobs'
+
+   def self.perform(number, string)
+   end
+ end
+
+ class FailJob
+   @queue = 'jobs'
+
+   def self.perform(number, string)
+     raise Exception
+   end
+ end
+
+ def with_failure_backend(failure_backend, &block)
+   previous_backend = Resque::Failure.backend
+   Resque::Failure.backend = failure_backend
+   yield block
+ ensure
+   Resque::Failure.backend = previous_backend
+ end
+
+ class Time
+   # Thanks, Timecop
+   class << self
+     alias_method :now_without_mock_time, :now
+
+     def now_with_mock_time
+       $fake_time || now_without_mock_time
+     end
+
+     alias_method :now, :now_with_mock_time
+   end
+ end
@@ -0,0 +1,19 @@
+ require 'test_helper'
+ require 'em-resque/worker_machine'
+
+ context 'WorkerMachine' do
+   test 'should initialize itself' do
+     machine = EM::Resque::WorkerMachine.new
+
+     assert_equal 1, machine.fibers.count
+     assert_equal 1, machine.workers.count
+     assert_equal Fiber, machine.fibers.first.class
+     assert_equal EM::Resque::Worker, machine.workers.first.class
+   end
+
+   test 'should not run with fewer than one fiber' do
+     assert_raise(ArgumentError, "Should have at least one fiber") do
+       machine = EM::Resque::WorkerMachine.new :fibers => 0
+     end
+   end
+ end
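The second test asserts that `WorkerMachine` rejects a non-positive fiber count with an `ArgumentError`. A minimal self-contained sketch of that guard — the class name and internals here are hypothetical, since `EM::Resque::WorkerMachine`'s real constructor also builds the fibers and workers:

```ruby
# Hypothetical stand-in for the fiber-count guard the test exercises.
class WorkerMachineSketch
  attr_reader :fibers_count

  def initialize(opts = {})
    @fibers_count = (opts[:fibers] || 1).to_i
    raise ArgumentError, "Should have at least one fiber" if @fibers_count < 1
  end
end

ok = WorkerMachineSketch.new # defaults to one fiber, like the first test expects
err = begin
  WorkerMachineSketch.new(:fibers => 0)
  nil
rescue ArgumentError => e
  e.message
end
```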
@@ -0,0 +1,46 @@
+ require 'test_helper'
+ require 'em-resque/worker'
+
+ context "Worker" do
+   setup do
+     EM::Resque.redis.flushall
+   end
+
+   test "processes jobs" do
+     EM.synchrony do
+       EM::Resque.enqueue(TestJob, 420, 'foo')
+       worker = EM::Resque::Worker.new('*')
+       worker.work(0)
+
+       assert_equal 1, EM::Resque.info[:processed]
+
+       worker.shutdown!
+       EM.stop
+     end
+   end
+
+   test "logs the processed queue" do
+     EM.synchrony do
+       EM::Resque.enqueue(TestJob, 420, 'test processed')
+       worker = EM::Resque::Worker.new('*')
+       worker.work(0)
+
+       assert_equal 1, EM::Resque.redis.get("stat:processed_jobs").to_i
+
+       worker.shutdown!
+       EM.stop
+     end
+   end
+
+   test "fails bad jobs" do
+     EM.synchrony do
+       EM::Resque.enqueue(FailJob, 420, "foo")
+       worker = EM::Resque::Worker.new('*')
+       worker.work(0)
+
+       assert_equal 1, Resque::Failure.count
+       worker.shutdown!
+       EM.stop
+     end
+   end
+ end
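All three tests share one shape: enqueue a job, let a worker drain the queue once (`work(0)` polls with a zero interval), then check a counter. That flow, stripped of EventMachine and Redis into an in-memory caricature — every name below is hypothetical, not the gem's API:

```ruby
# In-memory sketch of the enqueue -> work -> stat flow the tests verify.
class MiniWorker
  attr_reader :processed, :failures

  def initialize(queue)
    @queue = queue
    @processed = 0
    @failures = 0
  end

  # Drain the queue once, mirroring worker.work(0) in the tests above.
  def work
    while (job = @queue.shift)
      begin
        job[:class].perform(*job[:args])
        @processed += 1 # Resque increments a stat:processed counter here
      rescue Exception
        @failures += 1  # Resque would record this via Resque::Failure
      end
    end
  end
end

class OkJob
  def self.perform(n, s); end
end

class BoomJob
  def self.perform(n, s); raise "boom"; end
end

queue = [{ :class => OkJob,   :args => [420, 'foo'] },
         { :class => BoomJob, :args => [420, 'foo'] }]
worker = MiniWorker.new(queue)
worker.work
```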
metadata ADDED
@@ -0,0 +1,109 @@
+ --- !ruby/object:Gem::Specification
+ name: em-resque
+ version: !ruby/object:Gem::Version
+   version: 1.0.0.beta1
+   prerelease: 6
+ platform: ruby
+ authors:
+ - Julius de Bruijn
+ autorequire:
+ bindir: bin
+ cert_chain: []
+ date: 2012-01-09 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: resque
+   requirement: &5999720 !ruby/object:Gem::Requirement
+     none: false
+     requirements:
+     - - ~>
+       - !ruby/object:Gem::Version
+         version: 1.19.0
+   type: :runtime
+   prerelease: false
+   version_requirements: *5999720
+ - !ruby/object:Gem::Dependency
+   name: em-synchrony
+   requirement: &5999200 !ruby/object:Gem::Requirement
+     none: false
+     requirements:
+     - - ~>
+       - !ruby/object:Gem::Version
+         version: 1.0.0
+   type: :runtime
+   prerelease: false
+   version_requirements: *5999200
+ - !ruby/object:Gem::Dependency
+   name: em-redis
+   requirement: &5990080 !ruby/object:Gem::Requirement
+     none: false
+     requirements:
+     - - ! '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: *5990080
+ description: ! " Em-resque is a version of Resque, which offers non-blocking and
+   non-forking\n workers. The idea is to have fast as possible workers for tasks
+   with lots of\n IO like pinging third party servers or hitting the database.\n\n
+   \ The async worker is using fibers through Synchrony library to reduce the amount\n
+   \ of callback functions. There's one fiber for worker and if one of the workers\n
+   \ is blocking, it will block all the workers at the same time.\n\n The idea
+   to use this version and not the regular Resque is to reduce the amount\n of SQL
+   connections for high-load services. Using one process for many workers\n gives
+   a better control to the amount of SQL connections.\n\n For using Resque please
+   refer the original project.\n\n https://github.com/defunkt/resque/\n\n The
+   library adds two rake tasks over Resque:\n\n * resque:work_async for working
+   inside the EventMachine\n"
+ email: julius.bruijn@sponsorpay.com
+ executables: []
+ extensions: []
+ extra_rdoc_files:
+ - LICENSE
+ - README.markdown
+ files:
+ - README.markdown
+ - Rakefile
+ - LICENSE
+ - HISTORY.md
+ - lib/em-resque/version.rb
+ - lib/em-resque/worker.rb
+ - lib/em-resque/tasks.rb
+ - lib/em-resque/worker_machine.rb
+ - lib/em-resque/task_helper.rb
+ - lib/tasks/em-resque.rake
+ - lib/em-resque.rb
+ - test/worker_machine_test.rb
+ - test/em-resque_test.rb
+ - test/redis-test.conf
+ - test/task_helpers_test.rb
+ - test/redis-test-cluster.conf
+ - test/worker_test.rb
+ - test/test_helper.rb
+ homepage: http://github.com/SponsorPay/em-resque
+ licenses: []
+ post_install_message:
+ rdoc_options:
+ - --charset=UTF-8
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   none: false
+   requirements:
+   - - ! '>='
+     - !ruby/object:Gem::Version
+       version: '0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   none: false
+   requirements:
+   - - ! '>'
+     - !ruby/object:Gem::Version
+       version: 1.3.1
+ requirements: []
+ rubyforge_project:
+ rubygems_version: 1.8.10
+ signing_key:
+ specification_version: 3
+ summary: Em-resque is an async non-forking version of Resque
+ test_files: []
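The dependency stanzas above pin `resque ~> 1.19.0` and `em-synchrony ~> 1.0.0`, with `em-redis` unconstrained. Expressed as a Gemfile for an application consuming this release — a sketch, not a file shipped in the gem:

```ruby
# Gemfile mirroring the runtime dependencies from the gemspec metadata above.
source "https://rubygems.org"

gem "em-resque", "1.0.0.beta1" # this release
gem "resque", "~> 1.19.0"      # pessimistic pin, as in the gemspec
gem "em-synchrony", "~> 1.0.0"
gem "em-redis"                 # any version (>= 0)
```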