resque-fifo-queue 0.1.1

@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+ metadata.gz: b962de3ad6dec018f3c39ae6dd0a1bce29552c6a
+ data.tar.gz: 7bb016d06b063bd45459668ede1de61df13b0d6d
+ SHA512:
+ metadata.gz: 3026f465971b56dd870ca6f177d0783ec0070d9231b0e4b17657664cadfe6bf22d753bbeeb7b64392472fddfa75cd8d828ed5b9bf675bfe8f0a1637f708ea764
+ data.tar.gz: c584f50175d9ae9e9ef934427e0b10688f40867ab40a913f0616ec909974e21baf2275e9fe956fff3ce951d893636ae5cbc17641056350b0b265416e9bf8f221
@@ -0,0 +1,14 @@
+ /.bundle/
+ /.yardoc
+ /Gemfile.lock
+ /_yardoc/
+ /coverage/
+ /doc/
+ /pkg/
+ /spec/reports/
+ /tmp/
+
+ *.gem
+
+ # rspec failure tracking
+ .rspec_status
data/.rspec ADDED
@@ -0,0 +1,2 @@
+ --format documentation
+ --color
@@ -0,0 +1,5 @@
+ sudo: false
+ language: ruby
+ rvm:
+ - 2.4.0
+ before_install: gem install bundler -v 1.14.6
@@ -0,0 +1,74 @@
+ # Contributor Covenant Code of Conduct
+
+ ## Our Pledge
+
+ In the interest of fostering an open and welcoming environment, we as
+ contributors and maintainers pledge to making participation in our project and
+ our community a harassment-free experience for everyone, regardless of age, body
+ size, disability, ethnicity, gender identity and expression, level of experience,
+ nationality, personal appearance, race, religion, or sexual identity and
+ orientation.
+
+ ## Our Standards
+
+ Examples of behavior that contributes to creating a positive environment
+ include:
+
+ * Using welcoming and inclusive language
+ * Being respectful of differing viewpoints and experiences
+ * Gracefully accepting constructive criticism
+ * Focusing on what is best for the community
+ * Showing empathy towards other community members
+
+ Examples of unacceptable behavior by participants include:
+
+ * The use of sexualized language or imagery and unwelcome sexual attention or
+   advances
+ * Trolling, insulting/derogatory comments, and personal or political attacks
+ * Public or private harassment
+ * Publishing others' private information, such as a physical or electronic
+   address, without explicit permission
+ * Other conduct which could reasonably be considered inappropriate in a
+   professional setting
+
+ ## Our Responsibilities
+
+ Project maintainers are responsible for clarifying the standards of acceptable
+ behavior and are expected to take appropriate and fair corrective action in
+ response to any instances of unacceptable behavior.
+
+ Project maintainers have the right and responsibility to remove, edit, or
+ reject comments, commits, code, wiki edits, issues, and other contributions
+ that are not aligned to this Code of Conduct, or to ban temporarily or
+ permanently any contributor for other behaviors that they deem inappropriate,
+ threatening, offensive, or harmful.
+
+ ## Scope
+
+ This Code of Conduct applies both within project spaces and in public spaces
+ when an individual is representing the project or its community. Examples of
+ representing a project or community include using an official project e-mail
+ address, posting via an official social media account, or acting as an appointed
+ representative at an online or offline event. Representation of a project may be
+ further defined and clarified by project maintainers.
+
+ ## Enforcement
+
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be
+ reported by contacting the project team at joseph.dayo@gmail.com. All
+ complaints will be reviewed and investigated and will result in a response that
+ is deemed necessary and appropriate to the circumstances. The project team is
+ obligated to maintain confidentiality with regard to the reporter of an incident.
+ Further details of specific enforcement policies may be posted separately.
+
+ Project maintainers who do not follow or enforce the Code of Conduct in good
+ faith may face temporary or permanent repercussions as determined by other
+ members of the project's leadership.
+
+ ## Attribution
+
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
+ available at [http://contributor-covenant.org/version/1/4][version]
+
+ [homepage]: http://contributor-covenant.org
+ [version]: http://contributor-covenant.org/version/1/4/
data/Gemfile ADDED
@@ -0,0 +1,4 @@
+ source 'https://rubygems.org'
+
+ # Specify your gem's dependencies in resque-fifo-queue.gemspec
+ gemspec
@@ -0,0 +1,21 @@
+ The MIT License (MIT)
+
+ Copyright (c) 2017 Joseph Emmanuel Dayo
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
@@ -0,0 +1,105 @@
+ # Resque::Fifo::Queue
+
+ Implementation of a sharded first-in, first-out queue using Redis and Resque.
+
+ This gem enables you to guarantee in-order job processing based on a shard key. Useful for business requirements that are race-condition prone or need jobs processed in a streaming manner (jobs that require preservation of chronological order).
+
+ Sharding is automatically managed depending on the number of workers available. Durability is guaranteed with failover resharding using a consistent hash. Built on the reliability of resque, and written in pure ruby, which simplifies deployment if you are already using resque with ruby on rails.
+
+ ## Installation
+
+ Add this line to your application's Gemfile:
+
+ ```ruby
+ gem 'resque-fifo-queue'
+ ```
+
+ And then execute:
+
+     $ bundle
+
+ Or install it yourself as:
+
+     $ gem install resque-fifo-queue
+
+ ## Usage
+
+ This gem adds new rake tasks to run fifo queues:
+
+ ```bash
+ rake resque:fifo-worker
+ rake resque:fifo-workers
+ ```
+
+ Available options are similar to `rake resque:work`.
+
+ Workers will assign their own queue names, which are automatically managed by the sharding algorithm.
+
+ Supports the same parameters as resque:work but creates additional queues behind the scenes
+ to support fifo queues.
+
+ Fifo workers can also do double duty as standard resque workers in order to simplify resource sharing:
+
+ ```bash
+ QUEUE=high rake resque:fifo-worker
+ ```
+
+ Aside from being a worker that processes jobs from the fifo queue, it will process jobs from the high queue as well.
+
+ Sample Usage
+ ------------
+
+ To start a job using a fifo strategy:
+
+ ```ruby
+ class SampleJob
+   def self.perform(*args)
+     # run your resque job here
+   end
+ end
+
+ shard_key = "user_00001"
+
+ # These async jobs will be guaranteed to run one after another in a single worker
+
+ Resque::Plugins::Fifo::Queue::Manager.enqueue_to(shard_key, SampleJob, "hello")
+ Resque::Plugins::Fifo::Queue::Manager.enqueue_to(shard_key, SampleJob, "hello1")
+ ```
+
+ ## Resque web extensions
+
+ This gem adds a FIFO Queue tab under resque web where you can see information about
+ the workers and queues used. This can be accessed via:
+
+ http://localhost:3000/resque/fifo_queue
+
+ ## Development
+
+ After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
+
+ To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).
+
+ ## Ensuring workers are updated
+
+ Since only one worker is assigned to a particular shard, it is important to make sure a worker is running. The
+ fifo shard table is updated every time a worker is started or stopped properly (using `kill -s QUIT`).
+
+ For some cases, though, a scheduled task is necessary to make sure the worker list is constantly updated:
+
+ ```yaml
+ # resque_schedule.yml
+ auto_refresh_fifo_queues:
+   cron: '*/5 * * * * UTC'
+   class: Resque::Plugins::Fifo::Queue::DrainWorker
+   queue: fifo_refresh
+   description: 'Check if fifo workers are still valid and update worker table'
+ ```
+
+ ## Contributing
+
+ Bug reports and pull requests are welcome on GitHub at https://github.com/[USERNAME]/resque-fifo-queue. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.
+
+ ## License
+
+ The gem is available as open source under the terms of the [MIT License](http://opensource.org/licenses/MIT).
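The consistent-hash routing the README describes (implemented in `Manager#compute_queue_name` in `manager.rb` below) can be sketched stand-alone. This is a minimal illustration, not the gem's code: `Zlib.crc32` stands in for the gem's `XXhash.xxh32`, and a hard-coded slot array stands in for the Redis-backed hash table, but the lookup rule is the same — each slot is `"<slice>#<queue>"`, and a shard key routes to the first slot (scanning from the highest slice down) whose slice its 32-bit hash exceeds, wrapping to the last slot otherwise.

```ruby
require 'zlib'

# Each slot pairs a 32-bit slice boundary with a worker queue name,
# stored as "<slice>#<queue>" just like the gem's Redis list.
slots = [
  "1000000000#fifo-queue-a",
  "2500000000#fifo-queue-b",
  "4000000000#fifo-queue-c",
]

# Route a shard key to a queue: hash it to a 32-bit integer, walk the
# slots from the highest slice down, and pick the first boundary the
# hash exceeds; wrap around to the last slot otherwise.
def queue_for(key, slots)
  index = Zlib.crc32(key) # stand-in for XXhash.xxh32
  slots.reverse_each do |slot|
    slice, queue = slot.split('#')
    return queue if index > slice.to_i
  end
  slots.last.split('#')[1] # wrap-around case
end

puts queue_for("user_00001", slots)
```

Because the hash is deterministic, every job enqueued with the same shard key lands on the same queue (and so the same worker) until the slot table itself is rehashed.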
@@ -0,0 +1,7 @@
+ require "bundler/gem_tasks"
+ require "rspec/core/rake_task"
+ require 'resque/tasks'
+
+ RSpec::Core::RakeTask.new(:spec)
+
+ task :default => :spec
@@ -0,0 +1,14 @@
+ #!/usr/bin/env ruby
+
+ require "bundler/setup"
+ require "resque/fifo/queue"
+
+ # You can add fixtures and/or initialization code here to make experimenting
+ # with your gem easier. You can also use a different console, if you like.
+
+ # (If you use this, don't forget to add pry to your Gemfile!)
+ # require "pry"
+ # Pry.start
+
+ require "irb"
+ IRB.start(__FILE__)
@@ -0,0 +1,8 @@
+ #!/usr/bin/env bash
+ set -euo pipefail
+ IFS=$'\n\t'
+ set -vx
+
+ bundle install
+
+ # Do any other automated setup that you need to do here
data/init.rb ADDED
@@ -0,0 +1 @@
+ require 'resque/fifo/queue'
@@ -0,0 +1,11 @@
+
+ module Resque
+   module Plugins
+     module Fifo
+       WORKER_QUEUE_NAMESPACE = "fifo-managed-queue"
+
+       module Queue
+       end
+     end
+   end
+ end
@@ -0,0 +1,12 @@
+ require 'resque'
+ require 'resque/fifo/constants'
+ require "redis"
+ require "redlock"
+ require 'xxhash'
+ require 'resque_solo'
+ require "resque/plugins/fifo/queue/version"
+ require "resque/plugins/fifo/server"
+ require "resque/plugins/fifo/extensions"
+ require "resque/plugins/fifo/queue/drain_worker"
+ require "resque/plugins/fifo/queue/manager"
+ require 'resque/plugins/fifo/worker'
@@ -0,0 +1,25 @@
+ require 'resque/fifo/queue'
+
+ task "resque:fifo-worker" => :environment do
+   prefix = ENV['PREFIX'] || 'fifo'
+   worker = Resque::Plugins::Fifo::Worker.new
+   worker.prepare
+   worker.log "Starting worker #{self}"
+   worker.work(ENV['INTERVAL'] || 5) # interval, will block
+ end
+
+ task "resque:fifo-workers" => :environment do
+   threads = []
+
+   if ENV['COUNT'].to_i < 1
+     abort "set COUNT env var, e.g. $ COUNT=2 rake resque:workers"
+   end
+
+   ENV['COUNT'].to_i.times do
+     threads << Thread.new do
+       system "rake resque:fifo-worker"
+     end
+   end
+
+   threads.each { |thread| thread.join }
+ end
@@ -0,0 +1,7 @@
+ Resque.send(:define_singleton_method, :queues) do
+   data_store.queue_names.reject { |name| name.start_with?(Resque::Plugins::Fifo::WORKER_QUEUE_NAMESPACE) }
+ end
+
+ Resque.send(:define_singleton_method, :all_queues) do
+   data_store.queue_names
+ end
@@ -0,0 +1,15 @@
+ module Resque
+   module Plugins
+     module Fifo
+       module Queue
+         class DrainWorker
+           include Resque::Plugins::UniqueJob
+
+           def self.perform
+             Resque::Plugins::Fifo::Queue::Manager.new.update_workers
+           end
+         end
+       end
+     end
+   end
+ end
@@ -0,0 +1,429 @@
+ require 'set'
+
+ module Resque
+   module Plugins
+     module Fifo
+       module Queue
+         class Manager
+           DLM_TTL = 30000
+           attr_accessor :queue_prefix
+
+           def initialize(queue_prefix = 'fifo')
+             @queue_prefix = queue_prefix
+           end
+
+           def fifo_hash_table_name
+             "fifo-queue-lookup-#{@queue_prefix}"
+           end
+
+           def queue_prefix
+             "#{Resque::Plugins::Fifo::WORKER_QUEUE_NAMESPACE}-#{@queue_prefix}"
+           end
+
+           def pending_queue_name
+             "#{queue_prefix}-pending"
+           end
+
+           def compute_queue_name(key)
+             index = compute_index(key)
+             slots = redis_client.lrange fifo_hash_table_name, 0, -1
+
+             return pending_queue_name if slots.empty?
+
+             slots.reverse.each do |slot|
+               slice, queue = slot.split('#')
+               if index > slice.to_i
+                 return queue
+               end
+             end
+
+             _slice, queue_name = slots.last.split('#')
+
+             queue_name
+           end
+
+           def enqueue(key, klass, *args)
+             queue = compute_queue_name(key)
+
+             redis_client.incr "queue-stats-#{queue}"
+             Resque.validate(klass, queue)
+             if Resque.inline? && inline?
+               # Instantiating a Resque::Job and calling perform on it so callbacks run
+               # decode(encode(args)) to ensure that args are normalized in the same manner as a non-inline job
+               Resque::Job.new(:inline, {'class' => klass, 'args' => Resque.decode(Resque.encode(args)), 'fifo_key' => key, 'enqueue_ts' => 0}).perform
+             else
+               Resque.push(queue, :class => klass.to_s, :args => args, fifo_key: key, :enqueue_ts => Time.now.to_i)
+             end
+           end
+
+           # method for stubbing in tests
+           def inline?
+             Resque.inline?
+           end
+
+           def clear_stats
+             redis_client.del "fifo-stats-max-delay"
+             redis_client.del "fifo-stats-accumulated-delay"
+             redis_client.del "fifo-stats-accumulated-count"
+             redis_client.del "fifo-stats-dht-rehash"
+             redis_client.del "fifo-stats-accumulated-recalc-time"
+             redis_client.del "fifo-stats-accumulated-recalc-count"
+
+             slots = redis_client.lrange fifo_hash_table_name, 0, -1
+             slots.each_with_index.collect do |slot, index|
+               slice, queue = slot.split('#')
+               redis_client.del "queue-stats-#{queue}"
+             end
+           end
+
+           def get_stats_max_delay
+             redis_client.get("fifo-stats-max-delay") || 0
+           end
+
+           def get_stats_avg_dht_recalc
+             accumulated_delay = redis_client.get("fifo-stats-accumulated-recalc-time") || 0
+             total_items = redis_client.get("fifo-stats-accumulated-recalc-count") || 0
+             return 0 if total_items == 0
+
+             return accumulated_delay.to_f / total_items.to_f
+           end
+
+           def get_stats_avg_delay
+             accumulated_delay = redis_client.get("fifo-stats-accumulated-delay") || 0
+             total_items = redis_client.get("fifo-stats-accumulated-count") || 0
+             return 0 if total_items == 0
+
+             return accumulated_delay.to_f / total_items.to_f
+           end
+
+           def dht_times_rehashed
+             redis_client.get("fifo-stats-dht-rehash") || 0
+           end
+
+           def all_stats
+             {
+               dht_times_rehashed: dht_times_rehashed,
+               avg_delay: get_stats_avg_delay,
+               avg_dht_recalc: get_stats_avg_dht_recalc,
+               max_delay: get_stats_max_delay
+             }
+           end
+
+           def self.enqueue_to(key, klass, *args)
+             enqueue_topic('fifo', key, klass, *args)
+           end
+
+           def self.enqueue_topic(topic, key, klass, *args)
+             # Perform before_enqueue hooks. Don't perform enqueue if any hook returns false
+             before_hooks = Plugin.before_enqueue_hooks(klass).collect do |hook|
+               klass.send(hook, *args)
+             end
+
+             return nil if before_hooks.any? { |result| result == false }
+
+             manager = Resque::Plugins::Fifo::Queue::Manager.new(topic)
+             manager.enqueue(key, klass, *args)
+
+             Plugin.after_enqueue_hooks(klass).each do |hook|
+               klass.send(hook, *args)
+             end
+
+             return true
+           end
+
+           def dump_dht
+             slots = redis_client.lrange fifo_hash_table_name, 0, -1
+             slots.each_with_index.collect do |slot, index|
+               slice, queue = slot.split('#')
+               [slice.to_i, queue]
+             end
+           end
+
+           def pretty_dump
+             slots = redis_client.lrange fifo_hash_table_name, 0, -1
+             slots.each_with_index.collect do |slot, index|
+               slice, queue = slot.split('#')
+               puts "Slice ##{slice} -> #{queue}"
+             end
+           end
+
+           def peek_pending
+             Resque.peek(pending_queue_name, 0, 0)
+           end
+
+           def pending_total
+             redis_client.llen "queue:#{pending_queue_name}"
+           end
+
+           def dump_queue_names
+             dump_dht.collect { |item| item[1] }
+           end
+
+           def worker_for_queue(queue_name)
+             Resque.workers.collect do |worker|
+               w_queue_name = worker.queues.select { |name| name.start_with?("#{queue_prefix}-") }.first
+               return worker if w_queue_name == queue_name
+             end.compact
+             nil
+           end
+
+           def dump_queues
+             query_available_queues.collect do |queue|
+               [queue, Resque.peek(queue, 0, 0)]
+             end.to_h
+           end
+
+           def pretty_dump_queues
+             slots = redis_client.lrange fifo_hash_table_name, 0, -1
+             slots.each_with_index.collect do |slot, index|
+               slice, queue = slot.split('#')
+               puts "#Slice #{slice}"
+
+               puts "#{Resque.peek(queue, 0, 0).to_s.gsub('},', "},\n")},"
+               puts "\n"
+             end
+           end
+
+           def dump_queues_with_slices
+             slots = redis_client.lrange fifo_hash_table_name, 0, -1
+             slots.collect do |slot, index|
+               slice, queue = slot.split('#')
+               worker = worker_for_queue(queue)
+
+               hostname = '?'
+               status = '?'
+               pid = '?'
+               started = '?'
+               heartbeat = '?'
+
+               if worker
+                 hostname = worker.hostname
+                 status = worker.paused? ? 'paused' : worker.state.to_s
+                 pid = worker.pid
+                 started = worker.started
+                 heartbeat = worker.heartbeat
+               end
+
+               [slice, queue, hostname, pid, status, started, heartbeat, get_processed_count(queue), Resque.peek(queue, 0, 0).size]
+             end
+           end
+
+           def get_processed_count(queue)
+             redis_client.get("queue-stats-#{queue}") || 0
+           end
+
+           def dump_queues_sorted
+             queues = dump_queues
+             dht = dump_dht.collect do |item|
+               _slice, queue_name = item
+               queues[queue_name]
+             end
+           end
+
+           def update_workers
+             # query removed workers
+             start_time = Time.now.to_i
+             redlock.lock("fifo_queue_lock-#{queue_prefix}", DLM_TTL) do |locked|
+               if locked
+                 start_timestamp = redis_client.get "fifo_update_timestamp-#{queue_prefix}"
+
+                 process_dht
+                 cleanup_queues
+                 log("reinserting items from pending")
+                 reinsert_pending_items(pending_queue_name)
+
+                 # check if something tried to request an update; if so we re-queue again
+                 current_timestamp = redis_client.get "fifo_update_timestamp-#{queue_prefix}"
+
+                 if start_timestamp != current_timestamp
+                   request_refresh
+                 end
+               else
+                 log("unable to lock DHT.")
+               end
+             end
+
+             end_time = Time.now.to_i
+
+             redis_client.set("fifo-stats-accumulated-recalc-time", end_time - start_time)
+             redis_client.incr "fifo-stats-accumulated-recalc-count"
+           end
+
+           def request_refresh
+             if Resque.inline?
+               # Instantiating a Resque::Job and calling perform on it so callbacks run
+               # decode(encode(args)) to ensure that args are normalized in the same manner as a non-inline job
+               Resque::Job.new(:inline, {'class' => Resque::Plugins::Fifo::Queue::DrainWorker, 'args' => []}).perform
+             else
+               redis_client.set "fifo_update_timestamp-#{queue_prefix}", Time.now.to_s
+               Resque.push(:fifo_refresh, :class => Resque::Plugins::Fifo::Queue::DrainWorker.to_s, :args => [])
+             end
+           end
+
+           def orphaned_queues
+             current_queues = dump_queue_names
+             Resque.all_queues.reject do |queue|
+               !queue.start_with?(queue_prefix) || current_queues.include?(queue)
+             end
+           end
+
+           private
+
+           def cleanup_queues
+             current_queues = dump_queue_names
+             Resque.all_queues.each do |queue|
+               if queue.start_with?(queue_prefix)
+                 next if current_queues.include?(queue)
+
+                 if redis_client.llen("queue:#{queue}") > 0
+                   log("transfer non empty orphaned queue items to pending")
+                   transfer_queues(queue, pending_queue_name)
+                 end
+
+                 log("remove orphaned queue #{queue}.")
+                 Resque.remove_queue(queue)
+               end
+             end
+           end
+
+           def process_dht
+             slots = redis_client.lrange fifo_hash_table_name, 0, -1
+
+             current_queues = slots.map { |slot| slot.split('#')[1] }.uniq
+
+             available_queues = query_available_queues
+             # no change, don't update
+             return if available_queues.sort == current_queues.sort
+
+             redis_client.incr "fifo-stats-dht-rehash"
+
+             remove_list = slots.select do |slot|
+               _slice, queue = slot.split('#')
+               !available_queues.include?(queue)
+             end
+
+             remove_list.each do |slot|
+               _slice, queue = slot.split('#')
+               log "queue #{queue} removed."
+               redis_client.lrem fifo_hash_table_name, -1, slot
+               transfer_queues(queue, pending_queue_name)
+               redis_client.del "queue-stats-#{queue}"
+             end
+
+             added_queues = available_queues.each do |queue|
+               if !current_queues.include?(queue)
+                 insert_slot(queue)
+                 log "queue #{queue} was added."
+               end
+             end
+           end
+
+           def log(message)
+             puts message
+           end
+
+           def insert_slot(queue)
+             new_slice = generate_new_slice # generate random 32-bit integer
+             insert_queue_to_slice new_slice, queue
+           end
+
+           def generate_new_slice
+             XXhash.xxh32(rand(0..2**32).to_s)
+           end
+
+           def insert_queue_to_slice(slice, queue)
+             queue_str = "#{slice}##{queue}"
+             log "insert #{queue} -> #{slice}"
+             slots = redis_client.lrange(fifo_hash_table_name, 0, -1)
+
+             if slots.empty?
+               redis_client.rpush(fifo_hash_table_name, queue_str)
+               return
+             end
+
+             _b_slice, prev_queue = slots.last.split('#')
+             slots.each do |slot|
+               slot_slice, s_queue = slot.split('#')
+               if slice < slot_slice.to_i
+                 redlock.lock!("queue_lock-#{prev_queue}", DLM_TTL) do |_lock_info|
+                   pause_queues([prev_queue]) do
+                     redis_client.linsert(fifo_hash_table_name, 'BEFORE', slot, queue_str)
+                     transfer_queues(prev_queue, pending_queue_name)
+                   end
+                 end
+                 return
+               end
+
+               prev_queue = s_queue
+             end
+
+             _slot_slice, s_queue = slots.last.split('#')
+             pause_queues([s_queue]) do
+               transfer_queues(s_queue, pending_queue_name)
+               redis_client.rpush(fifo_hash_table_name, queue_str)
+             end
+           end
+
+           def reinsert_pending_items(from_queue)
+             redis_client.llen("queue:#{from_queue}").times do
+               slot = redis_client.lpop "queue:#{from_queue}"
+               queue_json = JSON.parse(slot)
+               target_queue = compute_queue_name(queue_json['fifo_key'])
+               log "#{queue_json['fifo_key']}: #{from_queue} -> #{target_queue}"
+               redis_client.rpush("queue:#{target_queue}", slot)
+             end
+           end
+
+           def pause_queues(queue_names = [], &block)
+             begin
+               queue_names.each do |queue_name|
+                 worker = worker_for_queue(queue_name)
+                 worker.pause_processing if worker
+               end
+
+               block.()
+             ensure
+               queue_names.each do |queue_name|
+                 worker = worker_for_queue(queue_name)
+                 worker.unpause_processing if worker
+               end
+             end
+           end
+
+           def transfer_queues(from_queue, to_queue)
+             log "transfer: #{from_queue} -> #{to_queue}"
+             redis_client.llen("queue:#{from_queue}").times do
+               redis_client.rpoplpush("queue:#{from_queue}", "queue:#{to_queue}")
+             end
+           end
+
+           def redis_client
+             Resque.redis
+           end
+
+           def redlock
+             Redlock::Client.new [redis_client.redis], {
+               retry_count: 30,
+               retry_delay: 1000, # milliseconds
+               retry_jitter: 100, # milliseconds
+               redis_timeout: 1 # seconds
+             }
+           end
+
+           def compute_index(key)
+             XXhash.xxh32(key)
+           end
+
+           def query_available_queues
+             expired_workers = Resque::Worker.all_workers_with_expired_heartbeats
+
+             Resque.workers.reject { |w| expired_workers.include?(w) }.collect do |worker|
+               worker.queues.select { |name| name.start_with?("#{queue_prefix}-") }.first
+             end.compact
+           end
+         end
+       end
+     end
+   end
+ end