postqueue 0.2.1 → 0.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
-   metadata.gz: 7b6f8a8ad0f1aef1b565a223153f6280a9361314
-   data.tar.gz: 4dbf3727e217a8c248ce37fc78715aaf9080f0ea
+   metadata.gz: 5fd15294351f2f6e2be7d4c15d7ac509e097f3c5
+   data.tar.gz: cbe1f9ca4e5bf7bbfa6027794e2c0ff26ad14c03
  SHA512:
-   metadata.gz: 569162000045134eb3107231ed94c6b52e15f5aed0e47ec2d681502037d142fe0cbd7ac9e85b2361e5fbeced0a0bd01fce6bee8cea4ef258cf0b2b19c46afb4e
-   data.tar.gz: 6c50d7e414a6b411e4066f6d2351d4d68310f8df847299911db0c7fbba2b25f82550a2db9f3ed8b0612c7777e4a415da2fbdcba9e96bcf92badb0bc602c24e60
+   metadata.gz: 5192d32ebb44a46f60df9303edb8cf2065edb42194e77e28576d2deba16dfbcc703c2c4f384f895a190cec68a327426936d1bfe48c157e8763e275a641945820
+   data.tar.gz: c27d77120587844546d644d5bdfa38897a771080fd89654242b39c00ca95125c8916b5dbb751ff47ed8def82ba50cded1daf2090dc6424854c2bac5343f366f3
data/README.md CHANGED
@@ -1,18 +1,37 @@
  # Postqueue

- The postqueue gem implements a simple to use queue on top of postgresql. Since this implementation is using the SKIP LOCKED
- syntax, it needs PostgresQL >= 9.5.
+ ## Intro

- Why not using another queue implementation? postqueue comes with some extras:
+ The postqueue gem implements a simple-to-use queue on top of PostgreSQL. Note that while
+ a queue like this is typically used in a job queueing scenario, this document does not
+ talk about jobs, it talks about **queue items**; it does not schedule a job,
+ it **enqueues** an item, and it does not execute a job, it **processes** queue items.

- - support for optimal handling of idempotent operations
- - batch processing
- - searchable via SQL
+ Why build an additional queue implementation? Compared to delayed_job or the other
+ usual suspects, postqueue implements these features:
+
+ - The item structure is intentionally kept super simple: an item is described by an
+   `op` field - a string - and an `id` field, an integer. In a typical use case a
+   queue item would describe an operation on a specific entity, where `op` names
+   both the operation and the entity type and the `id` field would describe the
+   individual entity.
+
+ - With such a simplistic item structure the queue itself can be searched or
+   otherwise evaluated using SQL. This also allows for **skipping duplicate entries**
+   when enqueuing items (managed via a duplicate: argument when enqueuing) and for
+   **batch processing** multiple items in one go.
+
+ - With data being kept in a PostgreSQL database, processing provides **transactional semantics**:
+   an item failing to process stays in the queue. Error handling is kept simple, with a
+   strategy of rescheduling items up to a specific maximum number of processing attempts.
+
+ Please be aware that postqueue uses the SELECT .. FOR UPDATE SKIP LOCKED PostgreSQL syntax,
+ and therefore needs PostgreSQL >= 9.5.

  ## Basic usage

  ```ruby
- queue = PostgresQL::Base.new
+ queue = Postqueue.new
  queue.enqueue op: "product/reindex", entity_id: [12,13,14,15]
  queue.process do |op, entity_ids|
    # note: entity_ids is always an Array of ids.
@@ -27,12 +46,11 @@ end

  The process call will select a number of queue items for processing. They will all have
  the same `op` attribute. The callback will receive the `op` attribute and the `entity_ids`
- of all queue entries selected for processing. The `processing` method will return the
- return value of the block.
+ of all queue entries selected for processing. The `process` method will return the number
+ of processed items.

- If no callback is given the return value will be the `[op, entity_ids]` values
- that would have been sent to the block. This is highly unrecommended though, since
- when using a block to do processing errors and exceptions can properly be dealt with.
+ If no callback is given, the matching items are only removed from the queue without
+ any processing.

  Postqueue.process also accepts the following arguments:

@@ -45,56 +63,90 @@ Example:
    # only handle up to 10 "product/reindex" entries
  end

- If the block fails, by either returning `false` or by raising an exception the queue will
- postpone processing these entries by an increasing amount of time, up until
- `Postqueue::MAX_ATTEMPTS` failed attempts. The current MAX_ATTEMPTS definition
- leads to a maximum postpone interval (currently up to 190 seconds).
+ If the block raises an exception the queue will postpone processing these entries
+ by an increasing amount of time, up until `queue.max_attempts` failed attempts.
+ That value defaults to 5.

  If the queue is empty or no matching queue entry could be found, `Postqueue.process`
- returns nil.
+ returns 0.
+
+ ## Advanced usage
+
+ ### Concurrency
+
+ Postqueue implements the following concurrency guarantees:
+
+ - catastrophic DB failure and communication breakdown aside, a queue item which is enqueued will eventually be processed successfully exactly once;
+ - multiple consumers can work in parallel.
+
+ Note that you should not share a Postqueue Ruby object across threads - instead you should
+ create one queue object per thread, with the identical configuration.
+
+ ### Idempotent operations
+
+ When enqueueing items, duplicate idempotent operations are not enqueued. Whether or not an operation
+ should be considered idempotent is defined when configuring the queue:
+
+     Postqueue.new do |queue|
+       queue.idempotent_operation "idempotent"
+     end

- ### process a single entry
+ ### Processing a single entry

  Postqueue implements a shortcut to process only a single entry. Under the hood this
  calls `Postqueue.process` with `batch_size` set to `1`:

-     Postqueue.process_one do |op, entity_ids|
-     end
+     queue.process_one

  Note that even though `process_one` will only ever process a single entry the
- `entity_ids` parameter to the block is still an array (holding a single ID
+ `entity_ids` parameter to the callback is still an array (with a single ID entry
  in that case).

- ## idempotent operations
+ ### Migrating
+
+ Postqueue comes with migration helpers:
+
+     # set up a table for use with postqueue.
+     Postqueue.migrate!(table_name = "postqueue")

- Postqueue comes with simple support for idempotent operations: if an operation is deemed
- idempotent it is not enqueued again if it can be found in the queue already. Note that
- a queue item will be created if another item is currently being processed.
+     # remove a postqueue table.
+     Postqueue.unmigrate!(table_name = "postqueue")

-     class Testqueue < Postqueue::Base
-       def idempotent?(entity_type:,op:)
-         op == "reindex"
-       end
+ You can also set up your own table, as long as it is compatible.
+
+ To use a non-default table or a non-default database, change the `item_class`
+ attribute of the queue:
+
+     Postqueue.new do |queue|
+       queue.item_class = MyItemClass
      end

- ## batch processing
+ `MyItemClass` should inherit from Postqueue::Item and use the same or a compatible database
+ structure.
+
+ ## Batch processing

- Often queue items can be processed in batches for a better performance of the entire system.
- To allow batch processing for some items subclass `Postqueue::Base` and reimplement the
- `batch_size?` method to return a suggested batch size for a specific operation.
- The following implements a batch_size of 100 for all queue entries:
+ Often queue items can be batched together for better performance. To allow batch
+ processing for some items, configure the queue with either a `default_batch_size`
+ or an operation-specific batch size:

-     class Batchqueue < Postqueue::Base
-       def batch_size(op:)
-         100
-       end
+     Postqueue.new do |queue|
+       queue.default_batch_size = 100
+       queue.batch_sizes["batchable"] = 10
      end

- ## Searchable via SQL
+ ## Test mode

- In contrast to other queue implementations available for Rubyists this queue formats
- entries in a way that makes it possible to query the queue via SQL. On the other
- hand this queue also does not allow to enqueue arbitrary entries as these others do.
+ During unit tests it is likely preferable to process queue items synchronously (i.e. as they come in).
+ You can enable this mode via:
+
+     Postqueue.async_processing = false
+
+ You can also enable this on a queue-by-queue basis via:
+
+     Postqueue.new do |queue|
+       queue.async_processing = false
+     end

  ## Installation

@@ -112,19 +164,21 @@ Or install it yourself as:

      $ gem install postqueue

- ## Usage
-
  ## Development

- After checking out the repo, run `bin/setup` to install dependencies. Make sure you have a local postgresql implementation of
- at least version 9.5. Add a `postqueue` user with a `postqueue` password, and create a `postqueue_test` database for it.
- The script `./scripts/prepare_pg` can be helpful in establishing that.
+ After checking out the repo, run `bin/setup` to install dependencies. Make sure you have
+ a local PostgreSQL installation of at least version 9.5. Add a `postqueue` user with
+ a `postqueue` password, and create a `postqueue_test` database for it. The script
+ `./scripts/prepare_pg` can be somewhat helpful in establishing that.

- Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
+ Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive
+ prompt that will allow you to experiment.

  To install this gem onto your local machine, run `bundle exec rake install`.

- To release a new version, run `./scripts/release`, which will bump the version number, create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).
+ To release a new version, run `./scripts/release`, which will bump the version number,
+ create a git tag for the version, push git commits and tags, and push the `.gem` file
+ to [rubygems.org](https://rubygems.org).

  ## Contributing

@@ -0,0 +1,33 @@
+ module Postqueue
+   class Item
+     # Enqueues a queue item. If the operation is a duplicate, and an entry with
+     # the same combination of op and entity_id exists already, no new entry will
+     # be added to the queue.
+     #
+     # Returns the number of items that have been enqueued.
+     def self.enqueue(op:, entity_id:, ignore_duplicates: false)
+       if entity_id.is_a?(Enumerable)
+         return enqueue_many(op: op, entity_ids: entity_id, ignore_duplicates: ignore_duplicates)
+       end
+
+       if ignore_duplicates && where(op: op, entity_id: entity_id).present?
+         return 0
+       end
+
+       insert_item op: op, entity_id: entity_id
+       return 1
+     end
+
+     def self.enqueue_many(op:, entity_ids:, ignore_duplicates:) #:nodoc:
+       entity_ids.uniq! if ignore_duplicates
+
+       transaction do
+         entity_ids.each do |entity_id|
+           enqueue(op: op, entity_id: entity_id, ignore_duplicates: ignore_duplicates)
+         end
+       end
+
+       entity_ids.count
+     end
+   end
+ end
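A short usage sketch for the class-level `enqueue` above; normally you would go through `Postqueue::Queue#enqueue`, which derives `ignore_duplicates` from the queue configuration. The op name is illustrative, and the postqueue table is assumed to exist (see `Postqueue.migrate!`).

```ruby
Postqueue::Item.enqueue op: "product/reindex", entity_id: 12
# => 1 (one row inserted)

Postqueue::Item.enqueue op: "product/reindex", entity_id: 12, ignore_duplicates: true
# => 0 if a "product/reindex" item for id 12 is still sitting in the queue

# An Enumerable entity_id is expanded into individual enqueue calls inside a single
# transaction; with ignore_duplicates the id list is de-duplicated first.
Postqueue::Item.enqueue op: "product/reindex", entity_id: [13, 13, 14], ignore_duplicates: true
```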
@@ -0,0 +1,29 @@
+ require "active_record"
+
+ module Postqueue
+   #
+   # An item class.
+   class Item < ActiveRecord::Base
+     module ActiveRecordInserter
+       def insert_item(op:, entity_id:)
+         create!(op: op, entity_id: entity_id)
+       end
+     end
+
+     module RawInserter
+       def prepared_inserter_statement
+         @prepared_inserter_statement ||= begin
+           name = "postqueue-insert-{table_name}-#{Thread.current.object_id}"
+           connection.raw_connection.prepare(name, "INSERT INTO #{table_name}(op, entity_id) VALUES($1, $2)")
+           name
+         end
+       end
+
+       def insert_item(op:, entity_id:)
+         connection.raw_connection.exec_prepared(prepared_inserter_statement, [op, entity_id])
+       end
+     end
+
+     extend RawInserter
+   end
+ end
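`extend RawInserter` above wires the prepared-statement inserter in as the default. As a sketch only (not a switch documented by the gem), re-extending with `ActiveRecordInserter` would put the plain `create!`-based variant first in Ruby's method lookup:

```ruby
# Sketch: opt into ActiveRecord-based inserts (runs AR validations/callbacks,
# no prepared statement). Modules extended later take precedence in the lookup.
Postqueue::Item.extend Postqueue::Item::ActiveRecordInserter
```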
@@ -1,8 +1,19 @@
  require "active_record"

  module Postqueue
+   #
+   # An item class.
    class Item < ActiveRecord::Base
      self.table_name = :postqueue
+
+     def self.postpone(ids)
+       connection.exec_query <<-SQL
+         UPDATE #{table_name}
+         SET failed_attempts = failed_attempts+1,
+             next_run_at = next_run_at + power(failed_attempts + 1, 1.5) * interval '10 second'
+         WHERE id IN (#{ids.join(',')})
+       SQL
+     end
    end

    def self.unmigrate!(table_name = "postqueue")
@@ -33,3 +44,6 @@ module Postqueue
      SQL
    end
  end
+
+ require_relative "item/inserter"
+ require_relative "item/enqueue"
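`Item.postpone` above implements the retry backoff mentioned in the README: each failure pushes `next_run_at` further out by `power(failed_attempts + 1, 1.5) * 10 seconds`. A small sketch of the resulting schedule (my arithmetic, not code from the gem):

```ruby
# Delay added after a failure; failed_attempts is the value *before* the UPDATE
# increments it, so the first failure postpones the item by 10 seconds.
def postpone_delay(failed_attempts)
  ((failed_attempts + 1)**1.5 * 10).round(1) # seconds
end

(0..4).map { |n| postpone_delay(n) }
# => [10.0, 28.3, 52.0, 80.0, 111.8]
# With the default max_attemps of 5 (see the queue configuration further down),
# an item that keeps failing is retried with these delays and is then no longer
# selected for processing.
```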
@@ -0,0 +1,9 @@
+ module Postqueue
+   def self.logger=(logger)
+     @logger ||= logger
+   end
+
+   def self.logger
+     @logger ||= Logger.new(STDOUT)
+   end
+ end
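A small sketch for the logger hooks above. Note the writer uses `||=`, so (as written) an assignment only takes effect if no logger has been set or lazily created yet; configure it early. The log path is hypothetical.

```ruby
require "logger"

# Assign before anything reads Postqueue.logger (which would otherwise default to STDOUT).
Postqueue.logger = Logger.new("log/postqueue.log")  # hypothetical path
Postqueue.logger.level = Logger::INFO
```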
@@ -0,0 +1,60 @@
+ module Postqueue
+   class MissingHandler < RuntimeError
+     attr_reader :queue, :op, :entity_ids
+
+     def initialize(queue:, op:, entity_ids:)
+       @queue = queue
+       @op = op
+       @entity_ids = entity_ids
+     end
+
+     def to_s
+       "#{queue.item_class.table_name}: Unknown operation #{op} with #{entity_ids.count} entities"
+     end
+   end
+
+   class Queue
+     Timing = Struct.new(:avg_queue_time, :max_queue_time, :total_processing_time, :processing_time)
+
+     def on(op, &block)
+       raise ArgumentError, "Invalid op #{op.inspect}, must be a string" unless op.is_a?(String)
+       callbacks[op] = block
+       self
+     end
+
+     private
+
+     def callbacks
+       @callbacks ||= {}
+     end
+
+     def callback_for(op:)
+       callbacks[op] || callbacks['*']
+     end
+
+     def on_missing_handler(op:, entity_ids:)
+       raise MissingHandler.new(queue: self, op: op, entity_ids: entity_ids)
+     end
+
+     private
+
+     def run_callback(op:, entity_ids:)
+       queue_times = item_class.find_by_sql <<-SQL
+         SELECT extract('epoch' from AVG(now() - created_at)) AS avg,
+                extract('epoch' from MAX(now() - created_at)) AS max
+         FROM #{item_class.table_name} WHERE entity_id IN (#{entity_ids.join(',')})
+       SQL
+       queue_time = queue_times.first
+
+       total_processing_time = Benchmark.realtime do
+         if callback = callback_for(op: op)
+           callback.call(op, entity_ids)
+         else
+           on_missing_handler(op: op, entity_ids: entity_ids)
+         end
+       end
+
+       Timing.new(queue_time.avg, queue_time.max, total_processing_time, total_processing_time / entity_ids.length)
+     end
+   end
+ end
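The `on` method above is how 0.4.0 attaches processing callbacks to a queue; `'*'` acts as a catch-all, and an op without any matching callback raises `Postqueue::MissingHandler`. A brief sketch (op names and the `Product` model are illustrative):

```ruby
queue = Postqueue.new

queue.on "product/reindex" do |op, entity_ids|
  # entity_ids is always an array, even when only one item was selected
  Product.where(id: entity_ids).find_each(&:reindex)   # hypothetical model/method
end

queue.on "*" do |op, entity_ids|
  # fallback for any other op; without it, unknown ops raise Postqueue::MissingHandler
end
```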
@@ -0,0 +1,22 @@
+ module Postqueue
+   # The Postqueue processor processes items in a single Postqueue table.
+   class Queue
+     private
+
+     def on_processing(op, entity_ids, timing)
+       msg = "processing '#{op}' for id(s) #{entity_ids.join(',')}: "
+       msg += "processing #{entity_ids.length} items took #{'%.3f secs' % timing.total_processing_time}"
+
+       msg += ", queue_time: avg: #{'%.3f secs' % timing.avg_queue_time}/max: #{'%.3f secs' % timing.max_queue_time}"
+       logger.info msg
+     end
+
+     def on_exception(exception, op, entity_ids)
+       logger.warn "processing '#{op}' for id(s) #{entity_ids.inspect}: caught #{exception}"
+     end
+
+     def logger
+       Postqueue.logger
+     end
+   end
+ end
@@ -0,0 +1,57 @@
+ module Postqueue
+   # The Postqueue processor processes items in a single Postqueue table.
+   class Queue
+     # Processes up to batch_size entries
+     #
+     #   process batch_size: 100
+     def process(op: nil, batch_size: 100)
+       item_class.transaction do
+         process_inside_transaction(op: op, batch_size: batch_size)
+       end
+     end
+
+     # Processes a single entry
+     def process_one(op: nil, &block)
+       process(op: op, batch_size: 1)
+     end
+
+     def process_until_empty(op: nil, batch_size: 100)
+       count = 0
+       loop do
+         processed_items = process(op: op, batch_size: batch_size)
+         break if processed_items == 0
+         count += processed_items
+       end
+       count
+     end
+
+     private
+
+     # The actual processing. Returns the number of items processed in this batch.
+     def process_inside_transaction(op:, batch_size:)
+       items = select_and_lock_batch(op: op, max_batch_size: batch_size)
+       match = items.first
+       return 0 unless match
+
+       entity_ids = items.map(&:entity_id)
+       timing = run_callback(op: match.op, entity_ids: entity_ids)
+
+       on_processing(match.op, entity_ids, timing)
+       item_class.where(id: items.map(&:id)).delete_all
+
+       # even though we try not to enqueue duplicates we cannot guarantee that,
+       # since concurrent enqueue transactions might still insert duplicates.
+       # That's why we explicitly remove all non-failed duplicates here.
+       if idempotent_operation?(match.op)
+         duplicates = select_and_lock_duplicates(op: match.op, entity_ids: entity_ids)
+         item_class.where(id: duplicates.map(&:id)).delete_all unless duplicates.empty?
+       end
+
+       entity_ids.length
+     rescue => e
+       on_exception(e, match.op, entity_ids)
+       item_class.postpone items.map(&:id)
+       raise
+     end
+   end
+ end
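Putting the processing methods above together, a simple polling worker could look like the sketch below; it assumes a `queue` that already has callbacks registered.

```ruby
loop do
  processed = queue.process_until_empty(batch_size: 100)
  sleep 5 if processed == 0   # queue drained; back off a little before polling again
end
```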
@@ -1,15 +1,20 @@
  module Postqueue
-   class Base
-     # Select and lock up to \a limit unlocked items in the queue.
+   class Queue
+     # Select and lock up to \a limit unlocked items in the queue. Used by
+     # select_and_lock_batch.
      def select_and_lock(relation, limit:)
        # Ordering by next_run_at and id should not strictly be necessary, but helps
        # processing entries in the passed in order when enqueued at the same time.
-       relation = relation.where("failed_attempts < ? AND next_run_at < ?", MAX_ATTEMPTS, Time.now).order(:next_run_at, :id)
+       relation = relation
+                  .select(:id, :entity_id, :op)
+                  .where("failed_attempts < ? AND next_run_at < ?", max_attemps, Time.now)
+                  .order(:next_run_at, :id)

        # FOR UPDATE SKIP LOCKED selects and locks entries, but skips those that
        # are already locked - preventing this transaction from being locked.
        sql = relation.to_sql + " FOR UPDATE SKIP LOCKED"
        sql += " LIMIT #{limit}" if limit
+
        item_class.find_by_sql(sql)
      end

@@ -20,25 +25,35 @@ module Postqueue
      # passed in, that one is chosen as a filter condition, otherwise the op value
      # of the first queue entry is used instead.
      #
-     # This method will at maximum select and lock batch_size items. If the batch_size
-     # returned by the #batch_size method is smaller than the passed in value here
-     # that one is used instead.
-     def select_and_lock_batch(op:, batch_size:, &_block)
+     # This method will at maximum select and lock \a batch_size items.
+     # If the \a batch_size configured in the queue is smaller than the value
+     # passed in here that one is used instead.
+     #
+     # Returns an array of item objects.
+     def select_and_lock_batch(op:, max_batch_size:)
        relation = item_class.all
        relation = relation.where(op: op) if op

        match = select_and_lock(relation, limit: 1).first
        return [] unless match

-       batch_size = calculate_batch_size(op: match.op, max_batch_size: batch_size)
+       batch_size = calculate_batch_size(op: match.op, max_batch_size: max_batch_size)
        return [ match ] if batch_size <= 1

        batch_relation = relation.where(op: match.op)
        select_and_lock(batch_relation, limit: batch_size)
      end

+     def select_and_lock_duplicates(op:, entity_ids:)
+       raise ArgumentError, "Missing op argument" unless op
+       return [] if entity_ids.empty?
+
+       relation = item_class.where(op: op, entity_id: entity_ids)
+       select_and_lock(relation, limit: nil)
+     end
+
      def calculate_batch_size(op:, max_batch_size:)
-       recommended_batch_size = batch_size(op: op) || 1
+       recommended_batch_size = batch_size(op: op)
        return 1 if recommended_batch_size < 2
        return recommended_batch_size unless max_batch_size
        max_batch_size < recommended_batch_size ? max_batch_size : recommended_batch_size
@@ -0,0 +1,61 @@
+ module Postqueue
+   class Queue
+     # The AR::Base class to use. You would only change this if you want to run
+     # the queue in a different database or in a different table.
+     attr_accessor :item_class
+
+     # The default batch size. Will be used if no specific batch size is defined
+     # for an operation.
+     attr_accessor :default_batch_size
+
+     # batch size for a given op
+     attr_reader :batch_sizes
+
+     # maximum number of processing attempts.
+     attr_reader :max_attemps
+
+     def async_processing?
+       @async_processing
+     end
+
+     attr_writer :async_processing
+
+     def initialize(&block)
+       @batch_sizes = {}
+       @item_class = ::Postqueue::Item
+       @default_batch_size = 1
+       @max_attemps = 5
+       @async_processing = Postqueue.async_processing?
+
+       yield self if block
+     end
+
+     def batch_size(op:)
+       batch_sizes[op] || default_batch_size || 1
+     end
+
+     def idempotent_operations
+       @idempotent_operations ||= {}
+     end
+
+     def idempotent_operation?(op)
+       idempotent_operations.fetch(op) { idempotent_operations.fetch('*', false) }
+     end
+
+     def idempotent_operation(op, flag = true)
+       idempotent_operations[op] = flag
+     end
+
+     def enqueue(op:, entity_id:)
+       enqueued_items = item_class.enqueue op: op, entity_id: entity_id, ignore_duplicates: idempotent_operation?(op)
+       return unless enqueued_items > 0
+
+       process_until_empty(op: op) unless async_processing?
+     end
+   end
+ end
+
+ require_relative "queue/select_and_lock"
+ require_relative "queue/processing"
+ require_relative "queue/callback"
+ require_relative "queue/logging"
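A quick sketch of how the lookup helpers above resolve their values for a configured queue; the op names are illustrative.

```ruby
queue = Postqueue.new do |q|
  q.batch_sizes["batchable"] = 10     # per-op batch size
  q.idempotent_operation "dedup/op"   # mark one op as idempotent
  # q.idempotent_operation "*"        # ...or use the "*" wildcard for all ops
end

queue.batch_size(op: "batchable")        # => 10 (per-op entry)
queue.batch_size(op: "other")            # => 1  (falls back to default_batch_size)
queue.idempotent_operation?("dedup/op")  # => true
queue.idempotent_operation?("other")     # => false (no "*" wildcard configured here)
```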
@@ -1,3 +1,3 @@
  module Postqueue
-   VERSION = "0.2.1"
+   VERSION = "0.4.0"
  end
data/lib/postqueue.rb CHANGED
@@ -1,8 +1,22 @@
- require "postqueue/item"
- require "postqueue/base"
- require "postqueue/version"
+ require_relative "postqueue/logger"
+ require_relative "postqueue/item"
+ require_relative "postqueue/version"
+ require_relative "postqueue/queue"

  module Postqueue
+   def self.new(*args, &block)
+     ::Postqueue::Queue.new(*args, &block)
+   end
+
+   def self.async_processing=(async_processing)
+     @async_processing = async_processing
+   end
+
+   def self.async_processing?
+     @async_processing
+   end
+
+   self.async_processing = true
  end

- # require 'postqueue/railtie' if defined?(Rails)
+ # require_relative 'postqueue/railtie' if defined?(Rails)
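Note that a queue captures `Postqueue.async_processing?` when it is constructed (see `Queue#initialize` above), so the global test-mode switch from the README should be flipped before queues are built, for example in a spec helper. A sketch:

```ruby
# e.g. in spec_helper.rb (hypothetical placement)
Postqueue.async_processing = false

queue = Postqueue.new
queue.async_processing?   # => false: enqueue now processes items synchronously
```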
@@ -0,0 +1,39 @@
+ module AdvisoryLock
+   ADVISORY_LOCK = self
+
+   def self.included(base)
+     base.extend ClassMethods
+   end
+
+   module ClassMethods
+     def exclusive(lock_identifier = self.class.name.hash, &block)
+       ADVISORY_LOCK.exclusive(lock_identifier, connection, &block)
+     end
+   end
+
+   module Implementation
+     def exclusive(lock_identifier, connection = ActiveRecord::Base.connection, &_block)
+       if obtained_lock?(lock_identifier, connection)
+         begin
+           yield
+         ensure
+           release_lock(lock_identifier, connection)
+         end
+       else
+         raise "Cannot get lock #{lock_identifier.inspect}"
+       end
+     end
+
+     private
+
+     def obtained_lock?(lock_identifier, connection)
+       connection.select_value("select pg_try_advisory_lock(#{lock_identifier})")
+     end
+
+     def release_lock(lock_identifier, connection)
+       connection.execute "select pg_advisory_unlock(#{lock_identifier})"
+     end
+   end
+
+   extend Implementation
+ end
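A usage sketch for the module above. The lock id is arbitrary; note that `pg_try_advisory_lock` does not wait, so a concurrent caller holding the same id makes `exclusive` raise instead of block. An established ActiveRecord connection is assumed.

```ruby
AdvisoryLock.exclusive(4711) do
  # Only one process at a time gets here for lock id 4711,
  # e.g. to run a single exclusive maintenance task against the queue table.
end
```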