litestack 0.1.5 → 0.1.7

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 88034778fbac75441d1a9ed2e066ad6ba1aa4962eb48629de1d9ba7ae85f3468
- data.tar.gz: d6ac66ffd1e78856bd74aa1d88163b95581494aaa481de88b673d84b5a8b8ff6
+ metadata.gz: 2662941f303da99554039370e6e53e4c13956c52070e93ad6862742881ca8063
+ data.tar.gz: ccd5583a9e0e7f5c559f8dafdde913bba256b7b5a679fe927cd31805c1289022
  SHA512:
- metadata.gz: 37043f1eab519ea41e81e5b9cb6b20798ea5f6e7ed7ce99d8105454067129e7f49f4b8357b0855948e6df51bf6d466966802085b67943b1f9416ee90f443502a
- data.tar.gz: 602d0e03d53eeb8e780c31b2f9bb1b3add550035c3cb3fd07f1d14fe94d94aea50a93dea19ab819454465c7824249bb40f7768ec4eebf6662af713218175071d
+ metadata.gz: 1a139b3c42cd3f9a327fc7d8f1487dab2e95f2c88d07e063eecc2e438c91236f301a8e3928c9a3b41030fd8c08f4c118c449e56ea4be35b73b00fe2aa240faf8
+ data.tar.gz: 409221a087df5707bd6da477bb9091daf2ee0fc5f8114d71a7666324676bc62e5d77b6edfd1e3e7d29d601985f7936b2e3545e618b1c888b6c7a748544bbafde
data/CHANGELOG.md CHANGED
@@ -1,5 +1,19 @@
  ## [Unreleased]

+ ## [0.1.7] - 2022-03-05
+
+ - Code cleanup, removal of references to older name
+ - Fix for the litedb rake tasks (thanks: netmute)
+ - More fixes for the new concurrency model
+ - Introduced a logger for the Litejobqueue (doesn't work with Polyphony, fix should come soon)
+
+ ## [0.1.6] - 2022-03-03
+
+ - Revamped the locking model, more robust, minimal performance hit
+ - Introduced a new resource pooling class
+ - Litecache and Litejob now use the resource pool
+ - Much less memory usage for Litecache and Litejob
+
  ## [0.1.0] - 2022-02-26

  - Initial release
data/README.md CHANGED
@@ -1,9 +1,9 @@
  ![litestack](https://github.com/oldmoe/litestack/blob/master/assets/litestack_logo_teal_large.png?raw=true)


- litestack is a revolutionary gem for Ruby and Ruby on Rails that provides an all-in-one solution for web application development. It exploits the power and embeddedness of SQLite to include a full-fledged SQL database, a fast cache, a robust job queue, and a simple yet performant full-text search all in a single package.
+ litestack is a revolutionary gem for Ruby and Ruby on Rails that provides an all-in-one solution for web application development. It exploits the power and embeddedness of SQLite to include a full-fledged SQL database, a fast cache and a robust job queue all in a single package.

- Compared to conventional approaches that require separate servers and databases, LiteStack offers superior performance, efficiency, ease of use, and cost savings. Its embedded database and cache reduce memory and CPU usage, while its simple interface streamlines the development process. Overall, LiteStack sets a new standard for web application development and is an excellent choice for those who demand speed, efficiency, and simplicity.
+ Compared to conventional approaches that require separate servers and databases, Litestack offers superior performance, efficiency, ease of use, and cost savings. Its embedded database and cache reduce memory and CPU usage, while its simple interface streamlines the development process. Overall, LiteStack sets a new standard for web application development and is an excellent choice for those who demand speed, efficiency, and simplicity.

  You can read more about why litestack can be a good choice for your next web application **[here](WHYLITESTACK.md)**, you might also be interested in litestack **[benchmarks](BENCHMARKS.md)**.

@@ -83,7 +83,7 @@ adapter: litedb
  litedb offers integration with the Sequel database toolkit and can be configured as follows

  ```ruby
- DB = Sequel.conncet("litedb://path_to_db_file")
+ DB = Sequel.connect("litedb://path_to_db_file")
  ```


@@ -152,9 +152,9 @@ You can add more configuration in litejob.yml (or config/litejob.yml if you are

  ```yaml
  queues:
- - [default 1]
- - [urgent 5]
- - [critical 10 "spawn"]
+ - [default, 1]
+ - [urgent, 5]
+ - [critical, 10, "spawn"]
  ```

  The queues need to include a name and a priority (a number between 1 and 10) and can also optionally add the token "spawn", which means every job will run it its own concurrency context (thread or fiber)
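
A quick illustration of how such a queue declaration is consumed on the Ruby side may help here. The sketch below is assumed from the Litejob API visible elsewhere in this diff (`include Litejob`, the commented `self.queue =` line and `perform_async` in `data/bench/uljob.rb`); the class name, queue choice and job body are purely illustrative.

```ruby
# Minimal Litejob sketch, assuming the API shown in bench/uljob.rb in this diff.
# WelcomeEmailJob and the :urgent queue are hypothetical examples.
require 'litestack'

class WelcomeEmailJob
  include Litejob

  self.queue = :urgent # should match a queue declared in litejob.yml

  def perform(user_id)
    # do the actual work here (delivery is stubbed for the sketch)
    puts "sending welcome email to user ##{user_id}"
  end
end

# enqueue asynchronously; a worker picks it up in its own thread or fiber,
# or in a freshly spawned context when the queue carries the "spawn" token
WelcomeEmailJob.perform_async(42)
```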
data/bench/bench.rb CHANGED
@@ -2,8 +2,8 @@ require 'sqlite3'

  def bench(msg, iterations=1000)
  GC.start
- GC.compact
- print "Starting #{iterations} iterations of #{msg} ... "
+ #GC.compact
+ STDERR.puts "Starting #{iterations} iterations of #{msg} ... "
  t1 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  iterations.times do |i|
  yield i
@@ -12,7 +12,7 @@ def bench(msg, iterations=1000)
  time = ((t2 - t1)*1000).to_i.to_f / 1000 rescue 0
  ips = ((iterations/(t2-t1))*100).to_i.to_f / 100 rescue "infinity?"
  #{m: msg, t: time, ips: iteratinos/time, i: iterations}
- puts "finished in #{time} seconds (#{ips} ips)"
+ STDERR.puts " .. finished in #{time} seconds (#{ips} ips)"
  end

  @db = SQLite3::Database.new(":memory:") # sqlite database for fast random string generation
@@ -9,7 +9,7 @@ Fiber.set_scheduler Async::Scheduler.new
  Fiber.scheduler.run

  require_relative '../lib/litestack'
-
+ #require 'litestack'

  cache = Litecache.new({path: '../db/cache.db'}) # default settings
  redis = Redis.new # default settings
@@ -1,29 +1,45 @@
- #require 'polyphony'
- require 'async/scheduler'
  require './bench'

- Fiber.set_scheduler Async::Scheduler.new
+ count = ARGV[0].to_i rescue 1000
+ env = ARGV[1] || "t"
+ delay = ARGV[2].to_f rescue 0

- count = 10000
+ # Sidekiq bench
+ ###############
  require './skjob.rb'
- require './uljob.rb'
-
- puts Litesupport.environment

- t = Time.now.to_f
+ t = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  puts "make sure sidekiq is started with skjob.rb as the job"
  bench("enqueuing sidekiq jobs", count) do |i|
- SidekiqJob.perform_async(count, t)
+ SidekiqJob.perform_async(count, t, delay)
  end

  puts "Don't forget to check the sidekiq log for processing time conclusion"

- t = Time.now.to_f
+ # Litejob bench
+ ###############
+
+ if env == "t" # threaded
+ # do nothing
+ elsif env == "a" # async
+ require 'async/scheduler'
+ Fiber.set_scheduler Async::Scheduler.new
+ elsif env == "p" # polyphony
+ require 'polyphony'
+ end
+
+ require './uljob.rb'
+
+ STDERR.puts "litejob started in #{Litesupport.environment} environmnet"
+
+ t = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  bench("enqueuing litejobs", count) do |i|
- MyJob.perform_async(count, t)
+ MyJob.perform_async(count, t, delay)
  end

- Fiber.scheduler.run
+ puts "Please wait for the benchmark to finish .."
+
+ Fiber.scheduler.run if env == "a"

  sleep

data/bench/bench_queue.rb CHANGED
@@ -5,11 +5,11 @@ count = 1000

  q = Litequeue.new({path: '../db/queue.db' })

- bench("enqueue", count) do |i|
+ bench("Litequeue enqueue", count) do |i|
  q.push i.to_s
  end

- bench("dequeue", count) do |i|
+ bench("Litequeue dequeue", count) do |i|
  q.pop
  end

data/bench/skjob.rb CHANGED
@@ -3,11 +3,13 @@ require 'sidekiq'
  class SidekiqJob
  include Sidekiq::Job
  @@count = 0
- def perform(count, time)
- sleep 0.1
+ def perform(count, time, sleep_interval = nil)
+ sleep sleep_interval if sleep_interval
  @@count += 1
  if @@count == count
- puts "finished in #{Time.now.to_f - time} seconds (#{count / (Time.now.to_f - time)} jps)"
+ now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
+ STDERR.puts "Sidekiq finished in #{now - time} seconds (#{count / (now - time)} jps)"
+ @@count = 0
  end
  end
  end
data/bench/uljob.rb CHANGED
@@ -4,12 +4,13 @@ require '../lib/litestack'
  class MyJob
  include Litejob
  @@count = 0
- # self.queue = :normal
- def perform(count, time)
- #sleep 0.1
+ #self.queue = :default
+ def perform(count, time, sleep_interval = nil)
+ sleep sleep_interval if sleep_interval
  @@count += 1
  if @@count == count
- puts "UL finished in #{Time.now.to_f - time} seconds (#{count / (Time.now.to_f - time)} jps)"
+ now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
+ STDERR.puts "Litejob finished in #{now - time} seconds (#{count / (now - time)} jps)"
  end
  end
  end
@@ -95,7 +95,7 @@ module ActiveRecord
  end

  module DatabaseTasks
- register_task(/ultralite/, "ActiveRecord::Tasks::LitedbDatabaseTasks")
+ register_task(/litedb/, "ActiveRecord::Tasks::LitedbDatabaseTasks")

  end
  end
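
This one-line change is the litedb rake-task fix called out in the 0.1.7 changelog: the task matcher now looks for `litedb` rather than the gem's older `ultralite` name. A `config/database.yml` sketch that would match this matcher is shown below; only the `adapter: litedb` key is taken from this diff (see the README hunk header `@@ -83,7 +83,7 @@ adapter: litedb`), the remaining keys are assumed from standard Rails SQLite-style configuration.

```yaml
# Hypothetical Rails config/database.yml sketch; `adapter: litedb` comes from
# this diff, the other keys are assumed standard Rails/SQLite settings.
default: &default
  adapter: litedb
  # path to the SQLite file used by litedb
  database: db/development.sqlite3

development:
  <<: *default
```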
@@ -25,12 +25,14 @@ module ActiveSupport
  def increment(key, amount = 1, options = nil)
  key = key.to_s
  options = merged_options(options)
- @cache.transaction(:immediate) do
+ # todo: fix me
+ # this is currently a hack to avoid dealing with Rails cache encoding and decoding
+ #@cache.transaction(:immediate) do
  if value = read(key, options)
  value = value.to_i + amount
  write(key, value, options)
  end
- end
+ #end
  end

  def decrement(key, amount = 1, options = nil)
@@ -72,9 +72,7 @@ class Litecache
  :counter => "SELECT count(*) FROM data",
  :sizer => "SELECT size.page_size * count.page_count FROM pragma_page_size() AS size, pragma_page_count() AS count"
  }
- @cache = create_store
- @stmts = {}
- @sql.each_pair{|k, v| @stmts[k] = @cache.prepare(v)}
+ @cache = Litesupport::Pool.new(1){create_db}
  @stats = {hit: 0, miss: 0}
  @last_visited = {}
  @running = true
@@ -85,12 +83,14 @@ class Litecache
  def set(key, value, expires_in = nil)
  key = key.to_s
  expires_in = @options[:expires_in] if expires_in.nil? or expires_in.zero?
- begin
- @stmts[:setter].execute!(key, value, expires_in)
- rescue SQLite3::FullException
- @stmts[:extra_pruner].execute!(0.2)
- @cache.execute("vacuum")
- retry
+ @cache.acquire do |cache|
+ begin
+ cache.stmts[:setter].execute!(key, value, expires_in)
+ rescue SQLite3::FullException
+ cache.stmts[:extra_pruner].execute!(0.2)
+ cache.execute("vacuum")
+ retry
+ end
  end
  return true
  end
@@ -99,15 +99,18 @@ class Litecache
  def set_unless_exists(key, value, expires_in = nil)
  key = key.to_s
  expires_in = @options[:expires_in] if expires_in.nil? or expires_in.zero?
- begin
- transaction(:immediate) do
- @stmts[:inserter].execute!(key, value, expires_in)
- changes = @cache.changes
+ changes = 0
+ @cache.acquire do |cache|
+ begin
+ transaction(:immediate) do
+ cache.stmts[:inserter].execute!(key, value, expires_in)
+ changes = @cache.changes
+ end
+ rescue SQLite3::FullException
+ cache.stmts[:extra_pruner].execute!(0.2)
+ cache.execute("vacuum")
+ retry
  end
- rescue SQLite3::FullException
- @stmts[:extra_pruner].execute!(0.2)
- @cache.execute("vacuum")
- retry
  end
  return changes > 0
  end
@@ -116,7 +119,7 @@ class Litecache
  # if the key doesn't exist or it is expired then null will be returned
  def get(key)
  key = key.to_s
- if record = @stmts[:getter].execute!(key)[0]
+ if record = @cache.acquire{|cache| cache.stmts[:getter].execute!(key)[0] }
  @last_visited[key] = true
  @stats[:hit] +=1
  return record[1]
@@ -127,14 +130,18 @@ class Litecache

  # delete a key, value pair from the cache
  def delete(key)
- @stmts[:deleter].execute!(key)
- return @cache.changes > 0
+ changes = 0
+ @cache.aquire do |cache|
+ cache.stmts[:deleter].execute!(key)
+ changes = cache.changes
+ end
+ return changes > 0
  end

  # increment an integer value by amount, optionally add an expiry value (in seconds)
  def increment(key, amount, expires_in = nil)
  expires_in = @expires_in unless expires_in
- @stmts[:incrementer].execute!(key.to_s, amount, expires_in)
+ @cache.acquire{|cache| cache.stmts[:incrementer].execute!(key.to_s, amount, expires_in) }
  end

  # decrement an integer value by amount, optionally add an expiry value (in seconds)
@@ -144,43 +151,43 @@ class Litecache

  # delete all entries in the cache up limit (ordered by LRU), if no limit is provided approximately 20% of the entries will be deleted
  def prune(limit=nil)
- if limit and limit.is_a? Integer
- @stmts[:limited_pruner].execute!(limit)
- elsif limit and limit.is_a? Float
- @stmts[:extra_pruner].execute!(limit)
- else
- @stmts[:pruner].execute!
+ @cache.acquire do |cache|
+ if limit and limit.is_a? Integer
+ cache.stmts[:limited_pruner].execute!(limit)
+ elsif limit and limit.is_a? Float
+ cache.stmts[:extra_pruner].execute!(limit)
+ else
+ cache.stmts[:pruner].execute!
+ end
  end
  end

  # return the number of key, value pairs in the cache
  def count
- @stmts[:counter].execute!.to_a[0][0]
+ @cache.acquire{|cache| cache.stmts[:counter].execute!.to_a[0][0] }
  end

  # return the actual size of the cache file
  def size
- @stmts[:sizer].execute!.to_a[0][0]
+ @cache.acquire{|cache| cache.stmts[:sizer].execute!.to_a[0][0] }
  end

  # delete all key, value pairs in the cache
  def clear
- @cache.execute("delete FROM data")
+ @cache.acquire{|cache| cache.execute("delete FROM data") }
  end

  # close the connection to the cache file
  def close
  @running = false
  #Litesupport.synchronize do
- @cache.close
+ @cache.acquire{|cache| cache.close }
  #end
  end

  # return the maximum size of the cache
  def max_size
- Litesupport.synchronize do
- @cache.get_first_value("SELECT s.page_size * c.max_page_count FROM pragma_page_size() as s, pragma_max_page_count() as c")
- end
+ @cache.acquire{|cache| cache.get_first_value("SELECT s.page_size * c.max_page_count FROM pragma_page_size() as s, pragma_max_page_count() as c") }
  end

  # hits and misses for get operations performed over this particular connection (not cache wide)
@@ -192,8 +199,10 @@ class Litecache

  # low level access to SQLite transactions, use with caution
  def transaction(mode)
- @cache.transaction(mode) do
- yield
+ @cache.acquire do |cache|
+ cache.transaction(mode) do
+ yield
+ end
  end
  end

@@ -201,35 +210,29 @@ class Litecache

  def spawn_worker
  Litesupport.spawn do
- # create a specific cache instance for this worker
- # to overcome SQLite3 Database is locked error
- cache = create_store
- stmts = {}
- [:toucher, :pruner, :extra_pruner].each do |stmt|
- stmts[stmt] = cache.prepare(@sql[stmt])
- end
  while @running
- Litesupport.synchronize do
+ @cache.acquire do |cache|
  begin
  cache.transaction(:immediate) do
  @last_visited.delete_if do |k| # there is a race condition here, but not a serious one
- stmts[:toucher].execute!(k) || true
+ cache.stmts[:toucher].execute!(k) || true
  end
- stmts[:pruner].execute!
+ cache.stmts[:pruner].execute!
  end
  rescue SQLite3::BusyException
  retry
  rescue SQLite3::FullException
- stmts[:extra_pruner].execute!(0.2)
+ cache.stmts[:extra_pruner].execute!(0.2)
+ rescue Exception
+ # database is closed
  end
  end
  sleep @options[:sleep_interval]
  end
- cache.close
  end
  end

- def create_store
+ def create_db
  db = Litesupport.create_db(@options[:path])
  db.synchronous = 0
  db.cache_size = 2000
@@ -240,6 +243,7 @@ class Litecache
  db.execute("CREATE table if not exists data(id text primary key, value text, expires_in integer, last_used integer)")
  db.execute("CREATE index if not exists expiry_index on data (expires_in)")
  db.execute("CREATE index if not exists last_used_index on data (last_used)")
+ @sql.each_pair{|k, v| db.stmts[k] = db.prepare(v)}
  db
  end

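The Litecache changes above (and the Litequeue changes further down) all follow one pattern: the single `@cache`/`@queue` connection becomes `Litesupport::Pool.new(1){create_db}`, and every statement runs inside `acquire`. This is the resource pooling class mentioned in the 0.1.6 changelog entry. The pool's own source is not part of this diff, so the sketch below only illustrates the assumed checkout/check-in pattern; `TinyPool` is a hypothetical name, not the gem's implementation.

```ruby
# Assumed sketch of the pool-with-acquire pattern used above; NOT the actual
# Litesupport::Pool source, which this diff does not include.
class TinyPool
  def initialize(count, &block)
    @connections = Queue.new              # Thread::Queue gives a thread-safe checkout
    count.times { @connections << block.call }
  end

  # check a connection out, yield it, and always return it to the pool
  def acquire
    conn = @connections.pop               # blocks until a connection is free
    begin
      yield conn
    ensure
      @connections << conn
    end
  end
end

# usage mirroring the diff: a pool holding a single SQLite connection
# pool = TinyPool.new(1) { SQLite3::Database.new("cache.db") }
# pool.acquire { |db| db.execute("select 1") }
```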
@@ -1,5 +1,5 @@
  # frozen_stringe_literal: true
-
+ require 'logger'
  require 'oj'
  require 'yaml'
  require_relative './litequeue'
@@ -32,9 +32,10 @@ class Litejobqueue
  DEFAULT_OPTIONS = {
  config_path: "./litejob.yml",
  path: "./queue.db",
- queues: [["default", 5]],
- workers: 1,
- sleep_intervals: [0.001, 0.005, 0.025, 0.125, 0.625, 3.125]
+ queues: [["default", 1]],
+ workers: 5,
+ logger: STDOUT,
+ sleep_intervals: [0.001, 0.005, 0.025, 0.125, 0.625, 1.0, 2.0]
  }

  @@queue = nil
@@ -57,7 +58,6 @@ class Litejobqueue
  #
  def initialize(options = {})
  @options = DEFAULT_OPTIONS.merge(options)
- @worker_sleep_index = 0
  config = YAML.load_file(@options[:config_path]) rescue {} # an empty hash won't hurt
  config.keys.each do |k| # symbolize keys
  config[k.to_sym] = config[k]
@@ -65,6 +65,11 @@ class Litejobqueue
  end
  @options.merge!(config)
  @queue = Litequeue.new(@options) # create a new queue object
+ if @options[:logger].respond_to? :info
+ @logger = @options[:logger]
+ else
+ @logger = Logger.new(@options[:logger])
+ end
  # group and order queues according to their priority
  pgroups = {}
  @options[:queues].each do |q|
@@ -72,7 +77,7 @@ class Litejobqueue
  pgroups[q[1]] << [q[0], q[2] == "spawn"]
  end
  @queues = pgroups.keys.sort.reverse.collect{|p| [p, pgroups[p]]}
- @workers = @options[:workers].times.collect{create_worker}
+ @workers = @options[:workers].times.collect{ create_worker }
  end

  # push a job to the queue
@@ -85,7 +90,10 @@ class Litejobqueue
  # jobqueue.push(EasyJob, params) # the job will be performed asynchronously
  def push(jobclass, params, delay=0, queue=nil)
  payload = Oj.dump([jobclass, params])
- @queue.push(payload, delay, queue)
+ #res =
+ res = @queue.push(payload, delay, queue)
+ @logger.info("[litejob]:[ENQ] id: #{res} class: #{jobclass}")
+ res
  end

  # delete a job from the job queue
@@ -99,7 +107,9 @@ class Litejobqueue
  # jobqueue.delete(id)
  def delete(id)
  job = @queue.delete(id)
+ @logger.info("[litejob]:[DEL] job: #{job}")
  Oj.load(job) if job
+ job
  end

  private
@@ -111,30 +121,36 @@ class Litejobqueue
  else
  yield
  end
- end
-
+ end
+
  # create a worker according to environment
  def create_worker
  Litesupport.spawn do
- # we create a queue object specific to this worker here
- # this way we can survive potential SQLite3 Database is locked errors
- queue = Litequeue.new(@options)
+ if @options[:logger].respond_to? :info
+ logger = @options[:logger]
+ else
+ logger = Logger.new(@options[:logger])
+ end
+ worker_sleep_index = 0
+ i = 0
  loop do
  processed = 0
  @queues.each do |level| # iterate through the levels
  level[1].each do |q| # iterate through the queues in the level
  index = 0
  max = level[0]
- while index < max && payload = queue.pop(q[0])
+ while index < max && payload = @queue.pop(q[0], 1) # fearlessly use the same queue object
  processed += 1
  index += 1
  begin
  id, job = payload[0], payload[1]
  job = Oj.load(job)
+ logger.info "[litejob]:[DEQ] id: #{id} class: #{job[0]}"
  klass = eval(job[0])
  schedule(q[1]) do # run the job in a new context
  begin
  klass.new.perform(*job[1])
+ logger.info "[litejob]:[END] id: #{id} class: #{job[0]}"
  rescue Exception => e
  puts e
  puts e.message
@@ -146,18 +162,18 @@ class Litejobqueue
  puts e.message
  puts e.backtrace
  end
- Litesupport.switch #give other context a chance to run here
+ Litesupport.switch #give other contexts a chance to run here
  end
  end
  end
  if processed == 0
- sleep @options[:sleep_intervals][@worker_sleep_index]
- @worker_sleep_index += 1 if @worker_sleep_index < @options[:sleep_intervals].length - 1
+ sleep @options[:sleep_intervals][worker_sleep_index]
+ worker_sleep_index += 1 if worker_sleep_index < @options[:sleep_intervals].length - 1
  else
- @worker_sleep_index = 0 # reset the index
+ worker_sleep_index = 0 # reset the index
  end
  end
  end
- end
+ end

  end
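
The hunks above introduce the Litejobqueue logger from the 0.1.7 changelog: `DEFAULT_OPTIONS` gains `logger: STDOUT`, and `initialize` uses the option as-is when it responds to `#info`, otherwise wraps it in `Logger.new`. A small usage sketch under those assumptions follows; the option values other than `logger` are simply the defaults shown in this diff, and calling `Litejobqueue.new` directly is assumed here rather than confirmed as the intended public entry point.

```ruby
# Sketch of configuring the new logger option, based on the initialize code
# in this diff; "./litejob.log" and the explicit option values are illustrative.
require 'logger'
require 'litestack'

jobqueue = Litejobqueue.new(
  path: "./queue.db",
  queues: [["default", 1], ["urgent", 5]],
  workers: 5,
  logger: Logger.new("./litejob.log") # or STDOUT, or any object responding to #info
)
```

Per the changelog entry above, the logger does not yet work under Polyphony.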
@@ -34,23 +34,28 @@ class Litequeue

  def initialize(options = {})
  @options = DEFAULT_OPTIONS.merge(options)
- @queue = create_db #(@options[:path])
- prepare
+ @queue = Litesupport::Pool.new(1){create_db} # delegate the db creation to the litepool
  end

  # push an item to the queue, optionally specifying the queue name (defaults to default) and after how many seconds it should be ready to pop (defaults to zero)
  # a unique job id is returned from this method, can be used later to delete it before it fires. You can push string, integer, float, true, false or nil values
  #
  def push(value, delay=0, queue='default')
- result = @push.execute!(queue, delay, value)[0]
+ # @todo - check if queue is busy, back off if it is
+ # also bring back the synchronize block, to prevent
+ # a race condition if a thread hits the busy handler
+ # before the current thread proceeds after a backoff
+ result = @queue.acquire { |q| q.stmts[:push].execute!(queue, delay, value)[0] }
  return result[0] if result
  end

  alias_method :"<<", :push

  # pop an item from the queue, optionally with a specific queue name (default queue name is 'default')
- def pop(queue='default')
- @pop.execute!(queue)[0]
+ def pop(queue='default', limit = 1)
+ res = @queue.acquire {|q| res = q.stmts[:pop].execute!(queue, limit)[0] }
+ #return res[0] if res.length == 1
+ #res
  end

  # delete an item from the queue
@@ -60,22 +65,22 @@ class Litequeue
  # queue.pop # => nil
  def delete(id, queue='default')
  fire_at, id = id.split("_")
- result = @deleter.execute!(queue, fire_at.to_i, id)[0]
+ result = @queue.acquire{|q| q.stmts[:delete].execute!(queue, fire_at.to_i, id)[0] }
  end

  # deletes all the entries in all queues, or if a queue name is given, deletes all entries in that specific queue
  def clear(queue=nil)
- @queue.execute("DELETE FROM _ul_queue_ WHERE iif(?, queue = ?, 1)", queue)
+ @queue.acquire{|q| q.execute("DELETE FROM _ul_queue_ WHERE iif(?, queue = ?, 1)", queue) }
  end

  # returns a count of entries in all queues, or if a queue name is given, reutrns the count of entries in that queue
  def count(queue=nil)
- @queue.get_first_value("SELECT count(*) FROM _ul_queue_ WHERE iif(?, queue = ?, 1)", queue)
+ @queue.acquire{|q| q.get_first_value("SELECT count(*) FROM _ul_queue_ WHERE iif(?, queue = ?, 1)", queue) }
  end

  # return the size of the queue file on disk
  def size
- @queue.get_first_value("SELECT size.page_size * count.page_count FROM pragma_page_size() AS size, pragma_page_count() AS count")
+ @queue.acquire{|q| q.get_first_value("SELECT size.page_size * count.page_count FROM pragma_page_size() AS size, pragma_page_count() AS count") }
  end

  private
@@ -86,14 +91,12 @@ class Litequeue
  db.wal_autocheckpoint = 10000
  db.mmap_size = @options[:mmap_size]
  db.execute("CREATE TABLE IF NOT EXISTS _ul_queue_(queue TEXT DEFAULT('default') NOT NULL ON CONFLICT REPLACE, fire_at INTEGER DEFAULT(unixepoch()) NOT NULL ON CONFLICT REPLACE, id TEXT DEFAULT(hex(randomblob(8)) || (strftime('%f') * 100)) NOT NULL ON CONFLICT REPLACE, value TEXT, created_at INTEGER DEFAULT(unixepoch()) NOT NULL ON CONFLICT REPLACE, PRIMARY KEY(queue, fire_at ASC, id) ) WITHOUT ROWID")
+ db.stmts[:push] = db.prepare("INSERT INTO _ul_queue_(queue, fire_at, value) VALUES ($1, (strftime('%s') + $2), $3) RETURNING fire_at || '-' || id")
+ db.stmts[:pop] = db.prepare("DELETE FROM _ul_queue_ WHERE (queue, fire_at, id) IN (SELECT queue, fire_at, id FROM _ul_queue_ WHERE queue = ifnull($1, 'default') AND fire_at <= (unixepoch()) ORDER BY fire_at ASC LIMIT ifnull($2, 1)) RETURNING fire_at || '-' || id, value")
+ db.stmts[:delete] = db.prepare("DELETE FROM _ul_queue_ WHERE queue = ifnull($1, 'default') AND fire_at = $2 AND id = $3 RETURNING value")
  db
  end

- def prepare
- @push = @queue.prepare("INSERT INTO _ul_queue_(queue, fire_at, value) VALUES ($1, (strftime('%s') + $2), $3) RETURNING fire_at || '-' || id")
- @pop = @queue.prepare("DELETE FROM _ul_queue_ WHERE (queue, fire_at, id) = (SELECT queue, min(fire_at), id FROM _ul_queue_ WHERE queue = ifnull($1, 'default') AND fire_at <= (unixepoch()) limit 1) RETURNING fire_at || '-' || id, value")
- @deleter = @queue.prepare("DELETE FROM _ul_queue_ WHERE queue = ifnull($1, 'default') AND fire_at = $2 AND id = $3 RETURNING value")
- end

  end
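
Taken together, the Litequeue changes move statement preparation into `create_db` (so the prepared statements live on the pooled connection) and add a `limit` argument to `pop`. A short usage sketch follows, assuming the public method signatures shown in this diff (`push(value, delay, queue)`, `pop(queue, limit)`, `delete(id, queue)`, `count(queue)`); the path and pushed values are illustrative.

```ruby
# Usage sketch for Litequeue based on the method signatures in this diff;
# the database path and payloads are made up for illustration.
require 'litestack'

queue = Litequeue.new(path: "./queue.db")

id = queue.push("hello", 0, "default")   # enqueue immediately on the default queue
queue.push("later", 60, "default")       # becomes poppable after roughly 60 seconds

puts queue.count("default")              # number of entries in that queue

entry = queue.pop("default", 1)          # pop up to one ready entry
# per the :pop statement above, a popped row carries its fire_at/id token and the stored value

queue.delete(id, "default") if id        # remove a pending entry by its id
```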