litestack 0.1.8 → 0.2.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: d08724a0b9293f55ebed24ba6d738b103795b69563a210c4454322fc382e174e
-  data.tar.gz: 01401bfca727b4ef9452a9efafba846c886845dd43c1d8f5f5ce84cee22ab61f
+  metadata.gz: aded5660c8623a899124bf67fbc10a6ccaa8d7f3ddedee663390d7161bbeb100
+  data.tar.gz: 995f3256dc8beaa01f906202639c057ddf0a8c401cf1daef3ba7de310ca7ccae
 SHA512:
-  metadata.gz: 729320670e62261596eabbfd8f8d931117507317127e9fd9e5a796928e7031418455ca351dd2de815d4860edc2e688c0bbdb79560d3907cb3545deb75a1b4fae
-  data.tar.gz: d75cc4f23694c726a361b0bc474f561b8ffc4db074ef275839917d4c82461aca7a1c4a0b0e939cd749a7e7afbb58de86f3de254a6adadfc7f6d91ea9021218f3
+  metadata.gz: 59764a40453a8e4a809ce079fea5e99263bd3db3570f45f7b7b85975f4c9ef63f7737a5e1d92b5122bd1fd3f582bafa185d362017a904cb3d9f0af6ac1237801
+  data.tar.gz: 3b59c9b49d965b0eb6820fa9cdb12924f65902323fa53e0d83d55338a7c3d8ef50c0a2259fa0b47121b42b1d896fd59d47772d6b92546d6bd4e84d34b89671d6
data/CHANGELOG.md CHANGED
@@ -1,6 +1,19 @@
 ## [Unreleased]
 
-## [0.1.8] - 2022-03-08
+## [0.2.1] - 2023-05-08
+
+- Fix a race condition in Litecable
+
+## [0.2.0] - 2023-05-08
+
+- Litecable, a SQLite driver for ActionCable
+- Litemetric for metrics collection support (experimental, disabled by default)
+- New schema for Litejob, old jobs are auto-migrated
+- Code refactoring, extraction of SQL statements to external files
+- Graceful shutdown support working properly
+- Fork resilience
+
+## [0.1.8] - 2023-03-08
 
 - More code cleanups, more test coverage
 - Retry support for jobs in Litejob
@@ -8,20 +21,20 @@
 - Initial graceful shutdown support for Litejob (incomplete)
 - More configuration options for Litejob
 
-## [0.1.7] - 2022-03-05
+## [0.1.7] - 2023-03-05
 
 - Code cleanup, removal of references to older name
 - Fix for the litedb rake tasks (thanks: netmute)
 - More fixes for the new concurrency model
 - Introduced a logger for the Litejobqueue (doesn't work with Polyphony, fix should come soon)
 
-## [0.1.6] - 2022-03-03
+## [0.1.6] - 2023-03-03
 
 - Revamped the locking model, more robust, minimal performance hit
 - Introduced a new resource pooling class
 - Litecache and Litejob now use the resource pool
 - Much less memory usage for Litecache and Litejob
 
-## [0.1.0] - 2022-02-26
+## [0.1.0] - 2023-02-26
 
 - Initial release
data/README.md CHANGED
@@ -16,12 +16,14 @@ litestack provides integration with popular libraries, including:
 - ActiveRecord
 - ActiveSupport::Cache
 - ActiveJob
+- ActionCable
 
 With litestack you only need to add a single gem to your app, replacing a host of other gems and services. For example, a typical Rails app using litestack will no longer need the following services:
 
 - Database Server (e.g. PostgreSQL, MySQL)
 - Cache Server (e.g. Redis, Memcached)
 - Job Processor (e.g. Sidekiq, Goodjob)
+- Pubsub Server (e.g. Redis, PostgreSQL)
 
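A minimal sketch of the "single gem" setup mentioned above, as it would appear in a Rails Gemfile (the version constraint is illustrative):

```ruby
# Gemfile — the constraint is illustrative, pin whatever release you need
gem "litestack", "~> 0.2"
```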
 To make it even more efficient, litestack will detect the presence of Fiber based IO frameworks like Async (e.g. when you use the Falcon web server) or Polyphony. It will then switch its background workers for caches and queues to fibers (using the semantics of the existing framework). This is done transparently and will generally lead to lower CPU and memory utilization.
 
@@ -50,6 +52,7 @@ litestack currently offers three main components
 - litedb
 - litecache
 - litejob
+- litecable
 
 > ![litedb](https://github.com/oldmoe/litestack/blob/master/assets/litedb_logo_teal.png?raw=true)
 
@@ -113,6 +116,8 @@ litecache spawns a background thread for cleanup purposes. In case it detects th
 
 > ![litejob](https://github.com/oldmoe/litestack/blob/master/assets/litejob_logo_teal.png?raw=true)
 
+More info about Litejob can be found in the [litejob guide](https://github.com/oldmoe/litestack/wiki/Litejob-guide)
+
 litejob is a fast and very efficient job queue processor for Ruby applications. It builds on top of SQLite as well, which provides transactional guarantees, persistence and exceptional performance.
 
 #### Direct litejob usage
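The hunk above cuts off at the "Direct litejob usage" heading. As a rough sketch of what direct usage looks like, based only on what is visible elsewhere in this diff (`include ::Litejob` and a `perform` method in the ActiveJob adapter); the `perform_async` enqueue call is an assumption, not confirmed by this diff:

```ruby
require 'litestack'

class EmailJob
  include ::Litejob # same mixin used by the Litejob ActiveJob adapter below

  # the job body, executed by a Litejobqueue worker
  def perform(recipient)
    puts "emailing #{recipient}"
  end
end

# Enqueueing: `perform_async` is assumed here for illustration only.
EmailJob.perform_async("user@example.com")
```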
@@ -159,6 +164,28 @@ queues:
 
 The queues need to include a name and a priority (a number between 1 and 10) and can also optionally add the token "spawn", which means every job will run in its own concurrency context (thread or fiber)
 
+> ![litecable](https://github.com/oldmoe/litestack/blob/master/assets/litecable_logo_teal.png?raw=true)
+
+#### ActionCable
+
+This is a drop-in replacement adapter for ActionCable that replaces `async` and other production adapters (e.g. PostgreSQL, Redis). This adapter is currently only tested in local (inline) mode.
+
+Getting up and running with litecable requires configuring your cable.yaml file under the config/ directory:
+
+cable.yaml
+```yaml
+development:
+  adapter: litecable
+
+test:
+  adapter: test
+
+staging:
+  adapter: litecable
+
+production:
+  adapter: litecable
+```
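Once the adapter is configured, broadcasting and subscribing use the standard Rails ActionCable API unchanged; a minimal sketch (channel and payload names are illustrative):

```ruby
# app/channels/chat_channel.rb — a regular ActionCable channel
class ChatChannel < ApplicationCable::Channel
  def subscribed
    stream_from "chat_#{params[:room]}"
  end
end

# anywhere in the app (controller, job, model callback):
ActionCable.server.broadcast("chat_42", { body: "hello" })
```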
 
 ## Contributing
 
Binary file
@@ -8,7 +8,7 @@ require 'async/scheduler'
 Fiber.set_scheduler Async::Scheduler.new
 Fiber.scheduler.run
 
-require_relative '../lib/litestack'
+require_relative '../lib/litestack/litecache'
 #require 'litestack'
 
 cache = Litecache.new({path: '../db/cache.db'}) # default settings
@@ -16,10 +16,10 @@ redis = Redis.new # default settings
 
 values = []
 keys = []
-count = 5
+count = 1000
 count.times { keys << random_str(10) }
 
-[10, 100, 1000, 10000, 100000, 1000000, 10000000, 100000000].each do |size|
+[10, 100, 1000, 10000].each do |size|
   count.times do
     values << random_str(size)
   end
@@ -35,8 +35,15 @@ count.times { keys << random_str(10) }
   cache.set(keys[i], values[i])
 end
 
+#bench("file writes", count) do |i|
+#  f = File.open("../files/#{keys[i]}.data", 'w+')
+#  f.write(values[i])
+#  f.close
+#end
+
+
 bench("Redis writes", count) do |i|
-  #redis.set(keys[i], values[i])
+  redis.set(keys[i], values[i])
 end
 
 puts "== Reads =="
@@ -44,8 +51,12 @@ count.times { keys << random_str(10) }
   cache.get(random_keys[i])
 end
 
+#bench("file reads", count) do |i|
+#  data = File.read("../files/#{keys[i]}.data")
+#end
+
 bench("Redis reads", count) do |i|
-  #redis.get(random_keys[i])
+  redis.get(random_keys[i])
 end
 puts "=========================================================="
 
@@ -0,0 +1,36 @@
+# frozen_string_literal: true
+
+require_relative '../../litestack/litecable'
+
+module ActionCable
+  module SubscriptionAdapter
+    class Litecable < ::Litecable # :nodoc:
+
+      attr_reader :logger, :server
+
+      prepend ChannelPrefix
+
+      DEFAULT_OPTIONS = {
+        config_path: "./config/litecable.yml",
+        path: "./db/cable.db",
+        sync: 0, # no need to sync at all
+        mmap_size: 16 * 1024 * 1024, # 16MB of memory to hold hot messages
+        expire_after: 10, # remove messages older than 10 seconds
+        listen_interval: 0.005, # check new messages every 5 milliseconds
+        metrics: false
+      }
+
+      def initialize(server, logger=nil)
+        @server = server
+        @logger = server.logger
+        super(DEFAULT_OPTIONS.dup)
+      end
+
+      def shutdown
+        close
+      end
+
+    end
+  end
+end
+
@@ -7,21 +7,12 @@ require "active_job"
 
 module ActiveJob
   module QueueAdapters
-    # == Ultralite adapter for Active Job
+    # == Litestack adapter for Active Job
     #
     #
     #   Rails.application.config.active_job.queue_adapter = :litejob
     class LitejobAdapter
-
-      DEFAULT_OPTIONS = {
-        config_path: "./config/litejob.yml",
-        path: "../db/queue.db",
-        queues: [["default", 1]],
-        logger: nil, # Rails performs its logging already
-        retries: 5, # It is recommended to stop retries at the Rails level
-        workers: 5
-      }
-
+
       def initialize(options={})
         # we currently don't honour individual options per job class
         # possible in the future?
@@ -40,6 +31,15 @@ module ActiveJob
 
       class Job # :nodoc:
 
+        DEFAULT_OPTIONS = {
+          config_path: "./config/litejob.yml",
+          path: "../db/queue.db",
+          queues: [["default", 1]],
+          logger: nil, # Rails performs its logging already
+          retries: 5, # It is recommended to stop retries at the Rails level
+          workers: 5
+        }
+
         include ::Litejob
 
         def perform(job_data)
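For context, pointing Rails at this adapter is the one-liner already quoted in the class comment above; a minimal sketch (the application and job class names are illustrative):

```ruby
# config/application.rb — standard ActiveJob wiring
module MyApp
  class Application < Rails::Application
    config.active_job.queue_adapter = :litejob
  end
end

# Regular ActiveJob jobs are then processed through Litejob transparently.
class WelcomeEmailJob < ApplicationJob
  queue_as :default

  def perform(user_id)
    # ...
  end
end
```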
@@ -0,0 +1,138 @@
+# frozen_string_literal: true
+
+# all components should require the support module
+require_relative 'litesupport'
+require_relative 'litemetric'
+
+require 'base64'
+require 'oj'
+
+class Litecable
+
+  include Litesupport::Liteconnection
+  include Litemetric::Measurable
+
+
+  DEFAULT_OPTIONS = {
+    config_path: "./litecable.yml",
+    path: "./cable.db",
+    sync: 0,
+    mmap_size: 16 * 1024 * 1024, # 16MB
+    expire_after: 5, # remove messages older than 5 seconds
+    listen_interval: 0.05, # check new messages every 50 milliseconds
+    metrics: false
+  }
+
+  def initialize(options = {})
+    @messages = []
+    init(options)
+  end
+
+  # broadcast a message to a specific channel
+  def broadcast(channel, payload=nil)
+    # queue the message; the broadcaster thread batches writes to the database (see create_broadcaster)
+    #run_stmt(:publish, channel.to_s, Oj.dump(payload), @pid)
+    # and deliver to local subscribers immediately
+    @mutex.synchronize{ @messages << [channel.to_s, Oj.dump(payload)] }
+    local_broadcast(channel, payload)
+  end
+
+  # subscribe to a channel, optionally providing a success callback proc
+  def subscribe(channel, subscriber, success_callback = nil)
+    @mutex.synchronize do
+      @subscribers[channel] = {} unless @subscribers[channel]
+      @subscribers[channel][subscriber] = true
+    end
+  end
+
+  # unsubscribe from a channel
+  def unsubscribe(channel, subscriber)
+    @mutex.synchronize do
+      @subscribers[channel].delete(subscriber) rescue nil
+    end
+  end
+
+  private
+
+  # deliver a payload to all local subscribers of a channel
+  def local_broadcast(channel, payload=nil)
+    return unless @subscribers[channel]
+    subscribers = []
+    @mutex.synchronize do
+      subscribers = @subscribers[channel].keys
+    end
+    subscribers.each do |subscriber|
+      subscriber.call(payload)
+    end
+  end
+
+  def setup
+    super # create connection
+    @pid = Process.pid
+    @subscribers = {}
+    @mutex = Litesupport::Mutex.new
+    @running = true
+    @listener = create_listener
+    @pruner = create_pruner
+    @broadcaster = create_broadcaster
+    @last_fetched_id = nil
+  end
+
+  # flush queued messages to the database in batches
+  def create_broadcaster
+    Litesupport.spawn do
+      while @running do
+        @mutex.synchronize do
+          if @messages.length > 0
+            run_sql("BEGIN IMMEDIATE")
+            while msg = @messages.shift
+              run_stmt(:publish, msg[0], msg[1], @pid)
+            end
+            run_sql("END")
+          end
+        end
+        sleep 0.02
+      end
+    end
+  end
+
+  # periodically delete expired messages
+  def create_pruner
+    Litesupport.spawn do
+      while @running do
+        run_stmt(:prune, @options[:expire_after])
+        sleep @options[:expire_after]
+      end
+    end
+  end
+
+  # poll for messages published by other processes and deliver them locally
+  def create_listener
+    Litesupport.spawn do
+      while @running do
+        @last_fetched_id ||= (run_stmt(:last_id)[0][0] || 0)
+        @logger.info @last_fetched_id
+        run_stmt(:fetch, @last_fetched_id, @pid).to_a.each do |msg|
+          @logger.info "RECEIVED #{msg}"
+          @last_fetched_id = msg[0]
+          local_broadcast(msg[1], Oj.load(msg[2]))
+        end
+        sleep @options[:listen_interval]
+      end
+    end
+  end
+
+  # create the SQLite connection, apply pending schema migrations and prepare statements
+  def create_connection
+    conn = super
+    conn.wal_autocheckpoint = 10000
+    sql = YAML.load_file("#{__dir__}/litecable.sql.yml")
+    version = conn.get_first_value("PRAGMA user_version")
+    sql["schema"].each_pair do |v, obj|
+      if v > version
+        conn.transaction do
+          obj.each{|k, s| conn.execute(s)}
+          conn.user_version = v
+        end
+      end
+    end
+    sql["stmts"].each { |k, v| conn.stmts[k.to_sym] = conn.prepare(v) }
+    conn
+  end
+
+end
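A rough sketch of driving this class directly, based only on the public methods above (`subscribe` takes any object responding to `call`, `broadcast` takes a channel and payload); the database path is illustrative:

```ruby
require_relative 'litecable'

cable = Litecable.new(path: "./cable.db")

# a subscriber is anything that responds to #call
printer = ->(payload) { puts "got: #{payload.inspect}" }
cable.subscribe("chat", printer)

# delivered to local subscribers immediately, persisted for other processes
cable.broadcast("chat", { "body" => "hello" })
```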
@@ -0,0 +1,24 @@
+schema:
+  1:
+    create_table_messages: >
+      CREATE TABLE IF NOT EXISTS messages(
+        id INTEGER PRIMARY KEY autoincrement,
+        channel TEXT NOT NULL,
+        value TEXT NOT NULL,
+        pid INTEGER,
+        created_at INTEGER NOT NULL ON CONFLICT REPLACE DEFAULT(unixepoch())
+      );
+    create_index_messages_by_date: >
+      CREATE INDEX IF NOT EXISTS messages_by_date ON messages(created_at);
+
+stmts:
+
+  publish: INSERT INTO messages(channel, value, pid) VALUES ($1, $2, $3)
+
+  last_id: SELECT max(id) FROM messages
+
+  fetch: SELECT id, channel, value FROM messages WHERE id > $1 and pid != $2
+
+  prune: DELETE FROM messages WHERE created_at < (unixepoch() - $1)
+
+  check_prune: SELECT count(*) FROM messages WHERE created_at < (unixepoch() - $1)
@@ -2,6 +2,7 @@
 
 # all components should require the support module
 require_relative 'litesupport'
+require_relative 'litemetric'
 
 ##
 #Litecache is a caching library for Ruby applications that is built on top of SQLite. It is designed to be simple to use, very fast, and feature-rich, providing developers with a reliable and efficient way to cache data.
@@ -16,6 +17,9 @@ require_relative 'litesupport'
 
 class Litecache
 
+  include Litesupport::Liteconnection
+  include Litemetric::Measurable
+
   # the default options for the cache
   # can be overriden by passing new options in a hash
   # to Litecache.new
@@ -29,12 +33,15 @@ class Litecache
 
   DEFAULT_OPTIONS = {
     path: "./cache.db",
+    config_path: "./litecache.yml",
+    sync: 0,
     expiry: 60 * 60 * 24 * 30, # one month
     size: 128 * 1024 * 1024, #128MB
     mmap_size: 128 * 1024 * 1024, #128MB
-    min_size: 32 * 1024, #32MB
+    min_size: 8 * 1024 * 1024, #8MB
     return_full_record: false, #only return the payload
-    sleep_interval: 1 # 1 second
+    sleep_interval: 1, # 1 second
+    metrics: false
   }
 
   # creates a new instance of Litecache
@@ -56,36 +63,20 @@ class Litecache
   #   litecache.close # optional, you can safely kill the process
 
   def initialize(options = {})
-    @options = DEFAULT_OPTIONS.merge(options)
-    @options[:size] = @options[:min_size] if @options[:size] < @options[:min_size]
-    @sql = {
-      :pruner => "DELETE FROM data WHERE expires_in <= $1",
-      :extra_pruner => "DELETE FROM data WHERE id IN (SELECT id FROM data ORDER BY last_used ASC LIMIT (SELECT CAST((count(*) * $1) AS int) FROM data))",
-      :limited_pruner => "DELETE FROM data WHERE id IN (SELECT id FROM data ORDER BY last_used asc limit $1)",
-      :toucher => "UPDATE data SET last_used = unixepoch('now') WHERE id = $1",
-      :setter => "INSERT into data (id, value, expires_in, last_used) VALUES ($1, $2, unixepoch('now') + $3, unixepoch('now')) on conflict(id) do UPDATE SET value = excluded.value, last_used = excluded.last_used, expires_in = excluded.expires_in",
-      :inserter => "INSERT into data (id, value, expires_in, last_used) VALUES ($1, $2, unixepoch('now') + $3, unixepoch('now')) on conflict(id) do UPDATE SET value = excluded.value, last_used = excluded.last_used, expires_in = excluded.expires_in WHERE id = $1 and expires_in <= unixepoch('now')",
-      :finder => "SELECT id FROM data WHERE id = $1",
-      :getter => "SELECT id, value, expires_in FROM data WHERE id = $1",
-      :deleter => "delete FROM data WHERE id = $1 returning value",
-      :incrementer => "INSERT into data (id, value, expires_in, last_used) VALUES ($1, $2, unixepoch('now') + $3, unixepoch('now')) on conflict(id) do UPDATE SET value = cast(value AS int) + cast(excluded.value as int), last_used = excluded.last_used, expires_in = excluded.expires_in",
-      :counter => "SELECT count(*) FROM data",
-      :sizer => "SELECT size.page_size * count.page_count FROM pragma_page_size() AS size, pragma_page_count() AS count"
-    }
-    @cache = Litesupport::Pool.new(1){create_db}
-    @stats = {hit: 0, miss: 0}
+    options[:size] = DEFAULT_OPTIONS[:min_size] if options[:size] && options[:size] < DEFAULT_OPTIONS[:min_size]
+    init(options)
     @last_visited = {}
-    @running = true
-    @bgthread = spawn_worker
+    collect_metrics if @options[:metrics]
   end
 
   # add a key, value pair to the cache, with an optional expiry value (number of seconds)
   def set(key, value, expires_in = nil)
     key = key.to_s
     expires_in = @options[:expires_in] if expires_in.nil? or expires_in.zero?
-    @cache.acquire do |cache|
+    @conn.acquire do |cache|
       begin
         cache.stmts[:setter].execute!(key, value, expires_in)
+        capture(:write, key)
       rescue SQLite3::FullException
         cache.stmts[:extra_pruner].execute!(0.2)
         cache.execute("vacuum")
@@ -100,12 +91,13 @@ class Litecache
     key = key.to_s
     expires_in = @options[:expires_in] if expires_in.nil? or expires_in.zero?
     changes = 0
-    @cache.acquire do |cache|
+    @conn.acquire do |cache|
       begin
-        transaction(:immediate) do
+        cache.transaction(:immediate) do
           cache.stmts[:inserter].execute!(key, value, expires_in)
-          changes = @cache.changes
+          changes = cache.changes
         end
+        capture(:write, key)
       rescue SQLite3::FullException
         cache.stmts[:extra_pruner].execute!(0.2)
         cache.execute("vacuum")
@@ -119,19 +111,19 @@ class Litecache
   # if the key doesn't exist or it is expired then null will be returned
   def get(key)
     key = key.to_s
-    if record = @cache.acquire{|cache| cache.stmts[:getter].execute!(key)[0] }
+    if record = @conn.acquire{|cache| cache.stmts[:getter].execute!(key)[0] }
       @last_visited[key] = true
-      @stats[:hit] +=1
+      capture(:hit, key)
       return record[1]
     end
-    @stats[:miss] += 1
+    capture(:miss, key)
     nil
   end
 
   # delete a key, value pair from the cache
   def delete(key)
     changes = 0
-    @cache.aquire do |cache|
+    @conn.acquire do |cache|
       cache.stmts[:deleter].execute!(key)
       changes = cache.changes
     end
@@ -141,7 +133,7 @@ class Litecache
   # increment an integer value by amount, optionally add an expiry value (in seconds)
   def increment(key, amount, expires_in = nil)
     expires_in = @expires_in unless expires_in
-    @cache.acquire{|cache| cache.stmts[:incrementer].execute!(key.to_s, amount, expires_in) }
+    @conn.acquire{|cache| cache.stmts[:incrementer].execute!(key.to_s, amount, expires_in) }
   end
 
   # decrement an integer value by amount, optionally add an expiry value (in seconds)
@@ -151,7 +143,7 @@ class Litecache
 
   # delete all entries in the cache up limit (ordered by LRU), if no limit is provided approximately 20% of the entries will be deleted
   def prune(limit=nil)
-    @cache.acquire do |cache|
+    @conn.acquire do |cache|
       if limit and limit.is_a? Integer
         cache.stmts[:limited_pruner].execute!(limit)
       elsif limit and limit.is_a? Float
@@ -164,42 +156,34 @@ class Litecache
 
   # return the number of key, value pairs in the cache
   def count
-    @cache.acquire{|cache| cache.stmts[:counter].execute!.to_a[0][0] }
+    run_stmt(:counter)[0][0]
   end
 
   # return the actual size of the cache file
   def size
-    @cache.acquire{|cache| cache.stmts[:sizer].execute!.to_a[0][0] }
+    run_stmt(:sizer)[0][0]
   end
 
   # delete all key, value pairs in the cache
   def clear
-    @cache.acquire{|cache| cache.execute("delete FROM data") }
+    run_sql("delete FROM data")
  end
 
   # close the connection to the cache file
   def close
     @running = false
-    #Litesupport.synchronize do
-      @cache.acquire{|cache| cache.close }
-    #end
+    super
   end
 
   # return the maximum size of the cache
   def max_size
-    @cache.acquire{|cache| cache.get_first_value("SELECT s.page_size * c.max_page_count FROM pragma_page_size() as s, pragma_max_page_count() as c") }
+    run_sql("SELECT s.page_size * c.max_page_count FROM pragma_page_size() as s, pragma_max_page_count() as c")[0][0]
   end
 
-  # hits and misses for get operations performed over this particular connection (not cache wide)
-  #
-  #   litecache.stats # => {hit: 543, miss: 31}
-  def stats
-    @stats
-  end
-
   # low level access to SQLite transactions, use with caution
-  def transaction(mode)
-    @cache.acquire do |cache|
+  def transaction(mode, acquire=true)
+    return cache.transaction(mode){yield} unless acquire
+    @conn.acquire do |cache|
       cache.transaction(mode) do
         yield
      end
@@ -208,10 +192,15 @@ class Litecache
   end
 
   private
+  def setup
+    super # create connection
+    @bgthread = spawn_worker # create background pruner thread
+  end
+
   def spawn_worker
     Litesupport.spawn do
       while @running
-        @cache.acquire do |cache|
+        @conn.acquire do |cache|
          begin
            cache.transaction(:immediate) do
              @last_visited.delete_if do |k| # there is a race condition here, but not a serious one
@@ -232,19 +221,24 @@ class Litecache
          end
        end
      end
 
-  def create_db
-    db = Litesupport.create_db(@options[:path])
-    db.synchronous = 0
-    db.cache_size = 2000
-    db.journal_size_limit = [(@options[:size]/2).to_i, @options[:min_size]].min
-    db.mmap_size = @options[:mmap_size]
-    db.max_page_count = (@options[:size] / db.page_size).to_i
-    db.case_sensitive_like = true
-    db.execute("CREATE table if not exists data(id text primary key, value text, expires_in integer, last_used integer)")
-    db.execute("CREATE index if not exists expiry_index on data (expires_in)")
-    db.execute("CREATE index if not exists last_used_index on data (last_used)")
-    @sql.each_pair{|k, v| db.stmts[k] = db.prepare(v)}
-    db
+  def create_connection
+    conn = super
+    conn.cache_size = 2000
+    conn.journal_size_limit = [(@options[:size]/2).to_i, @options[:min_size]].min
+    conn.max_page_count = (@options[:size] / conn.page_size).to_i
+    conn.case_sensitive_like = true
+    sql = YAML.load_file("#{__dir__}/litecache.sql.yml")
+    version = conn.get_first_value("PRAGMA user_version")
+    sql["schema"].each_pair do |v, obj|
+      if v > version
+        conn.transaction do
+          obj.each{|k, s| conn.execute(s)}
+          conn.user_version = v
+        end
+      end
+    end
+    sql["stmts"].each { |k, v| conn.stmts[k.to_sym] = conn.prepare(v) }
+    conn
   end
 
 end
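A short usage sketch of the public Litecache API shown above (`set`/`get`/`increment`/`delete`/`close`); the path is illustrative and mirrors the benchmark earlier in this diff:

```ruby
require_relative 'litecache'

cache = Litecache.new(path: "./cache.db")

cache.set("greeting", "hello", 60)   # expires in 60 seconds
cache.get("greeting")                # => "hello"
cache.increment("counter", 1)
cache.delete("greeting")
cache.close                          # optional, you can safely kill the process
```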
@@ -0,0 +1,28 @@
+schema:
+  1:
+    create_table_data: >
+      CREATE table if not exists data(id text primary key, value text, expires_in integer, last_used integer)
+    create_expiry_index: >
+      CREATE index if not exists expiry_index on data (expires_in)
+    create_last_used_index: >
+      CREATE index if not exists last_used_index on data (last_used)
+
+stmts:
+  pruner: DELETE FROM data WHERE expires_in <= $1
+  extra_pruner: DELETE FROM data WHERE id IN (SELECT id FROM data ORDER BY last_used ASC LIMIT (SELECT CAST((count(*) * $1) AS int) FROM data))
+  limited_pruner: DELETE FROM data WHERE id IN (SELECT id FROM data ORDER BY last_used asc limit $1)
+  toucher: UPDATE data SET last_used = unixepoch('now') WHERE id = $1
+  setter: >
+    INSERT into data (id, value, expires_in, last_used) VALUES ($1, $2, unixepoch('now') + $3, unixepoch('now')) on conflict(id) do
+    UPDATE SET value = excluded.value, last_used = excluded.last_used, expires_in = excluded.expires_in
+  inserter: >
+    INSERT into data (id, value, expires_in, last_used) VALUES ($1, $2, unixepoch('now') + $3, unixepoch('now')) on conflict(id) do
+    UPDATE SET value = excluded.value, last_used = excluded.last_used, expires_in = excluded.expires_in WHERE id = $1 and expires_in <= unixepoch('now')
+  finder: SELECT id FROM data WHERE id = $1
+  getter: SELECT id, value, expires_in FROM data WHERE id = $1
+  deleter: delete FROM data WHERE id = $1 returning value
+  incrementer: >
+    INSERT into data (id, value, expires_in, last_used) VALUES ($1, $2, unixepoch('now') + $3, unixepoch('now')) on conflict(id) do
+    UPDATE SET value = cast(value AS int) + cast(excluded.value as int), last_used = excluded.last_used, expires_in = excluded.expires_in
+  counter: SELECT count(*) FROM data
+  sizer: SELECT size.page_size * count.page_count FROM pragma_page_size() AS size, pragma_page_count() AS count