litestack 0.3.0 → 0.4.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: a1125a2cb5a2d9ea277fb9a0774632eb297ed4d322882dd33776d8a8f1c6471e
- data.tar.gz: 9cb917c999850b2bd668e8826a4be4f091ee4e8e0f963b54ed87f5b7a7b060d6
+ metadata.gz: 11ef03890b2883bd21fb443d959774f9932f56e927d2ee2a40a028713526ce9c
+ data.tar.gz: c531925eeffa84973475c14d4d230d8919d5e5ec095e5055f9bab6b0f94519e3
  SHA512:
- metadata.gz: e51460d99e66732e3dcd72e9444f1d739df7ae37804eb9b308cdae0faa7c9c34d4e7554e8f9d0bf91905f7ae3e9b4d7470509a84653da2ee736582b268fe188a
- data.tar.gz: 873b188bd6db924f1e56034c22f5cc465b01d653ac925a2d30de648271b0b64a705894667b90005bd3d457ad318d3ee4736b16a60fa128c34d894172bc9b8e16
+ metadata.gz: 82c0878fb57fa89290c6550ddbf5b393e436883252285dd78083ba7e96f78947aa3fdf3e92d96f3baec6b9b2b0d2edfdc470f4e1c544427f3da8db704179de27
+ data.tar.gz: 358bc249c3f1371714e12f4df0a0d82883e42eec14528f4faaef8a5e611e62af1039d8c00ebdc9d89be1a514a3bfd698ce3f988fa1b4f29eb1cf72fd1ca03b25
data/BENCHMARKS.md CHANGED
@@ -3,6 +3,8 @@
  This is a set of initial (simple) benchmarks, designed to understand the baseline performance of different litestack components against their counterparts.
  These are not real life scenarios and I hope I will be able to produce some interesting ones soon.
 
+ All these benchmarks were run on an 8 core, 16 thread, AMD 5700U based laptop, in a VirtualBox VM
+
  > ![litedb](https://github.com/oldmoe/litestack/blob/master/assets/litedb_logo_teal.png?raw=true)
 
  ### Point Read
@@ -109,5 +111,14 @@ Two scenarios were benchmarked, an empty job and one with a 100ms sleep to simul
 
  Running Litejob with fibers produces much faster results than any threaded solution. Still, threaded Litejob remains ahead of Sidekiq in all scenarios.
 
+ > ![litecable](https://github.com/oldmoe/litestack/blob/master/assets/litecable_logo_teal.png?raw=true)
+
+ A client written using the Iodine web server was used to generate the WebSocket load in an event driven fashion. The Rails application, the Iodine based load generator and the Redis server were all run on the same machine to exclude network overheads (Redis still pays for the TCP stack overhead, though).
 
+ |Requests|Redis Req/sec|Litestack Req/sec|Redis p90 Latency (ms)|Litestack p90 Latency (ms)|Redis p99 Latency (ms)|Litestack p99 Latency (ms)|
+ |-:|-:|-:|-:|-:|-:|-:|
+ |1,000|2611|3058|34|27|153|78|
+ |10,000|3110|5328|81|40|138|122|
+ |100,000|3403|5385|41|36|153|235|
 
+ On average, Litecable is considerably faster than the Redis based version and offers better latencies for over 90% of requests, though Redis usually delivers better p99 latencies.
data/CHANGELOG.md CHANGED
@@ -1,5 +1,15 @@
  ## [Unreleased]
 
+ ## [0.4.1] - 2023-10-11
+
+ - Add missing Litesearch::Model dependency
+
+ ## [0.4.0] - 2023-10-11
+
+ - Introduced Litesearch, dynamic & fast full text search capability for Litedb
+ - ActiveRecord and Sequel integration for Litesearch
+ - Slight improvement to the Sequel Litedb adapter for better Litesearch integration
+
  ## [0.3.0] - 2023-08-13
 
  - Reworked the Litecable thread safety model
data/Gemfile CHANGED
@@ -8,3 +8,5 @@ gemspec
 
 
  gem "rack", "~> 3.0"
+
+ gem "simplecov"
@@ -29,7 +29,7 @@ if env == "a" # threaded
  end
 
  require_relative '../lib/active_job/queue_adapters/litejob_adapter'
- puts Litesupport.scheduler
+ puts Litescheduler.backend
 
  RailsJob.queue_adapter = :litejob
  t = Time.now.to_f
@@ -28,7 +28,7 @@ end
 
  require './uljob.rb'
 
- STDERR.puts "litejob started in #{Litesupport.scheduler} environmnet"
+ STDERR.puts "litejob started in #{Litescheduler.backend} environment"
 
  t = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  bench("enqueuing litejobs", count) do |i|
data/bench/uljob.rb CHANGED
@@ -1,5 +1,5 @@
  require './bench'
- require '../lib/litestack'
+ require '../lib/litestack/litejob'
 
  class MyJob
  include Litejob
@@ -48,7 +48,7 @@ module ActiveSupport
  @cache.prune(limit)
  end
 
- def clear()
+ def clear(options = nil)
  @cache.clear
  end
 
@@ -80,7 +80,7 @@ class Litecable
  end
 
  def create_broadcaster
- Litesupport.spawn do
+ Litescheduler.spawn do
  while @running do
  @messages.acquire do |msgs|
  if msgs.length > 0
@@ -97,7 +97,7 @@ class Litecable
  end
 
  def create_pruner
- Litesupport.spawn do
+ Litescheduler.spawn do
  while @running do
  run_stmt(:prune, @options[:expire_after])
  sleep @options[:expire_after]
@@ -106,7 +106,7 @@ class Litecable
  end
 
  def create_listener
- Litesupport.spawn do
+ Litescheduler.spawn do
  while @running do
  @last_fetched_id ||= (run_stmt(:last_id)[0][0] || 0)
  run_stmt(:fetch, @last_fetched_id, @pid).to_a.each do |msg|
@@ -212,7 +212,7 @@ class Litecache
  end
 
  def spawn_worker
- Litesupport.spawn do
+ Litescheduler.spawn do
  while @running
  @conn.acquire do |cache|
  begin
@@ -4,12 +4,18 @@ require_relative 'litesupport'
  # all measurable components should require the litemetric class
  require_relative 'litemetric'
 
+ # litedb in particular gets access to litesearch
+ require_relative 'litesearch'
+
  # Litedb inherits from the SQLite3::Database class and adds a few initialization options
  class Litedb < ::SQLite3::Database
 
  # add litemetric support
  include Litemetric::Measurable
 
+ # add litesearch support
+ include Litesearch
+
  # override the original initializer to allow for connection configuration
  def initialize(file, options = {}, zfs = nil )
  if block_given?
@@ -68,9 +68,8 @@ module Litejob
  get_jobqueue
  end
 
- def delete(id, queue_name=nil)
- queue_name ||= queue
- get_jobqueue.delete(id, queue)
+ def delete(id)
+ get_jobqueue.delete(id)
  end
 
  def queue
@@ -54,7 +54,7 @@ class Litejobqueue < Litequeue
  # a method that returns a single instance of the job queue
  # for use by Litejob
  def self.jobqueue(options = {})
- @@queue ||= Litesupport.synchronize{self.new(options)}
+ @@queue ||= Litescheduler.synchronize{self.new(options)}
  end
 
  def self.new(options = {})
@@ -95,7 +95,7 @@ class Litejobqueue < Litequeue
  # jobqueue = Litejobqueue.new
  # jobqueue.push(EasyJob, params) # the job will be performed asynchronously
  def push(jobclass, params, delay=0, queue=nil)
- payload = Oj.dump({klass: jobclass, params: params, retries: @options[:retries], queue: queue})
+ payload = Oj.dump({klass: jobclass, params: params, retries: @options[:retries], queue: queue}, mode: :strict)
  res = super(payload, delay, queue)
  capture(:enqueue, queue)
  @logger.info("[litejob]:[ENQ] queue:#{res[1]} class:#{jobclass} job:#{res[0]}")
@@ -103,7 +103,7 @@ class Litejobqueue < Litequeue
  end
 
  def repush(id, job, delay=0, queue=nil)
- res = super(id, Oj.dump(job), delay, queue)
+ res = super(id, Oj.dump(job, mode: :strict), delay, queue)
  capture(:enqueue, queue)
  @logger.info("[litejob]:[ENQ] queue:#{res[0]} class:#{job[:klass]} job:#{id}")
  res
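The Oj changes in this file pin down serialization behavior: `mode: :strict` keeps payloads to JSON-native types, and `symbol_keys: true` (used on load further down) restores symbol keys. A minimal round-trip sketch using the stdlib JSON library as a stand-in for Oj (the `symbolize_names: true` option shown here is JSON's counterpart to Oj's `symbol_keys: true`; the payload values are illustrative):

```ruby
require 'json'

# a job payload shaped like the one Litejobqueue#push builds (illustrative values)
payload = { klass: "MyJob", params: [42, "hello"], retries: 5, queue: "default" }

serialized = JSON.generate(payload) # ~ Oj.dump(payload, mode: :strict)

# default load: string keys, as process_job expects (job["klass"])
job = JSON.parse(serialized)
job["klass"]    # => "MyJob"

# symbolized load, as Litejobqueue#delete now requests with symbol_keys: true
job_sym = JSON.parse(serialized, symbolize_names: true)
job_sym[:klass] # => "MyJob"
```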
@@ -121,7 +121,7 @@ class Litejobqueue < Litequeue
  def delete(id)
  job = super(id)
  @logger.info("[litejob]:[DEL] job: #{job}")
- job = Oj.load(job[0]) if job
+ job = Oj.load(job[0], symbol_keys: true) if job
  job
  end
 
@@ -163,17 +163,17 @@ class Litejobqueue < Litequeue
  end
 
  def job_started
- Litesupport.synchronize(@mutex){@jobs_in_flight += 1}
+ Litescheduler.synchronize(@mutex){@jobs_in_flight += 1}
  end
 
  def job_finished
- Litesupport.synchronize(@mutex){@jobs_in_flight -= 1}
+ Litescheduler.synchronize(@mutex){@jobs_in_flight -= 1}
  end
 
  # optionally run a job in its own context
  def schedule(spawn = false, &block)
  if spawn
- Litesupport.spawn &block
+ Litescheduler.spawn &block
  else
  yield
  end
@@ -181,50 +181,23 @@ class Litejobqueue < Litequeue
 
  # create a worker according to environment
  def create_worker
- Litesupport.spawn do
+ Litescheduler.spawn do
  worker_sleep_index = 0
  while @running do
  processed = 0
- @queues.each do |level| # iterate through the levels
- level[1].each do |q| # iterate through the queues in the level
- index = 0
- max = level[0]
- while index < max && payload = pop(q[0], 1) # fearlessly use the same queue object
- capture(:dequeue, q[0])
+ @queues.each do |priority, queues| # iterate through the levels
+ queues.each do |queue, spawns| # iterate through the queues in the level
+ batched = 0
+
+ while (batched < priority) && (payload = pop(queue, 1)) # fearlessly use the same queue object
+ capture(:dequeue, queue)
  processed += 1
- index += 1
- begin
- id, job = payload[0], payload[1]
- job = Oj.load(job)
- @logger.info "[litejob]:[DEQ] queue:#{q[0]} class:#{job[:klass]} job:#{id}"
- klass = eval(job[:klass])
- schedule(q[1]) do # run the job in a new context
- job_started #(Litesupport.current_context)
- begin
- measure(:perform, q[0]){ klass.new.perform(*job[:params]) }
- @logger.info "[litejob]:[END] queue:#{q[0]} class:#{job[:klass]} job:#{id}"
- rescue Exception => e
- # we can retry the failed job now
- capture(:fail, q[0])
- if job[:retries] == 0
- @logger.error "[litejob]:[ERR] queue:#{q[0]} class:#{job[:klass]} job:#{id} failed with #{e}:#{e.message}, retries exhausted, moved to _dead queue"
- repush(id, job, @options[:dead_job_retention], '_dead')
- else
- capture(:retry, q[0])
- retry_delay = @options[:retry_delay_multiplier].pow(@options[:retries] - job[:retries]) * @options[:retry_delay]
- job[:retries] -= 1
- @logger.error "[litejob]:[ERR] queue:#{q[0]} class:#{job[:klass]} job:#{id} failed with #{e}:#{e.message}, retrying in #{retry_delay} seconds"
- repush(id, job, retry_delay, q[0])
- end
- end
- job_finished #(Litesupport.current_context)
- end
- rescue Exception => e
- # this is an error in the extraction of job info, retrying here will not be useful
- @logger.error "[litejob]:[ERR] failed to extract job info for: #{payload} with #{e}:#{e.message}"
- job_finished #(Litesupport.current_context)
- end
- Litesupport.switch #give other contexts a chance to run here
+ batched += 1
+
+ id, serialized_job = payload
+ process_job(queue, id, serialized_job, spawns)
+
+ Litescheduler.switch # give other contexts a chance to run here
  end
  end
  end
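The reworked loop destructures `@queues` directly. Judging from the old `level[0]`/`q[0]`/`q[1]` accesses, the structure maps a priority level to `[queue_name, spawn_flag]` pairs, with the priority doubling as the per-pass batch size. A self-contained sketch of that iteration, with a hypothetical `@queues` layout and the `pop` call replaced by simple collection:

```ruby
# hypothetical layout: priority => [[queue_name, spawn_flag], ...]
queues = {
  5 => [["default", false]],
  1 => [["low", false], ["mail", true]]
}

dequeued = []
queues.each do |priority, qs|   # iterate through the levels
  qs.each do |queue, spawns|    # iterate through the queues in the level
    batched = 0
    while batched < priority    # drain at most `priority` jobs per pass
      batched += 1
      dequeued << [queue, spawns]
    end
  end
end

dequeued.length # => 7 (5 from "default", 1 each from "low" and "mail")
```

Higher-priority queues thus get more dequeue attempts per scheduler pass, without starving the low-priority ones.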
@@ -240,7 +213,7 @@ class Litejobqueue < Litequeue
 
  # create a gc for dead jobs
  def create_garbage_collector
- Litesupport.spawn do
+ Litescheduler.spawn do
  while @running do
  while jobs = pop('_dead', 100)
  if jobs[0].is_a? Array
@@ -254,4 +227,34 @@ class Litejobqueue < Litequeue
  end
  end
 
+ def process_job(queue, id, serialized_job, spawns)
+ job = Oj.load(serialized_job)
+ @logger.info "[litejob]:[DEQ] queue:#{queue} class:#{job["klass"]} job:#{id}"
+ klass = Object.const_get(job["klass"])
+ schedule(spawns) do # run the job in a new context
+ job_started # (Litesupport.current_context)
+ begin
+ measure(:perform, queue) { klass.new.perform(*job["params"]) }
+ @logger.info "[litejob]:[END] queue:#{queue} class:#{job["klass"]} job:#{id}"
+ rescue Exception => e # standard:disable Lint/RescueException
+ # we can retry the failed job now
+ capture(:fail, queue)
+ if job["retries"] == 0
+ @logger.error "[litejob]:[ERR] queue:#{queue} class:#{job["klass"]} job:#{id} failed with #{e}:#{e.message}, retries exhausted, moved to _dead queue"
+ repush(id, job, @options[:dead_job_retention], "_dead")
+ else
+ capture(:retry, queue)
+ retry_delay = @options[:retry_delay_multiplier].pow(@options[:retries] - job["retries"]) * @options[:retry_delay]
+ job["retries"] -= 1
+ @logger.error "[litejob]:[ERR] queue:#{queue} class:#{job["klass"]} job:#{id} failed with #{e}:#{e.message}, retrying in #{retry_delay} seconds"
+ repush(id, job, retry_delay, queue)
+ end
+ end
+ job_finished # (Litesupport.current_context)
+ end
+ rescue Exception => e # standard:disable Lint/RescueException
+ # this is an error in the extraction of job info, retrying here will not be useful
+ @logger.error "[litejob]:[ERR] failed to extract job info for: #{serialized_job} with #{e}:#{e.message}"
+ job_finished # (Litesupport.current_context)
+ end
  end
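The retry delay in `process_job` grows exponentially: `retry_delay_multiplier` raised to the number of attempts already consumed, times the base `retry_delay`. A sketch of the resulting schedule, using hypothetical option values (the real defaults live in Litejobqueue's options):

```ruby
# hypothetical option values for illustration
options = { retries: 3, retry_delay: 5, retry_delay_multiplier: 10 }

delays = []
retries_left = options[:retries]
while retries_left > 0
  # attempts already consumed: options[:retries] - retries_left
  delays << options[:retry_delay_multiplier].pow(options[:retries] - retries_left) * options[:retry_delay]
  retries_left -= 1
end

delays # => [5, 50, 500] — exponential backoff in seconds
```

Once `retries_left` reaches zero, the job is moved to the `_dead` queue instead of being rescheduled.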
@@ -179,7 +179,7 @@ class Litemetric
  end
 
  def create_flusher
- Litesupport.spawn do
+ Litescheduler.spawn do
  while @running do
  sleep @options[:flush_interval]
  flush
@@ -188,7 +188,7 @@ class Litemetric
  end
 
  def create_summarizer
- Litesupport.spawn do
+ Litescheduler.spawn do
  while @running do
  sleep @options[:summarize_interval]
  summarize
@@ -211,7 +211,7 @@ class Litemetric
  end
 
  def create_snapshotter
- Litesupport.spawn do
+ Litescheduler.spawn do
  while @running do
  sleep @litemetric.options[:snapshot_interval]
  capture_snapshot
@@ -0,0 +1,84 @@
+ # frozen_string_literal: true
+
+ module Litescheduler
+ # cache the scheduler we are running in
+ # it is an error to change the scheduler for a process
+ # or for a child forked from that process
+ def self.backend
+ @backend ||= if Fiber.scheduler
+ :fiber
+ elsif defined? Polyphony
+ :polyphony
+ elsif defined? Iodine
+ :iodine
+ else
+ :threaded
+ end
+ end
+
+ # spawn a new execution context
+ def self.spawn(&block)
+ if backend == :fiber
+ Fiber.schedule(&block)
+ elsif backend == :polyphony
+ spin(&block)
+ elsif backend == :threaded or backend == :iodine
+ Thread.new(&block)
+ end
+ # we should never reach here
+ end
+
+ def self.storage
+ if backend == :fiber || backend == :polyphony
+ Fiber.current.storage
+ else
+ Thread.current
+ end
+ end
+
+ def self.current
+ if backend == :fiber || backend == :polyphony
+ Fiber.current
+ else
+ Thread.current
+ end
+ end
+
+ # switch the execution context to allow others to run
+ def self.switch
+ if backend == :fiber
+ Fiber.scheduler.yield
+ true
+ elsif backend == :polyphony
+ Fiber.current.schedule
+ Thread.current.switch_fiber
+ true
+ else
+ # Thread.pass
+ false
+ end
+ end
+
+ # bold assumption, we will only synchronize threaded code!
+ # If some code explicitly wants to synchronize a fiber
+ # they must send (true) as a parameter to this method
+ # else it is a no-op for fibers
+ def self.synchronize(fiber_sync = false, &block)
+ if backend == :fiber or backend == :polyphony
+ yield # do nothing, just run the block as is
+ else
+ self.mutex.synchronize(&block)
+ end
+ end
+
+ def self.max_contexts
+ return 50 if backend == :fiber || backend == :polyphony
+ 5
+ end
+
+ # mutex initialization
+ def self.mutex
+ # a single mutex per process (is that ok?)
+ @@mutex ||= Mutex.new
+ end
+ end
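In a plain Ruby process with no fiber scheduler installed and neither Polyphony nor Iodine loaded, `backend` resolves to `:threaded` and `spawn` falls through to `Thread.new`. A trimmed, self-contained sketch of the same detection cascade (`MiniScheduler` is a hypothetical stand-in, not part of litestack):

```ruby
# condensed re-implementation of Litescheduler's backend detection, for illustration
module MiniScheduler
  def self.backend
    @backend ||= if Fiber.scheduler        # set via Fiber.set_scheduler (Ruby 3.0+)
      :fiber
    elsif defined? Polyphony
      :polyphony
    elsif defined? Iodine
      :iodine
    else
      :threaded
    end
  end

  def self.spawn(&block)
    backend == :fiber ? Fiber.schedule(&block) : Thread.new(&block)
  end
end

MiniScheduler.backend              # => :threaded in a plain Ruby process
t = MiniScheduler.spawn { 1 + 1 }  # spawn falls back to Thread.new here
t.join
```

Because the result is memoized in `@backend`, the backend is fixed the first time it is queried, which is why changing schedulers mid-process (or after a fork) is called out as an error in the comments above.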
@@ -0,0 +1,230 @@
+ require 'oj'
+ require_relative './schema.rb'
+
+ class Litesearch::Index
+
+ DEFAULT_SEARCH_OPTIONS = {limit: 25, offset: 0}
+
+ def initialize(db, name)
+ @db = db # this index instance will always belong to this db instance
+ @stmts = {}
+ name = name.to_s.downcase.to_sym
+ # if in the db then put in cache and return if no schema is given
+ # if a schema is given then compare the new and the existing schema
+ # if they are the same put in cache and return
+ # if they differ only in weights then set the new weights, update the schema, put in cache and return
+ # if they differ in fields (added/removed/renamed) then update the structure, then rebuild if auto-rebuild is on
+ # if they differ in tokenizer then rebuild if auto-rebuild is on (error otherwise)
+ # if they differ in both then update the structure and rebuild if auto-rebuild is on (error otherwise)
+ load_index(name) if exists?(name)
+
+ if block_given?
+ schema = Litesearch::Schema.new
+ schema.schema[:name] = name
+ yield schema
+ schema.post_init
+ # now that we have a schema object we need to check if we need to create or modify an existing index
+ @db.transaction(:immediate) do
+ if exists?(name)
+ load_index(name)
+ do_modify(schema)
+ else
+ do_create(schema)
+ end
+ prepare_statements
+ end
+ else
+ if exists?(name)
+ # an index already exists, load it from the database and return the index instance to the caller
+ load_index(name)
+ prepare_statements
+ else
+ raise "index does not exist and no schema was supplied"
+ end
+ end
+ end
+
+ def load_index(name)
+ # we cannot use get_config_value here since the schema object is not created yet, should we allow something here?
+ @schema = Litesearch::Schema.new(Oj.load(@db.get_first_value("SELECT v from #{name}_config where k = ?", :litesearch_schema.to_s))) rescue nil
+ raise "index configuration not found, either corrupted or not a litesearch index!" if @schema.nil?
+ self
+ end
+
+ def modify
+ schema = Litesearch::Schema.new
+ yield schema
+ schema.schema[:name] = @schema.schema[:name]
+ do_modify(schema)
+ end
+
+ def rebuild!
+ @db.transaction(:immediate) do
+ do_rebuild
+ end
+ end
+
+ def add(document)
+ @stmts[:insert].execute!(document)
+ return @db.last_insert_row_id
+ end
+
+ def remove(id)
+ @stmts[:delete].execute!(id)
+ end
+
+ def count(term = nil)
+ if term
+ @stmts[:count].execute!(term)[0][0]
+ else
+ @stmts[:count_all].execute!()[0][0]
+ end
+ end
+
+ # search options include
+ # limit: how many records to return
+ # offset: start from which record
+ def search(term, options = {})
+ result = []
+ options = DEFAULT_SEARCH_OPTIONS.merge(options)
+ rs = @stmts[:search].execute(term, options[:limit], options[:offset])
+ if @db.results_as_hash
+ rs.each_hash do |hash|
+ result << hash
+ end
+ else
+ result = rs.to_a
+ end
+ result
+ end
+
+ def clear!
+ @stmts[:delete_all].execute!
+ end
+
+ def drop!
+ if @schema.get(:type) == :backed
+ @db.execute_batch(@schema.sql_for(:drop_primary_triggers))
+ if secondary_triggers_sql = @schema.sql_for(:create_secondary_triggers)
+ @db.execute_batch(@schema.sql_for(:drop_secondary_triggers))
+ end
+ end
+ @db.execute(@schema.sql_for(:drop))
+ end
+
+
+ private
+
+ def exists?(name)
+ @db.get_first_value("SELECT count(*) FROM SQLITE_MASTER WHERE name = ? AND type = 'table' AND (sql like '%fts5%' OR sql like '%FTS5%')", name.to_s) == 1
+ end
+
+ def prepare_statements
+ stmt_names = [:insert, :delete, :delete_all, :drop, :count, :count_all, :search]
+ stmt_names.each do |stmt_name|
+ @stmts[stmt_name] = @db.prepare(@schema.sql_for(stmt_name))
+ end
+ end
+
+ def do_create(schema)
+ @schema = schema
+ @schema.clean
+ # create index
+ @db.execute(schema.sql_for(:create_index, true))
+ # adjust ranking function
+ @db.execute(schema.sql_for(:ranks, true))
+ # create triggers (if any)
+ if @schema.get(:type) == :backed
+ @db.execute_batch(@schema.sql_for(:create_primary_triggers))
+ if secondary_triggers_sql = @schema.sql_for(:create_secondary_triggers)
+ @db.execute_batch(secondary_triggers_sql)
+ end
+ @db.execute(@schema.sql_for(:rebuild)) if @schema.get(:rebuild_on_create)
+ end
+ set_config_value(:litesearch_schema, @schema.schema)
+ end
+
+ def do_modify(new_schema)
+ changes = @schema.compare(new_schema)
+ # ensure the new schema maintains field order
+ new_schema.order_fields(@schema)
+ # with the changes object decide what needs to be done to the schema
+ requires_schema_change = false
+ requires_trigger_change = false
+ requires_rebuild = false
+ if changes[:fields] || changes[:table] || changes[:tokenizer] || changes[:filter_column] || changes[:removed_fields_count] > 0 # any change here will require a schema change
+ requires_schema_change = true
+ # only a change in tokenizer
+ requires_rebuild = changes[:tokenizer] || new_schema.get(:rebuild_on_modify)
+ requires_trigger_change = (changes[:table] || changes[:fields] || changes[:filter_column]) && @schema.get(:type) == :backed
+ end
+ if requires_schema_change
+ # 1. enable schema editing
+ @db.execute("PRAGMA WRITABLE_SCHEMA = TRUE")
+ # 2. update the index sql
+ @db.execute(new_schema.sql_for(:update_index), new_schema.sql_for(:create_index))
+ # 3. update the content table sql (if it exists)
+ @db.execute(new_schema.sql_for(:update_content_table), new_schema.sql_for(:create_content_table, new_schema.schema[:fields].count))
+ # adjust shadow tables
+ @db.execute(new_schema.sql_for(:expand_data), changes[:extra_fields_count])
+ @db.execute(new_schema.sql_for(:expand_docsize), changes[:extra_fields_count])
+ @db.execute("PRAGMA WRITABLE_SCHEMA = RESET")
+ # need to reprepare statements
+ end
+ if requires_trigger_change
+ @db.execute_batch(new_schema.sql_for(:drop_primary_triggers))
+ @db.execute_batch(new_schema.sql_for(:create_primary_triggers))
+ if secondary_triggers_sql = new_schema.sql_for(:create_secondary_triggers)
+ @db.execute_batch(new_schema.sql_for(:drop_secondary_triggers))
+ @db.execute_batch(secondary_triggers_sql)
+ end
+ end
+ if changes[:fields] || changes[:table] || changes[:tokenizer] || changes[:weights] || changes[:filter_column]
+ @schema = new_schema
+ set_config_value(:litesearch_schema, @schema.schema)
+ prepare_statements
+ # save_schema
+ end
+ do_rebuild if requires_rebuild
+ # update the weights if they changed
+ @db.execute(@schema.sql_for(:ranks)) if changes[:weights]
+ end
+
+ def do_rebuild
+ # remove any zero weight columns
+ if @schema.get(:type) == :backed
+ @db.execute_batch(@schema.sql_for(:drop_primary_triggers))
+ if secondary_triggers_sql = @schema.sql_for(:create_secondary_triggers)
+ @db.execute_batch(@schema.sql_for(:drop_secondary_triggers))
+ end
+ @db.execute(@schema.sql_for(:drop))
+ @db.execute(@schema.sql_for(:create_index, true))
+ @db.execute_batch(@schema.sql_for(:create_primary_triggers))
+ @db.execute_batch(secondary_triggers_sql) if secondary_triggers_sql
+ @db.execute(@schema.sql_for(:rebuild))
+ elsif @schema.get(:type) == :standalone
+ removables = []
+ @schema.get(:fields).each_with_index{|f, i| removables << [f[0], i] if f[1][:weight] == 0 }
+ removables.each do |col|
+ @db.execute(@schema.sql_for(:drop_content_col, col[1]))
+ @schema.get(:fields).delete(col[0])
+ end
+ @db.execute("PRAGMA WRITABLE_SCHEMA = TRUE")
+ @db.execute(@schema.sql_for(:update_index), @schema.sql_for(:create_index, true))
+ @db.execute(@schema.sql_for(:update_content_table), @schema.sql_for(:create_content_table, @schema.schema[:fields].count))
+ @db.execute("PRAGMA WRITABLE_SCHEMA = RESET")
+ @db.execute(@schema.sql_for(:rebuild))
+ end
+ set_config_value(:litesearch_schema, @schema.schema)
+ @db.execute(@schema.sql_for(:ranks, true))
+ end
+
+ def get_config_value(key)
+ Oj.load(@db.get_first_value(@schema.sql_for(:get_config_value), key.to_s)) # rescue nil
+ end
+
+ def set_config_value(key, value)
+ @db.execute(@schema.sql_for(:set_config_value), key.to_s, Oj.dump(value))
+ end
+
+ end
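`Index#search` normalizes pagination by merging caller overrides over `DEFAULT_SEARCH_OPTIONS` before binding them to the prepared search statement. A quick standalone sketch of that defaults pattern (the constant values mirror the class above; `normalize` is a hypothetical helper, not part of the litesearch API):

```ruby
# defaults as declared in Litesearch::Index
DEFAULT_SEARCH_OPTIONS = { limit: 25, offset: 0 }

# hypothetical helper showing the merge step inside Index#search
def normalize(options = {})
  DEFAULT_SEARCH_OPTIONS.merge(options)
end

normalize                        # => { limit: 25, offset: 0 }
normalize(limit: 10)             # => { limit: 10, offset: 0 }
normalize(offset: 50, limit: 5)  # => { limit: 5, offset: 50 }
```

Because `Hash#merge` returns a new hash, the shared default constant is never mutated by a single search call.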