tantiny 0.2.2

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: e082705a8a556a2bf8ebfd08445ea2917e308a1fdd80e2e2dcf75edaa71bc135
+   data.tar.gz: 2098370ed2edcf0aa533abf3327ba4513f76fdf1119ba6f007de68d162510897
+ SHA512:
+   metadata.gz: 76171526e6677a13874d8cb2b915496eb844ac120404f220440552d00217b059f3ac55f329318ec020249ec4106a2977ee7251a9c526c0fa237077be8401c937
+   data.tar.gz: 4425ba8f9a882b9e6b2b9eb7c4d0d8aa4a87342249fd0588fd2d82a6f0c8f709fcc2ee5e22e63b66b84256c398decb2a7a207414636cf6d588c64a63c85a496f
data/CHANGELOG.md ADDED
@@ -0,0 +1,20 @@
+ # Changelog
+
+ ### [0.2.2](https://github.com/baygeldin/tantiny/compare/v0.2.1...v0.2.2) (2022-03-07)
+
+
+ ### Bug Fixes
+
+ * Fix native extension initialization ([78c7495](https://github.com/baygeldin/tantiny/commit/78c74951a4ade684395f756467aa583aad1f90a8))
+ * Include transpiled files in the gem build ([5dba8f6](https://github.com/baygeldin/tantiny/commit/5dba8f6a75f36eb27756c9e8d8f7f3872d73bf97))
+
+ ### [0.2.1](https://github.com/baygeldin/tantiny/compare/v0.2.0...v0.2.1) (2022-03-07)
+
+
+ ### Features
+
+ * Initial release ([f10ed87](https://github.com/baygeldin/tantiny/commit/f10ed878e0b781580d5a04d854c44e6b868621b1))
+
+ ### [0.2.0] (2022-03-07)
+
+ - Dummy release
data/Cargo.toml ADDED
@@ -0,0 +1,20 @@
+ [package]
+ name = "tantiny"
+ version = "0.2.2" # {x-release-please-version}
+ edition = "2021"
+ authors = ["Alexander Baygeldin"]
+ repository = "https://github.com/baygeldin/tantiny"
+
+ [lib]
+ crate-type = ["cdylib"]
+
+ [dependencies]
+ rutie = "0.8"
+ tantivy = "0.16"
+ lazy_static = "1.4"
+ paste = "1.0"
+
+ [package.metadata.thermite]
+ github_releases = true
+ github_release_type = "latest"
+ git_tag_regex = "^v(\\d+\\.\\d+\\.\\d+)$"
data/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2022 Alexander Baygeldin
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,309 @@
+ # Tantiny
+
+ Need a fast full-text search for your Ruby script, but Solr and Elasticsearch are overkill? 😏
+
+ You're in the right place. **Tantiny** is a minimalistic full-text search library for Ruby based on [Tantivy](https://github.com/quickwit-oss/tantivy) (an awesome alternative to Apache Lucene written in Rust). Its greatest advantage is that you don't need *anything* else to make it work (no separate server or process): it's purely embeddable. So, if your task at hand requires full-text search, but a full-blown distributed search engine would be overkill, Tantiny is perfect for you.
+
+ The main philosophy is to provide low-level access to Tantivy's inverted index, but with a nice Ruby-esque API, sensible defaults, and additional functionality sprinkled on top (so Tantiny is not exactly bindings to Tantivy, but it tries to stay close).
+
+ Take a look at the most basic example:
+
+ ```ruby
+ index = Tantiny::Index.new("/path/to/index") { text :description }
+
+ index << { id: 1, description: "Hello World!" }
+ index << { id: 2, description: "What's up?" }
+ index << { id: 3, description: "Goodbye World!" }
+
+ index.commit
+ index.reload
+
+ index.search("world") # 1, 3
+ ```
+
+ ## Installation
+
+ Add this line to your application's Gemfile:
+
+ ```ruby
+ gem 'tantiny'
+ ```
+
+ And then execute:
+
+     $ bundle install
+
+ Or install it yourself as:
+
+     $ gem install tantiny
+
+ You don't **have** to have Rust installed on your system, since Tantiny will try to download the pre-compiled binaries hosted on GitHub releases during installation. However, if no pre-compiled binaries are found for your system (a combination of platform, architecture, and Ruby version), you will need to [install Rust](https://www.rust-lang.org/tools/install) first.
+
+ ## Defining the index
+
+ You have to specify a path to where the index will be stored and a block that defines the schema:
+
+ ```ruby
+ Tantiny::Index.new "/tmp/index" do
+   id :imdb_id
+   facet :category
+   string :title
+   text :description
+   integer :duration
+   double :rating
+   date :release_date
+ end
+ ```
+
+ Here are the descriptions for every field type:
+
+ | Type | Description |
+ | --- | --- |
+ | id | Specifies where documents' ids are stored (defaults to `:id`). |
+ | facet | Fields with values like `/animals/birds` (i.e. hierarchical categories). |
+ | string | Fields with text that is **not** tokenized. |
+ | text | Fields with text that is tokenized by the specified tokenizer. |
+ | integer | Fields with integer values. |
+ | double | Fields with float values. |
+ | date | Fields with `DateTime` values or anything that converts to it. |
+
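A note on `date` fields: judging by the gem's `Helpers.timestamp` (included later in this listing), date values are serialized via `to_datetime.iso8601`, so anything responding to `to_datetime` should work. A small sketch of that conversion:

```ruby
require "date"

# Any value responding to #to_datetime can back a `date` field;
# this mirrors the serialization done by Tantiny::Helpers.timestamp.
release_date = Date.parse("March 18, 1959")
release_date.to_datetime.iso8601 # => "1959-03-18T00:00:00+00:00"
```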
+ ## Managing documents
+
+ You can feed the index any kind of object that has the methods specified in your schema, but plain hashes also work:
+
+ ```ruby
+ rio_bravo = OpenStruct.new(
+   imdb_id: "tt0053221",
+   category: "/western/US",
+   title: "Rio Bravo",
+   description: "A small-town sheriff enlists a drunk, a kid and an old man to help him fight off a ruthless cattle baron.",
+   duration: 141,
+   rating: 8.0,
+   release_date: Date.parse("March 18, 1959")
+ )
+
+ hanabi = {
+   imdb_id: "tt0119250",
+   category: "/crime/Japan",
+   title: "Hana-bi",
+   description: "Nishi leaves the police in the face of harrowing personal and professional difficulties. Spiraling into depression, he makes questionable decisions.",
+   duration: 103,
+   rating: 7.7,
+   release_date: Date.parse("December 1, 1998")
+ }
+
+ brother = {
+   imdb_id: "tt0118767",
+   category: "/crime/Russia",
+   title: "Brother",
+   description: "An ex-soldier with a personal honor code enters the family crime business in St. Petersburg, Russia.",
+   duration: 99,
+   rating: 7.9,
+   release_date: Date.parse("December 12, 1997")
+ }
+
+ index << rio_bravo
+ index << hanabi
+ index << brother
+ ```
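Hashes and arbitrary objects can be mixed because field values are read with `[]` for hashes and a method call for everything else. This mirrors the private `resolve` helper from `Tantiny::Index`, included later in this listing:

```ruby
require "ostruct"

# Mirrors Index#resolve from the gem source: hashes are read with [],
# anything else with a method call via send.
def resolve(document, field)
  document.is_a?(Hash) ? document[field] : document.send(field)
end

resolve({ title: "Brother" }, :title)             # => "Brother"
resolve(OpenStruct.new(title: "Hana-bi"), :title) # => "Hana-bi"
```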
+
+ In order to update a document, just add it again (as long as the id is the same):
+
+ ```ruby
+ rio_bravo.rating = 10.0
+ index << rio_bravo
+ ```
+
+ You can also delete it if you want:
+
+ ```ruby
+ index.delete(rio_bravo.imdb_id)
+ ```
+
+ After that, you need to commit the index for the changes to take effect:
+
+ ```ruby
+ index.commit
+ ```
+
+ ## Searching
+
+ Make sure that your index is up-to-date by reloading it first:
+
+ ```ruby
+ index.reload
+ ```
+
+ And search it (finally!):
+
+ ```ruby
+ index.search("a drunk, a kid, and an old man")
+ ```
+
+ By default it will return the ids of the 10 best matching documents, but you can customize it:
+
+ ```ruby
+ index.search("a drunk, a kid, and an old man", limit: 100)
+ ```
+
+ You may wonder how exactly it conducts the search. The default behavior is to run a `smart_query` search (see below for details) over all `text` fields defined in your schema, so you can pass any parameters that `smart_query` accepts right here:
+
+ ```ruby
+ index.search("a dlunk, a kib, and an olt mab", fuzzy_distance: 1)
+ ```
+
+ However, you can also compose your own query out of basic building blocks:
+
+ ```ruby
+ popular_movies = index.range_query(:rating, 8.0..10.0)
+ about_sheriffs = index.term_query(:description, "sheriff")
+ crime_movies = index.facet_query(:category, "/crime")
+ long_ass_movies = index.range_query(:duration, 180..9999)
+ something_flashy = index.smart_query(:description, "bourgeoisie")
+
+ index.search((popular_movies & about_sheriffs) | (crime_movies & !long_ass_movies) | something_flashy)
+ ```
+
+ I know, weird taste! But pretty cool, huh? Take a look at all the available queries below.
+
+ ### Supported queries
+
+ | Query | Behavior |
+ | --- | --- |
+ | all_query | Returns all indexed documents. |
+ | empty_query | Returns exactly nothing (used internally). |
+ | term_query | Documents that contain the specified term. |
+ | fuzzy_term_query | Documents that contain the specified term within a Levenshtein distance. |
+ | phrase_query | Documents that contain the specified sequence of terms. |
+ | regex_query | Documents that contain a term that matches the specified regex. |
+ | prefix_query | Documents that contain a term with the specified prefix. |
+ | range_query | Documents with an `integer`, `double` or `date` field within the specified range. |
+ | facet_query | Documents that belong to the specified category. |
+ | smart_query | A combination of `term_query`, `fuzzy_term_query` and `prefix_query`. |
+
+ Take a look at the [signatures file](https://github.com/baygeldin/tantiny/blob/main/sig/tantiny/query.rbs) to see what parameters each query accepts.
+
+ ### Searching on multiple fields
+
+ All queries can search on multiple fields (except for `facet_query`, because it doesn't make sense there).
+
+ So, the following query:
+
+ ```ruby
+ index.term_query(%i[title description], "hello")
+ ```
+
+ Is equivalent to:
+
+ ```ruby
+ index.term_query(:title, "hello") | index.term_query(:description, "hello")
+ ```
+
+ ### Boosting queries
+
+ All queries support the `boost` parameter that allows you to bump a document's position in the search results:
+
+ ```ruby
+ about_cowboys = index.term_query(:description, "cowboy", boost: 2.0)
+ about_samurai = index.term_query(:description, "samurai") # sorry, Musashi...
+
+ index.search(about_cowboys | about_samurai)
+ ```
+
+ ### `smart_query` behavior
+
+ The `smart_query` search extracts terms from your query string using the respective field tokenizers and searches the index for documents that contain those terms via `term_query`. If the `fuzzy_distance` parameter is specified, it uses `fuzzy_term_query` instead. It also allows the last term to be unfinished by adding a `prefix_query` for it.
+
+ So, the following query:
+
+ ```ruby
+ index.smart_query(%i[en_text ru_text], "dollars рубли eur", fuzzy_distance: 1)
+ ```
+
+ Is equivalent to:
+
+ ```ruby
+ t1_en = index.fuzzy_term_query(:en_text, "dollar")
+ t2_en = index.fuzzy_term_query(:en_text, "рубли")
+ t3_en = index.fuzzy_term_query(:en_text, "eur")
+ t3_prefix_en = index.prefix_query(:en_text, "eur")
+
+ t1_ru = index.fuzzy_term_query(:ru_text, "dollars")
+ t2_ru = index.fuzzy_term_query(:ru_text, "рубл")
+ t3_ru = index.fuzzy_term_query(:ru_text, "eur")
+ t3_prefix_ru = index.prefix_query(:ru_text, "eur")
+
+ (t1_en & t2_en & (t3_en | t3_prefix_en)) | (t1_ru & t2_ru & (t3_ru | t3_prefix_ru))
+ ```
+
+ Notice how the words "dollars" and "рубли" are stemmed differently depending on the field being searched. This assumes we have `en_text` and `ru_text` fields in our schema that use English and Russian stemmer tokenizers, respectively.
+
+ ### About `regex_query`
+
+ The `regex_query` accepts a regex pattern, but it has to be a [Rust regex](https://docs.rs/regex/latest/regex/#syntax), not a Ruby `Regexp`. So, instead of `index.regex_query(:description, /hel[lp]/)` you need to use `index.regex_query(:description, "hel[lp]")`. As a side note, the `regex_query` is pretty fast because it uses the [fst crate](https://github.com/BurntSushi/fst) internally.
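For quick local prototyping you can wrap the same pattern string in a Ruby `Regexp` before handing it to `regex_query` (a sketch; simple constructs like character classes behave the same in both engines, though the two syntaxes are not fully identical):

```ruby
pattern = "hel[lp]" # the string form regex_query expects (Rust regex syntax)

# Simple classes like [lp] mean the same thing in Ruby's regex engine,
# so the string can be sanity-checked locally before querying the index.
ruby_re = Regexp.new(pattern)
%w[hello help held].grep(ruby_re) # => ["hello", "help"]
```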
+
+ ## Tokenizers
+
+ So, we've mentioned tokenizers more than once already. What are they?
+
+ Tokenizers are what Tantivy uses to chop your text into terms when building the inverted index; you then search the index by those terms. It's an important concept to understand so that you don't get confused when `index.term_query(:description, "Hello")` returns nothing: `Hello` isn't a term, but `hello` is. You have to extract the terms from a query before searching the index, and currently only `smart_query` does that for you. Also, the only field type that is tokenized is `text`, so for `string` fields you should use the exact value (i.e. `index.term_query(:title, "Hello")`).
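As an illustration only (this is not Tantiny's actual implementation), simple tokenization boils down to lowercasing and splitting on non-alphanumeric characters, which is why `Hello` never ends up in the index as a term:

```ruby
# Toy sketch of simple tokenization (illustrative, not the gem's code):
# lowercase, then split on anything that isn't a letter or digit.
def toy_terms(text)
  text.downcase.split(/[^[:alnum:]]+/).reject(&:empty?)
end

toy_terms("Hello World!") # => ["hello", "world"]
```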
+
+ ### Specifying the tokenizer
+
+ By default the `simple` tokenizer is used, but you can specify the desired tokenizer globally via index options or locally via field-specific options:
+
+ ```ruby
+ en_stemmer = Tantiny::Tokenizer.new(:stemmer)
+ ru_stemmer = Tantiny::Tokenizer.new(:stemmer, language: :ru)
+
+ Tantiny::Index.new "/tmp/index", tokenizer: en_stemmer do
+   text :description_en
+   text :description_ru, tokenizer: ru_stemmer
+ end
+ ```
+
+ ### Simple tokenizer
+
+ The simple tokenizer chops the text on punctuation and whitespace, removes long tokens, and lowercases the text.
+
+ ```ruby
+ tokenizer = Tantiny::Tokenizer.new(:simple)
+ tokenizer.terms("Hello World!") # ["hello", "world"]
+ ```
+
+ ### Stemmer tokenizer
+
+ The stemmer tokenizer is exactly like the simple tokenizer, but with additional stemming according to the specified language (defaults to English).
+
+ ```ruby
+ tokenizer = Tantiny::Tokenizer.new(:stemmer, language: :ru)
+ tokenizer.terms("Привет миру сему!") # ["привет", "мир", "сем"]
+ ```
+
+ Take a look at the [source](https://github.com/baygeldin/tantiny/blob/main/src/helpers.rs) to see which languages are supported.
+
+ ### Ngram tokenizer
+
+ The ngram tokenizer chops your text into ngrams of the specified size.
+
+ ```ruby
+ tokenizer = Tantiny::Tokenizer.new(:ngram, min: 5, max: 10, prefix_only: true)
+ tokenizer.terms("Morrowind") # ["Morro", "Morrow", "Morrowi", "Morrowin", "Morrowind"]
+ ```
+
+ ## Retrieving documents
+
+ You may have noticed that the `search` method returns only document ids. This is by design: the documents themselves are **not** stored in the index. Tantiny is a minimalistic library, so it tries to keep things simple. If you need to retrieve a full document, use a key-value store like Redis alongside it.
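A minimal sketch of that pattern, with a plain Hash standing in for Redis (the ids and titles are the movies from the examples above; the `ids` list is a stand-in for what a search might return):

```ruby
# A plain Hash stands in for Redis or any other key-value store,
# keyed by the same ids that were fed into the index.
documents = {
  "tt0053221" => { title: "Rio Bravo" },
  "tt0119250" => { title: "Hana-bi" },
  "tt0118767" => { title: "Brother" }
}

ids = ["tt0119250", "tt0118767"] # hypothetical index.search(...) result
ids.map { |id| documents.fetch(id)[:title] } # => ["Hana-bi", "Brother"]
```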
+
+ ## Development
+
+ After checking out the repo, run `bin/setup` to install dependencies. Then run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that allows you to experiment.
+
+ We use [conventional commits](https://www.conventionalcommits.org) to automatically generate the CHANGELOG, bump the semantic version, and publish and release the gem. All you need to do is stick to the convention, and [CI will take care of everything else](https://github.com/baygeldin/tantiny/blob/main/.github/workflows/release.yml) for you.
+
+ ## Contributing
+
+ Bug reports and pull requests are welcome on GitHub at https://github.com/baygeldin/tantiny.
+
+ ## License
+
+ The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
data/bin/console ADDED
@@ -0,0 +1,59 @@
+ #!/usr/bin/env ruby
+ # frozen_string_literal: true
+
+ require "bundler/setup"
+ require "pry"
+ require "date"
+ require "ostruct"
+
+ require "tantiny"
+
+ path = File.join(__dir__, "../tmp")
+ en_stem = Tantiny::Tokenizer.new(:stemmer, language: :en)
+
+ index = Tantiny::Index.new path, tokenizer: en_stem do
+   id :imdb_id
+   facet :category
+   string :title
+   text :description
+   integer :duration
+   double :rating
+   date :release_date
+ end
+
+ rio_bravo = OpenStruct.new(
+   imdb_id: "tt0053221",
+   category: "/western/US",
+   title: "Rio Bravo",
+   description: "A small-town sheriff enlists a drunk, a kid and an old man to help him fight off a ruthless cattle baron.",
+   duration: 141,
+   rating: 8.0,
+   release_date: Date.parse("March 18, 1959")
+ )
+
+ hanabi = {
+   imdb_id: "tt0119250",
+   category: "/crime/Japan",
+   title: "Hana-bi",
+   description: "Nishi leaves the police in the face of harrowing personal and professional difficulties. Spiraling into depression, he makes questionable decisions.",
+   duration: 103,
+   rating: 7.7,
+   release_date: Date.parse("December 1, 1998")
+ }
+
+ brother = {
+   imdb_id: "tt0118767",
+   category: "/crime/Russia",
+   title: "Brother",
+   description: "An ex-soldier with a personal honor code enters the family crime business in St. Petersburg, Russia.",
+   duration: 99,
+   rating: 7.9,
+   release_date: Date.parse("December 12, 1997")
+ }
+
+ index << rio_bravo
+ index << hanabi
+ index << brother
+
+ index.commit
+ index.reload
+
+ binding.pry
data/bin/setup ADDED
@@ -0,0 +1,6 @@
+ #!/usr/bin/env bash
+ set -euo pipefail
+ IFS=$'\n\t'
+ set -vx
+
+ bundle install
data/ext/Rakefile ADDED
@@ -0,0 +1,5 @@
+ require "thermite/tasks"
+
+ project_dir = File.dirname(File.dirname(__FILE__))
+ Thermite::Tasks.new(cargo_project_path: project_dir, ruby_project_path: project_dir)
+ task default: %w[thermite:build]
@@ -0,0 +1,53 @@
+ # frozen_string_literal: true
+
+ module Tantiny
+   class Schema
+     attr_reader :default_tokenizer,
+       :id_field,
+       :text_fields,
+       :string_fields,
+       :integer_fields,
+       :double_fields,
+       :date_fields,
+       :facet_fields,
+       :field_tokenizers
+
+     def initialize(tokenizer, &block)
+       @default_tokenizer = tokenizer
+       @id_field = :id
+       @text_fields = []
+       @string_fields = []
+       @integer_fields = []
+       @double_fields = []
+       @date_fields = []
+       @facet_fields = []
+       @field_tokenizers = {}
+
+       instance_exec(&block)
+     end
+
+     def tokenizer_for(field)
+       field_tokenizers[field] || default_tokenizer
+     end
+
+     private
+
+     def id(key); @id_field = key; end
+
+     def string(key); @string_fields << key; end
+
+     def integer(key); @integer_fields << key; end
+
+     def double(key); @double_fields << key; end
+
+     def date(key); @date_fields << key; end
+
+     def facet(key); @facet_fields << key; end
+
+     def text(key, tokenizer: nil)
+       @field_tokenizers[key] = tokenizer if tokenizer
+
+       @text_fields << key
+     end
+   end
+ end
@@ -0,0 +1,29 @@
+ # frozen_string_literal: true
+
+ module Tantiny
+   class TantivyError < StandardError; end
+
+   class UnknownField < StandardError
+     def initialize
+       super("Can't find the specified field in the schema.")
+     end
+   end
+
+   class UnknownTokenizer < StandardError
+     def initialize(tokenizer_type)
+       super("Can't find \"#{tokenizer_type}\" tokenizer.")
+     end
+   end
+
+   class UnsupportedRange < StandardError
+     def initialize(range_type)
+       super("#{range_type} range is not supported by range_query.")
+     end
+   end
+
+   class UnsupportedField < StandardError
+     def initialize(field)
+       super("Can't search the \"#{field}\" field with this query.")
+     end
+   end
+ end
@@ -0,0 +1,9 @@
+ # frozen_string_literal: true
+
+ module Tantiny
+   module Helpers
+     def self.timestamp(date)
+       date.to_datetime.iso8601
+     end
+   end
+ end
@@ -0,0 +1,94 @@
+ # frozen_string_literal: true
+
+ module Tantiny
+   class Index
+     DEFAULT_INDEX_SIZE = 50_000_000
+     DEFAULT_LIMIT = 10
+
+     def self.new(path, **options, &block)
+       index_size = options[:size] || DEFAULT_INDEX_SIZE
+       default_tokenizer = options[:tokenizer] || Tokenizer.default
+
+       schema = Schema.new(default_tokenizer, &block)
+
+       object = __new(
+         path.to_s,
+         index_size,
+         schema.default_tokenizer,
+         schema.field_tokenizers.transform_keys(&:to_s),
+         schema.text_fields.map(&:to_s),
+         schema.string_fields.map(&:to_s),
+         schema.integer_fields.map(&:to_s),
+         schema.double_fields.map(&:to_s),
+         schema.date_fields.map(&:to_s),
+         schema.facet_fields.map(&:to_s)
+       )
+
+       object.send(:schema=, schema)
+
+       object
+     end
+
+     attr_reader :schema
+
+     def commit
+       __commit
+     end
+
+     def reload
+       __reload
+     end
+
+     def <<(document)
+       __add_document(
+         resolve(document, schema.id_field).to_s,
+         slice_document(document, schema.text_fields) { |v| v.to_s },
+         slice_document(document, schema.string_fields) { |v| v.to_s },
+         slice_document(document, schema.integer_fields) { |v| v.to_i },
+         slice_document(document, schema.double_fields) { |v| v.to_f },
+         slice_document(document, schema.date_fields) { |v| Helpers.timestamp(v) },
+         slice_document(document, schema.facet_fields) { |v| v.to_s }
+       )
+     end
+
+     def delete(id)
+       __delete_document(id.to_s)
+     end
+
+     def search(query, limit: DEFAULT_LIMIT, **smart_query_options)
+       unless query.is_a?(Query)
+         fields = schema.text_fields
+         query = Query.smart_query(self, fields, query.to_s, **smart_query_options)
+       end
+
+       __search(query, limit)
+     end
+
+     # Shortcuts for creating queries:
+     Query::TYPES.each do |query_type|
+       method_name = "#{query_type}_query"
+       define_method(method_name) do |*args, **kwargs|
+         # Ruby 2.6 fix (https://www.ruby-lang.org/en/news/2019/12/12/separation-of-positional-and-keyword-arguments-in-ruby-3-0/)
+         if kwargs.empty?
+           Query.send(method_name, self, *args)
+         else
+           Query.send(method_name, self, *args, **kwargs)
+         end
+       end
+     end
+
+     private
+
+     attr_writer :schema
+
+     def slice_document(document, fields, &block)
+       fields.inject({}) do |hash, field|
+         hash.tap { |h| h[field.to_s] = resolve(document, field) }
+       end.compact.transform_values(&block)
+     end
+
+     def resolve(document, field)
+       document.is_a?(Hash) ? document[field] : document.send(field)
+     end
+   end
+ end