search_syntax 0.1.2 → 0.1.3
- checksums.yaml +4 -4
- data/Gemfile.lock +1 -1
- data/docs/approximate-string-matching-algorithms.md +121 -0
- data/docs/approximate-string-matching.md +172 -0
- data/docs/language-design.md +16 -1
- data/lib/search_syntax/ransack_transformer.rb +19 -4
- data/lib/search_syntax/version.rb +1 -1
- metadata +4 -2
checksums.yaml CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 27ad7918f5ef1cfd2595f64bac7f7140ad11b189a1d360f01b007c11447c18b4
+  data.tar.gz: 0eec14f49542a2c57218433eb19c36955b0e1e04c431ed1d0b99b45207c28fa8
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: e8b1768a2b3de1ac000f4b90ab82e2322bcefb7d6abe1853f74dc4143ba55edf67efcda43b26444a264ac617c0cbc974c54961c78c61c98ae291678c1fcba837
+  data.tar.gz: ab73fc00bda9e55b8f0a9ebe11f75d7aa591447d519e4e064abfe480bb52ccb274087431160ea1783e1a5983ee8ef52deff262df6ee9d7d59992ffb2fd37c1ae
data/docs/approximate-string-matching-algorithms.md ADDED

@@ -0,0 +1,121 @@

# Approximate string matching algorithms

## Preprocessing

```mermaid
graph LR;
String --> Split --> 1[Sequence of letters]
Split --> 2[Set of n-grams]
Split --> 3[Set of n-grams with frequency]

String --> Tokenizer --> 4[Sequence of words]
Tokenizer --> 5[Set of words]
Tokenizer --> 6[Set of words with frequency]

1 --> Sequence
4 --> Sequence
2 --> Set
5 --> Set
3 --> swc[Set with frequency]
6 --> swc
```
- **Tokenizer** is language dependent, so the algorithm would need to know the language upfront or be able to detect it.
- **Sequence** is required for "edit distance" algorithms, because they need to know positions.
- In terms of implementation, a **set** can be implemented as a hash table, e.g. `{a: true, b: true}` for `aba`.
- Then a **set with frequency** can be implemented as a hash where the value is the frequency, e.g. `{a: 2, b: 1}` for `aba`.
- There are also **skip-grams**, which are not shown here.

There can be more steps in this process, for example:

- Converting strings to lower case
- Normalizing the alphabet, for example `ß` can be converted to `ss` or `è` can be converted to `e`
- Padding the string before splitting it into n-grams, which adds "weight" to the start or the end of the word
- The tokenizer can apply a lot of dictionary-based operations after splitting, for example:
  - Removing **stop-words**, i.e. words which are very common in the language and don't add much information, like `the`, `in`, etc.
  - Fixing common spelling errors
  - **Stemming** words, i.e. converting them into canonical form; for example `birds` will be converted to `bird`
  - Replacing words with more common synonyms, or expanding acronyms to their full version
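The set and set-with-frequency representations described above can be sketched in Ruby (hypothetical helpers, not part of this gem):

```ruby
# Split a string into character n-grams, then build the two
# representations described above: a plain set (hash with true values)
# and a set with frequency (hash with counts).
def ngrams(string, n)
  return [] if string.length < n
  (0..string.length - n).map { |i| string[i, n] }
end

def ngram_set(string, n)
  ngrams(string, n).to_h { |g| [g, true] }
end

def ngram_frequencies(string, n)
  ngrams(string, n).tally
end
```

For `aba` with unigrams this yields exactly the hashes shown in the bullets above.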
## Measures

Measures can be separated into the following categories:

1. Similarity/dissimilarity
   - similarity - a higher value means closer strings
   - dissimilarity - a lower value means closer strings
   - distance - a dissimilarity which has metric properties
2. Ranking/relevance
   - if a measure returns only `true` and `false`, it can be used as a relevance function, but not as a ranking function
   - similarity can be used as ranking if ordered descending
   - dissimilarity can be used as ranking if ordered ascending
3. By type of expected input
   - sequence
   - set
   - set with frequency
4. Normalized/not normalized
   - a measure is normalized if its values are in the range 0..1
   - a normalized similarity can be converted to a dissimilarity using the formula `dis(x, y) = 1 - sim(x, y)`
5. By type of assumed error
   - phonetic (if words sound similar). Good for words that sound the same but have different spellings, like `Claire`, `Clare`
   - orthographic (if words look similar). Good for detecting typos and errors
| Category     | Measure                                     | Input data          | Type                   | Metric        | Normalized                             |
| ------------ | ------------------------------------------- | ------------------- | ---------------------- | ------------- | -------------------------------------- |
| Phonetic     | Phonetic hashing (Soundex, Metaphone, etc.) | sequence of letters | similarity (relevance) |               | Yes                                    |
| Orthographic | Levenshtein distance                        | sequence            | dissimilarity          | Yes           | `l(x, y) / max(len(x), len(y))`, NED   |
|              | Damerau-Levenshtein distance                | sequence            | dissimilarity          |               |                                        |
|              | Hamming distance                            | sequence            | dissimilarity          | Yes           |                                        |
|              | Jaro distance                               | sequence            | dissimilarity          |               |                                        |
|              | Jaro–Winkler distance                       | sequence            | dissimilarity          |               |                                        |
|              | Longest common subsequence (LCS)            | sequence            | similarity             | ?             | `len(lcs(x, y)) / max(len(x), len(y))` |
|              | Jaccard index                               | set                 | similarity             | `1 - j(x, y)` | Yes                                    |
|              | Dice coefficient                            | set                 | similarity             | ?             | Yes                                    |
|              | Cosine similarity                           | set with frequency  | similarity             | ?             | Yes                                    |

**TODO**: this is not a full list of measures
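As an illustration of the normalized/not normalized distinction, here is a Ruby sketch (hypothetical helpers, not part of this gem) that normalizes Hamming distance and converts the result into a similarity with `sim = 1 - dis`:

```ruby
# Hamming distance: the number of positions at which two equal-length
# strings differ. A metric, but not normalized.
def hamming(x, y)
  raise ArgumentError, "strings must have equal length" if x.length != y.length
  x.chars.zip(y.chars).count { |a, b| a != b }
end

# Divide by the string length to land in the range 0..1.
def normalized_hamming(x, y)
  return 0.0 if x.empty?
  hamming(x, y).fdiv(x.length)
end

# Normalized dissimilarity converted to similarity.
def hamming_similarity(x, y)
  1.0 - normalized_hamming(x, y)
end
```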
## Algorithms

1. Some measures have more than one algorithm to calculate them
2. Algorithms differ in computational and space complexity

| Measure                          | Algorithm      | Computational complexity | Space complexity | Comment                                                                                                                                              |
| -------------------------------- | -------------- | ------------------------ | ---------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
| Levenshtein distance             | Wagner-Fischer | O(mn)                    | O(mn)            | Wagner, Robert A., and Michael J. Fischer. "The string-to-string correction problem." Journal of the ACM 21.1 (1974): 168-173                         |
|                                  | Myers          |                          |                  | Myers, Gene. "A fast bit-vector algorithm for approximate string matching based on dynamic programming." Journal of the ACM (JACM) 46.3 (1999): 395-415. |
| Longest common subsequence (LCS) | Larsen         | O(log(m)log(n))          | O(mn²)           | "Length of Maximal Common Subsequences", K.S. Larsen                                                                                                  |
|                                  | Hunt–Szymanski |                          |                  | https://imada.sdu.dk/~rolf/Edu/DM823/E16/HuntSzymanski.pdf                                                                                            |

**TODO**: this is not a full list of algorithms
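The Wagner-Fischer row above can be illustrated with a plain dynamic-programming sketch in Ruby (not code from this gem):

```ruby
# Wagner-Fischer: computes Levenshtein distance with a dynamic
# programming table of size (m+1) x (n+1), hence O(mn) time and space.
# Each edit operation (insertion, deletion, substitution) costs 1 here.
def levenshtein(x, y)
  m, n = x.length, y.length
  d = Array.new(m + 1) { Array.new(n + 1, 0) }
  (0..m).each { |i| d[i][0] = i } # delete all of x
  (0..n).each { |j| d[0][j] = j } # insert all of y
  (1..m).each do |i|
    (1..n).each do |j|
      cost = x[i - 1] == y[j - 1] ? 0 : 1
      d[i][j] = [
        d[i - 1][j] + 1,        # deletion
        d[i][j - 1] + 1,        # insertion
        d[i - 1][j - 1] + cost  # substitution (or match)
      ].min
    end
  end
  d[m][n]
end
```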
## Indexes

All algorithms above assume two input strings. So if we need to search through a database, we would have to go row by row, comparing each value in the DB to the query, choosing all relevant rows, and sorting them by rank.

This would be slow, so to overcome it we can preprocess the data and produce a data structure more suitable for the given algorithm, which speeds up retrieval at the cost of slightly slower inserts and updates. In the context of a database this data structure is called an **index**.

For example, we can implement the following indexes:

| Algorithm         | Index                                     | Example                              |
| ----------------- | ----------------------------------------- | ------------------------------------ |
| Phonetic hashing  | B-tree with hashed values                 |                                      |
| set of n-grams    | inverted index with trigrams              | PostgreSQL trigram index             |
| sequence of words | inverted index with words (and positions) | PostgreSQL and MySQL full-text index |

- These indexes won't help much with edit distance algorithms, because edit distance is still quite expensive to compute
- There is the so-called BK-tree index, which works in this case, but it is not well suited to databases; it is more suitable for a fixed dictionary, such as spelling correction
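The inverted-index-with-trigrams idea can be sketched in Ruby (a hypothetical toy class, not part of this gem; PostgreSQL's trigram index is far more sophisticated):

```ruby
# A minimal inverted trigram index: map each trigram to the ids of
# rows containing it, then look up candidate rows by the query's
# trigrams instead of scanning every row.
class TrigramIndex
  def initialize
    @postings = Hash.new { |h, k| h[k] = [] }
  end

  def trigrams(string)
    return [] if string.length < 3
    (0..string.length - 3).map { |i| string[i, 3] }.uniq
  end

  def insert(id, value)
    trigrams(value).each { |t| @postings[t] << id }
  end

  # Return candidate row ids sharing at least one trigram with the
  # query, ordered by how many trigrams they share (a crude ranking).
  def search(query)
    counts = Hash.new(0)
    trigrams(query).each { |t| @postings[t].each { |id| counts[id] += 1 } }
    counts.sort_by { |_, c| -c }.map(&:first)
  end
end
```

Candidates returned this way can then be re-ranked with a more expensive measure.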
## Ranking

Theoretically it is possible to use measures for ranking, but the problem is that those functions only take two strings into account. This doesn't work well for big texts in the DB and small queries, because all measures become indistinguishable (either very small or very big).

For this case there are better ranking functions, such as:

- TF-IDF
- BM25
- DFR similarity
- IB similarity
- LM Dirichlet similarity
- LM Jelinek Mercer similarity
- etc.
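The first of these, TF-IDF, can be sketched in Ruby, assuming documents are already tokenized into arrays of words (a hypothetical helper, not part of this gem):

```ruby
# TF-IDF for one term in one document: term frequency within the
# document, weighted by log(N / df), where df is the number of
# documents in the corpus containing the term.
def tf_idf(term, doc, docs)
  tf = doc.count(term)
  df = docs.count { |d| d.include?(term) }
  return 0.0 if df.zero?
  tf * Math.log(docs.length.fdiv(df))
end
```

A term that appears in every document gets weight 0, which is exactly why TF-IDF distinguishes big texts better than a plain pairwise similarity.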
data/docs/approximate-string-matching.md ADDED

@@ -0,0 +1,172 @@

# Approximate string matching
a.k.a. string proximity search, error-tolerant search, string similarity search, fuzzy string searching, error tolerant pattern matching.

## Definitions

### Edit distance

**Edit distance** of two strings `dist(x, y)` is the minimal number of edit operations needed to transform the first string (`x`) into the second (`y`).

Different edit distances can have different **edit operations**: deletion, insertion, substitution, transposition.

|                              | deletion | insertion | substitution | transposition |
| ---------------------------- | -------- | --------- | ------------ | ------------- |
| Levenshtein distance         | +        | +         | +            |               |
| Damerau-Levenshtein distance | +        | +         | +            | +             |
| Hamming distance             |          |           | +            |               |
| Longest common subsequence   | +        | +         |              |               |

A distance is a metric. A metric is a function with the following properties:

1. `dist(x, x) = 0` identity
2. `dist(x, y) >= 0` non-negativity
3. `dist(x, y) = dist(y, x)` symmetry
4. `dist(x, y) <= dist(x, z) + dist(z, y)` triangle inequality

Examples of metrics:

- Euclidean distance
- Manhattan distance
- Hamming distance

Different edit operations can have different costs. But if we want to preserve the metric properties, deletion and insertion must have the same cost.

```
I N T E - N T I O N
- E X E C U T I O N
d s s   i s
```

- If each operation has a cost of 1, the distance between these strings is 5.
- If substitutions cost 2 (Levenshtein), the distance between these strings is 8.
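The two costs above can be checked with a small dynamic-programming sketch where the substitution cost is a parameter (a hypothetical helper, not code from this gem):

```ruby
# Edit distance with a configurable substitution cost; insertion and
# deletion always cost 1. With sub_cost: 1 the INTENTION/EXECUTION
# example gives 5; with sub_cost: 2 (a substitution counted as a
# deletion plus an insertion) it gives 8.
def weighted_edit_distance(x, y, sub_cost: 1)
  m, n = x.length, y.length
  d = Array.new(m + 1) { Array.new(n + 1, 0) }
  (0..m).each { |i| d[i][0] = i }
  (0..n).each { |j| d[0][j] = j }
  (1..m).each do |i|
    (1..n).each do |j|
      cost = x[i - 1] == y[j - 1] ? 0 : sub_cost
      d[i][j] = [d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost].min
    end
  end
  d[m][n]
end
```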
In order to determine the minimal number of edit operations we need an optimal sequence alignment:

```
H A N D    H A N D - - - -    H A N D -
A N D I    - - - - A N D I    - A N D I
```

Some distances have more than one algorithm to compute them, for example [Levenshtein distance](https://ceptord.net/20200815-Comparison.html):

- **Wagner-Fischer**. Wagner, Robert A., and Michael J. Fischer. "The string-to-string correction problem." Journal of the ACM 21.1 (1974): 168-173.
- **Myers**. Myers, Gene. "A fast bit-vector algorithm for approximate string matching based on dynamic programming." Journal of the ACM (JACM) 46.3 (1999): 395-415.

Distance-like measures which do not hold metric properties are called **dissimilarities**.

Distance-like measures require sequential data, i.e. they interpret a string as an array of tokens (letters or words).

### String similarity

**Similarity** is a measure of how similar two strings are. For a distance, the lower the value, the closer the two strings; for a similarity, the higher the value, the closer the two strings.

Similarity is not a metric. Edit distance can be converted to similarity, for example:

```
sim(x, y) = 1 - dist(x, y) / max(len(x), len(y))
```

On the other hand, similarity doesn't have to rely on edit distance. For example, the simplest (but probably not the most effective) measure is the number of shared symbols in the strings (aka unigrams). Trigrams, though, give quite a good estimate.

```
dice(x, y) = 2 * len(common(x, y)) / (len(x) + len(y))
```
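The dice formula above, applied to character n-grams, can be sketched in Ruby (a hypothetical helper counting shared n-grams as a multiset, not part of this gem):

```ruby
# Dice similarity over character n-grams: with n = 1 it compares
# unigrams (shared symbols), with n = 3 it compares trigrams.
def dice_ngram_similarity(x, y, n: 3)
  a = (0..x.length - n).map { |i| x[i, n] }
  b = (0..y.length - n).map { |i| y[i, n] }
  return 0.0 if a.empty? || b.empty?
  common = 0
  remaining = b.tally
  a.each do |g|
    if remaining[g].to_i > 0
      common += 1
      remaining[g] -= 1
    end
  end
  2.0 * common / (a.length + b.length)
end
```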
Normalized similarity is in the range `[0, 1]`, where 0 means the strings are completely different and 1 means the strings are equal.

Similarity often doesn't need alignment, so it is much cheaper to compute. It can be used to pre-filter a set of strings before applying more expensive edit distance algorithms.

### String is ...

A string can be treated as a sequence of letters, or as a sequence of words, or can be converted to n-grams (unigrams, bigrams, trigrams, etc.)

Classically, edit distance is applied to letters: `abc` vs `acb` is one transposition. But it can also be applied to words: `abc def` vs `def abc` is one transposition.

If we want to work with words we need a tokenizer, which is language dependent. The most primitive tokenizer for western languages splits the string on all non-alphanumeric characters: `Hello, Joe!` turns into `Hello`, `Joe`. But this approach is quite limited and doesn't work for words like `O'neil`, `aren't`, `T.V.`, `B-52`, compound proper nouns, etc. It also doesn't work for CJK (Chinese, Japanese, Korean).

### n-grams

[n-grams](https://en.wikipedia.org/wiki/N-gram) are the set of all substrings of length `n` contained in a given string. For example, the bigrams of `abc` are `ab` and `bc`.

n-grams can be padded: we can add special symbol(s) before and after the string to increase the number of n-grams. For example, padded bigrams for `abc` are `#a`, `ab`, `bc`, `c#`.
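The padding step can be sketched as follows (a hypothetical helper with `#` as the padding symbol, not part of this gem):

```ruby
# Padded n-grams: surround the string with n-1 padding symbols on each
# side before splitting, so the first and last characters appear in
# more n-grams and get extra weight.
def padded_ngrams(string, n, pad: "#")
  padded = pad * (n - 1) + string + pad * (n - 1)
  (0..padded.length - n).map { |i| padded[i, n] }
end
```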
Sometimes padded n-grams give a much better similarity measure. For example, it has been shown empirically that `trigram-2b` gives much better results for matching drug names with errors. See [Similarity as a risk factor in drug-name confusion errors, B Lambert et al., 1999](https://www.researchgate.net/profile/Sanjay-Gandhi-3/publication/12701019_Similarity_as_a_risk_factor_in_drug-name_confusion_errors_the_look-alike_orthographic_and_sound-alike_phonetic_model/links/0deec51e6f14b979c1000000/Similarity-as-a-risk-factor-in-drug-name-confusion-errors-the-look-alike-orthographic-and-sound-alike-phonetic-model.pdf).

n-grams are language independent (unlike tokenization). n-grams can be used to create indexes in databases; for example, PostgreSQL has a trigram index.

### Overlap coefficient

An overlap coefficient is a type of similarity which works on sets. In the context of strings it can be applied to n-grams or to sets of tokens (words).

| name                            | formula                                             | comment                                                                                           |
| ------------------------------- | --------------------------------------------------- | ------------------------------------------------------------------------------------------------- |
| Jaccard index                   | \|x ∩ y\| / \|x ∪ y\|                               | Grove Karl Gilbert in 1884, Paul Jaccard, Tanimoto                                                |
| Dice coefficient                | 2\|x ∩ y\| / (\|x\| + \|y\|)                        | Lee Raymond Dice in 1945, Thorvald Sørensen in 1948. The same as F1 (?)                           |
| Tversky index                   | \|x ∩ y\| / (\|x ∩ y\| + a\|x \\ y\| + b\|y \\ x\|) | Amos Tversky in 1977. If a=b=1 the same as Jaccard index. If a=b=0.5 the same as Dice coefficient |
| Szymkiewicz–Simpson coefficient | \|x ∩ y\| / min(\|x\|, \|y\|)                       | Sometimes called overlap coefficient                                                              |
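The Jaccard, Dice, and Szymkiewicz–Simpson formulas from the table can be sketched for Ruby `Set`s (hypothetical helpers, not part of this gem):

```ruby
require "set"

# Set-overlap measures from the table above. Jaccard and Dice are
# special cases of the Tversky index (a = b = 1 and a = b = 0.5).
def jaccard(x, y)
  return 1.0 if x.empty? && y.empty?
  (x & y).size.fdiv((x | y).size)
end

def dice_coefficient(x, y)
  return 1.0 if x.empty? && y.empty?
  2.0 * (x & y).size / (x.size + y.size)
end

def overlap_coefficient(x, y)
  (x & y).size.fdiv([x.size, y.size].min)
end
```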
### Relevance and ranking

> Relevance is the degree to which something is related or useful to what is happening or being talked about.
>
> -- https://dictionary.cambridge.org/dictionary/english/relevance

> Relevance is the art of ranking content for a search based on how much that content satisfies the needs of the user and the business. The devil is completely in the details.
>
> -- https://livebook.manning.com/book/relevant-search/chapter-1/8

Sometimes relevance and ranking are considered to be separate steps.

Relevance is a binary function which returns `true` or `false`, i.e. whether a document (a row in a table) is relevant to the search or not.

Ranking is a function which assigns a score to each relevant result in order to bring the more relevant results to the top of the list.

Relevance and ranking are connected. Sometimes they can be calculated in one step: for example, calculate similarity, use the similarity as the ranking function, and define relevance as similarity above some threshold. Sometimes they are two different steps and two different algorithms.
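The one-step variant can be sketched in Ruby (both helpers are hypothetical and not part of this gem; any normalized similarity could replace the toy one used here):

```ruby
# A toy similarity for illustration only: the fraction of distinct
# shared characters relative to the longer string.
def similarity(x, y)
  (x.chars & y.chars).size.fdiv([x.length, y.length].max)
end

# One-step relevance + ranking: the similarity score is the ranking,
# and a threshold on the same score is the relevance.
def search(query, rows, threshold: 0.5)
  rows
    .map { |row| [row, similarity(query, row)] }
    .select { |_, score| score >= threshold }   # relevance
    .sort_by { |_, score| -score }              # ranking
    .map(&:first)
end
```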
### Phonetic indexing

> Index - a list (as of bibliographical information or citations to a body of literature) arranged usually in alphabetical order of some specified datum (such as author, subject, or keyword)
>
> -- https://www.merriam-webster.com/dictionary/index

A phonetic algorithm produces some kind of hash. If the hashes of two different words are the same, we can assume that those words sound the same.

Phonetic algorithms are language dependent and typically used to compare people's names, which can have different spellings, for example `Claire` and `Clare`.

A phonetic algorithm returns a hash, and two hashes can be compared - this gives us a relevance function. For a ranking function we can use, for example, edit distance or similarity.
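As a sketch of phonetic hashing, here is a simplified Soundex in Ruby (it ignores the official H/W separator rule, so a few names encode differently from the standard variant; not code from this gem):

```ruby
# Letter-to-digit codes for American Soundex. Vowels, h, w, and y have
# no code and act as separators.
SOUNDEX_CODES = {
  "b" => "1", "f" => "1", "p" => "1", "v" => "1",
  "c" => "2", "g" => "2", "j" => "2", "k" => "2",
  "q" => "2", "s" => "2", "x" => "2", "z" => "2",
  "d" => "3", "t" => "3", "l" => "4",
  "m" => "5", "n" => "5", "r" => "6"
}.freeze

def soundex(word)
  chars = word.downcase.gsub(/[^a-z]/, "").chars
  return "" if chars.empty?
  digits = chars.map { |c| SOUNDEX_CODES[c] }
  # collapse runs of the same code, drop uncoded letters
  codes = digits.chunk_while { |a, b| a == b }.map(&:first).compact
  # the first letter is kept as a letter, not encoded as a digit
  codes.shift if codes.first == SOUNDEX_CODES[chars.first]
  (chars.first.upcase + codes.take(3).join).ljust(4, "0")
end
```

`Robert` and `Rupert` hash to the same code, which is exactly the relevance signal described above.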
| name                    | language                                                            | comment                                                                              |
| ----------------------- | ------------------------------------------------------------------- | ------------------------------------------------------------------------------------ |
| Soundex                 | En                                                                  | Robert C. Russell and Margaret King Odell around 1918                                |
| Cologne phonetics       | De (optimized for the German language)                              | Hans Joachim Postel in 1969                                                          |
| NYSIIS                  | En                                                                  | Sometimes called Reverse Soundex. 1970                                               |
| Match rating approach   | En                                                                  | Western Airlines in 1977                                                             |
| Daitch–Mokotoff Soundex | En                                                                  | Adds support for Germanic and Slavic surnames. Gary Mokotoff and Randy Daitch in 1985 |
| Metaphone               | En                                                                  | Lawrence Philips in 1990                                                             |
| Double metaphone        | En (of Slavic, Germanic, Celtic, Greek, Chinese, and other origins) | Lawrence Philips in 2000                                                             |
| Metaphone 3             | En                                                                  | Lawrence Philips in 2009                                                             |
| Caverphone              | En (optimized for accents present in parts of New Zealand)          | David Hood in 2002                                                                   |
| Beider–Morse            | En                                                                  | Improvement over Daitch–Mokotoff Soundex. 2008                                       |

Other variations of Soundex:

- ONCA - The Oxford Name Compression Algorithm
- Phonex
- SoundD

Algorithms for other languages:

- [French Phonetic Algorithms](https://yomguithereal.github.io/talisman/phonetics/french)
- [German Phonetic Algorithms](https://yomguithereal.github.io/talisman/phonetics/german)
## Types of search

| Type of search | Is precise? | Name             | Intention                        | Example of data                                             |
| -------------- | ----------- | ---------------- | -------------------------------- | ----------------------------------------------------------- |
| text           | precise     | Substring search | starts/ends with..., contains... | logs, match by part of word, etc.                           |
|                |             | Regexp           | contains pattern                 | logs, match by part of word, etc.                           |
|                | approximate | Phonetic         | sounds like                      | names, emails, words with alternative spelling, etc.        |
|                |             | Orthographic     | looks like                       | drug names, biological species, typos in proper nouns, etc. |
|                |             | Full-text        | relevant to                      | texts in natural language                                   |
| parametric     | precise     | Filter           | filter rows by parameters        | structured data, like RDBMS tables                          |
data/docs/language-design.md CHANGED

@@ -65,11 +65,22 @@ which may be counterintuitve without syntax checker.
 - [MySQL Full-Text Search](https://dev.mysql.com/doc/refman/8.0/en/fulltext-boolean.html)
 - [PostgreSQL Full-Text Search](https://www.postgresql.org/docs/current/textsearch-controls.html#TEXTSEARCH-PARSING-QUERIES)
 - [Meilisearch](https://docs.meilisearch.com/learn/advanced/filtering_and_faceted_search.html#using-filters)
-- [Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html)
 - [Solr](https://solr.apache.org/guide/6_6/the-standard-query-parser.html)
 - [Lucene](https://lucene.apache.org/core/2_9_4/queryparsersyntax.html) ([Lucene vs Solr](https://www.lucenetutorial.com/lucene-vs-solr.html))
 - [Sphinx](https://sphinxsearch.com/docs/current/extended-syntax.html)
 
+### Other search engines
+
+- [Manticore Search](https://github.com/manticoresoftware/manticoresearch) is an open-source database that was created in 2017 as a continuation of the Sphinx Search engine.
+- [RediSearch](https://github.com/RediSearch/RediSearch)
+- [Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html) and its alternatives:
+  - [sonic](https://github.com/valeriansaliou/sonic)
+  - [typesense](https://github.com/typesense/typesense)
+  - [zinc](https://github.com/zinclabs/zinc)
+  - [Toshi](https://github.com/toshi-search/Toshi)
+  - [phalanx](https://github.com/mosuka/phalanx)
+  - [pisa](https://github.com/pisa-engine/pisa)
+
 ## Text + parametric search
 
 - Two separate fields - one for text and one for parametric search

@@ -204,3 +215,7 @@ Option 2:
 - `or a`
 - containing `a`
 - containing `or` and `a`
+
+## Similar languages
+
+- [REST Query Language](https://github.com/jirutka/rsql-parser)
data/lib/search_syntax/ransack_transformer.rb CHANGED

@@ -18,9 +18,13 @@ module SearchSyntax
     def initialize(text:, params:, sort: nil)
       @text = text
       if params.is_a?(Array)
-        params = params.to_h { |i| [i.to_s, i] }
+        params = params.to_h { |i| [i.to_s.delete(":"), i] }
       elsif params.is_a?(Hash)
-        params = params.map
+        params = params.map do |k, v|
+          k = k.to_s
+          skip_predicate = k.include?(":")
+          [k.delete(":"), v + (skip_predicate ? ":" : "")]
+        end.to_h
       end
       @params = params
       @allowed_params = params.keys

@@ -50,8 +54,7 @@ module SearchSyntax
         result[:s] = transform_sort_param(node[:value])
         false
       elsif @allowed_params.include?(node[:name])
-
-        key = "#{@params[node[:name]]}_#{predicate}".to_sym
+        key = name_with_predicate(node)
         if !result.key?(key)
           result[key] = node[:value]
         else

@@ -83,5 +86,17 @@ module SearchSyntax
 
       [result, errors]
     end
+
+    private
+
+    def name_with_predicate(node)
+      name = @params[node[:name]]
+      if name.include?(":")
+        name.delete(":")
+      else
+        predicate = PREDICATES[node[:predicate]] || :eq
+        "#{name}_#{predicate}"
+      end.to_sym
+    end
   end
 end
metadata CHANGED

@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: search_syntax
 version: !ruby/object:Gem::Version
-  version: 0.1.
+  version: 0.1.3
 platform: ruby
 authors:
 - stereobooster
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2022-10-
+date: 2022-10-24 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: treetop

@@ -39,6 +39,8 @@ files:
 - LICENSE.txt
 - README.md
 - Rakefile
+- docs/approximate-string-matching-algorithms.md
+- docs/approximate-string-matching.md
 - docs/language-design.md
 - docs/terminology.md
 - lib/search_syntax.rb
|