elasticsearch_record 1.5.2 → 1.6.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: 9a53ead4edbffa7bb74c8021008343f80813512f89b6580717d7d4335ce238ba
4
- data.tar.gz: 26603c0b2de3b938c4239401652d4986088be1d29b1c039c2feb0c48b83a5207
3
+ metadata.gz: b29c6db7894f8365eb5a5922633a4adf53a793006c3ffa2fb381dd9ae171b124
4
+ data.tar.gz: 72c7f4260b76be5743e061838df5051006c82a883172d29f2e0442b6215ac4ed
5
5
  SHA512:
6
- metadata.gz: 8d60b93db8b944ac6b0b42c91e411d6ae9673f773feeab7fbc9097e519663401a349936a271f6949d65f46485198cce44c349c38d3d2f668e2f3ee6148417264
7
- data.tar.gz: 9e3a2700baa40c8bc4b6a72c6a94895f3beb030294955ca6d1a168349c81b289d80bf2d93cfa5411f0a8c2a74dd3b183ece5ee64ef3deef7ff2f0f440aef32dc
6
+ metadata.gz: 7e09f30a077c81524c800a5694947d947da7a0a9273a4304592221d2c191c8a8582fbd94e65990ddf21cdbb23d74d5e7d4e47e53e14139c362fda91468aae182
7
+ data.tar.gz: bb03f9c4fa2749ccd23be70de65a1480b2d0a7cfa89ef51f9a3c3c2cd7e0810708e9e1e8050d7de033bfc670d5a127c75349f1f1380578246107395746e41652
data/README.md CHANGED
@@ -53,6 +53,22 @@ Or install it yourself as:
53
53
  * logs Elasticsearch API-calls
54
54
  * shows Runtime in logs
55
55
 
56
+ ## Notice
57
+ Since ActiveRecord does not provide any configuration option to disable transaction support and
58
+ Elasticsearch does **NOT** support transactions, it may be risky to ignore them.
59
+
60
+ By default, transactions are 'silently swallowed' so as not to break any existing applications...
61
+
62
+ To raise an exception while using transactions on an ElasticsearchRecord model, the following flag can be enabled.
63
+ However, enabling this flag will break transactional tests _(prevent this with 'use_transactional_tests = false')_.
64
+
65
+ ```ruby
66
+ # config/initializers/elasticsearch_record.rb
67
+
68
+ # enable transactional exceptions
69
+ ElasticsearchRecord.error_on_transaction = true
70
+ ```
71
+
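In a test suite this usually pairs with disabling transactional tests. A minimal sketch (test class, file path and model are placeholders, reusing the `SearchUser` model from the examples below):

```ruby
# test/models/search_user_test.rb (hypothetical)
require 'test_helper'

class SearchUserTest < ActiveSupport::TestCase
  # Elasticsearch cannot roll anything back, so skip the transactional wrapping
  self.use_transactional_tests = false

  test 'creates a user document' do
    assert SearchUser.create(name: 'Hans', age: 34)
  end
end
```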
56
72
  ## Setup
57
73
 
58
74
  ### a) Update your **database.yml** and add an Elasticsearch connection:
@@ -224,6 +240,7 @@ total = scope.total
224
240
  - configure
225
241
  - aggregate
226
242
  - refresh
243
+ - timeout
227
244
  - query
228
245
  - filter
229
246
  - must_not
@@ -260,6 +277,7 @@ _see simple documentation about these methods @ [rubydoc](https://rubydoc.info/g
260
277
  - composite
261
278
  - point_in_time
262
279
  - pit_results
280
+ - pit_delete
263
281
 
264
282
  _see simple documentation about these methods @ [rubydoc](https://rubydoc.info/gems/elasticsearch_record/ElasticsearchRecord/Relation/ResultMethods)_
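A rough usage sketch for the two point-in-time helpers (`SearchUser` is the example model used further below; all parameters shown are optional):

```ruby
# collect only the document ids, batched through a point_in_time scope
ids = SearchUser.where(age: 22).pit_results(ids_only: true)

# batch-delete every matching doc, even beyond the max_result_window limit
deleted_count = SearchUser.where(age: 22).pit_delete(batch_size: 500)
```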
265
283
 
@@ -366,13 +384,26 @@ SearchUser.api.mappings
366
384
  SearchUser.api.insert([{name: 'Hans', age: 34}, {name: 'Peter', age: 22}])
367
385
  ```
368
386
 
387
+ ### dangerous methods
369
388
  * open!
370
389
  * close!
371
390
  * refresh!
372
391
  * block!
373
392
  * unblock!
393
+
394
+ ### dangerous methods with args
395
+ * create!(...)
396
+ * clone!(...)
397
+ * rename!(...)
398
+ * backup!(...)
399
+ * restore!(...)
400
+ * reindex!(...)
401
+
402
+ ### dangerous methods with confirm parameter
374
403
  * drop!(confirm: true)
375
404
  * truncate!(confirm: true)
405
+
406
+ ### table methods
376
407
  * mappings
377
408
  * metas
378
409
  * settings
@@ -380,17 +411,19 @@ SearchUser.api.insert([{name: 'Hans', age: 34}, {name: 'Peter', age: 22}])
380
411
  * state
381
412
  * schema
382
413
  * exists?
383
- * alias_exists?
384
- * setting_exists?
385
- * mapping_exists?
386
- * meta_exists?
387
-
388
- Fast insert, update, delete raw data
389
- * index
390
- * insert
391
- * update
392
- * delete
393
- * bulk
414
+
415
+ ### plain methods
416
+ * alias_exists?(...)
417
+ * setting_exists?(...)
418
+ * mapping_exists?(...)
419
+ * meta_exists?(...)
420
+
421
+ ### Fast insert, update, delete raw data
422
+ * index(...)
423
+ * insert(...)
424
+ * update(...)
425
+ * delete(...)
426
+ * bulk(...)
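A hedged sketch of the argument-taking calls above (index names are invented):

```ruby
# snapshot the index, restore from that snapshot later, then copy docs into a new index
backup_name = SearchUser.api.backup!(to: 'search_users-backup')
SearchUser.api.restore!(from: backup_name)
SearchUser.api.reindex!('search_users-v2')
```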
394
427
 
395
428
  -----
396
429
 
@@ -436,6 +469,9 @@ Access these methods through the model's connection or within any `Migration`.
436
469
  - create_table
437
470
  - change_table
438
471
  - rename_table
472
+ - reindex_table
473
+ - backup_table
474
+ - restore_table
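A minimal migration sketch using the new statements (class name, migration version and index names are assumptions):

```ruby
class BackupAndReindexSearchUsers < ActiveRecord::Migration[7.0]
  def up
    # snapshot the current index (the backup gets closed afterwards by default)
    backup_table 'search_users', to: 'search_users-backup-v1'

    # copy all documents into another index
    reindex_table 'search_users', 'search_users-v2'
  end

  def down
    restore_table 'search_users', from: 'search_users-backup-v1', drop_backup: true
  end
end
```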
439
475
 
440
476
  ### table actions:
441
477
  - change_meta
data/docs/CHANGELOG.md CHANGED
@@ -1,5 +1,31 @@
1
1
  # ElasticsearchRecord - CHANGELOG
2
2
 
3
+ ## [1.6.0] - 2023-08-11
4
+ * [add] `ElasticsearchRecord::Base#undelegate_id_attribute_with` method to support a temporary 'undelegation' (used to create a new record)
5
+ * [add] `ElasticsearchRecord::Relation#timeout` to directly provide the timeout-parameter to the query
6
+ * [add] `ElasticsearchRecord.error_on_transaction`-flag to raise errors on transaction usage (default: `false` - by default all transactions are silently **IGNORED**)
7
+ * [add] `ElasticsearchRecord::ModelApi` create!, clone!, rename!, backup!, restore! & reindex!-methods
8
+ * [add] `ElasticsearchRecord::Relation#pit_delete` which executes a delete query in a 'point_in_time' scope.
9
+ * [add] `ActiveRecord::ConnectionAdapters::Elasticsearch::TableStatements#backup_table` to create a backup (snapshot) of the entire table (index)
10
+ * [add] `ActiveRecord::ConnectionAdapters::Elasticsearch::TableStatements#restore_table` to restore an entire table (index)
11
+ * [add] `ActiveRecord::ConnectionAdapters::Elasticsearch::TableStatements#reindex_table` to copy documents from source to destination
12
+ * [ref] `ElasticsearchRecord::Base.delegate_id_attribute` now supports instance writer
13
+ * [ref] `ElasticsearchRecord::Relation#pit_results` adds an `ids_only`-parameter to support returning just the record ids...
14
+ * [fix] Relation `#last`-method will raise a transport exception if the cluster setting '**indices.id_field_data.enabled**' is disabled (now checks for `access_id_fielddata?`)
15
+ * [fix] ElasticsearchRecord-connection settings do not support the `username` key
16
+ * [fix] ElasticsearchRecord-connection settings do not support the `port` key
17
+ * [fix] `_id`-attribute is erroneously defined as a 'virtual' attribute - but is required for insert statements.
18
+ * [fix] unsupported **SAVEPOINT** transactions throw exceptions _(especially in tests)_
19
+ * [fix] `ElasticsearchRecord::ModelApi#bulk` does not recognize `'_id' / :_id` attribute
20
+ * [fix] `ElasticsearchRecord::ModelApi#bulk` does not correctly build the data-hash for `update`-operation _(missing 'doc'-node)_
21
+ * [ref] simplify `ElasticsearchRecord::Base#searchable_column_names`
22
+ * [fix] creating a new record does not recognize a manually provided `_id`-attribute
23
+ * [fix] creating a new record with active `delegate_id_attribute`-flag does not update the record's `_id`.
24
+
25
+ ## [1.5.3] - 2023-07-14
26
+ * [fix] `ElasticsearchRecord::Relation#where!` on nested, provided `:none` key
27
+ * [ref] minor code tweaks and comment updates
28
+
3
29
  ## [1.5.2] - 2023-07-12
4
30
  * [fix] `ElasticsearchRecord::Relation#limit` setter method `limit_value=` to work with **delegate_query_nil_limit?**
5
31
 
@@ -9,10 +35,10 @@
9
35
 
10
36
  ## [1.5.0] - 2023-07-10
11
37
  * [add] additional `ElasticsearchRecord::ModelApi` methods **drop!** & **truncate!**, which have to be called with a `confirm:true` parameter
12
- * [add] `.ElasticsearchRecord::Base.delegate_query_nil_limit` to automatically delegate a relations `limit(nil)`-call to the **max_result_window** _(set to 10.000 as default)_
38
+ * [add] `ElasticsearchRecord::Base.delegate_query_nil_limit` to automatically delegate a relations `limit(nil)`-call to the **max_result_window** _(set to 10.000 as default)_
13
39
  * [add] `ActiveRecord::ConnectionAdapters::Elasticsearch::SchemaStatements#access_shard_doc?` which checks, if the **PIT**-shard_doc order is available
14
40
  * [add] support for **_shard_doc** as a default order for `ElasticsearchRecord::Relation#pit_results`
15
- * [ref] `.ElasticsearchRecord::Base.relay_id_attribute` to a more coherent name: `delegate_id_attribute`
41
+ * [ref] `ElasticsearchRecord::Base.relay_id_attribute` to a more coherent name: `delegate_id_attribute`
16
42
  * [ref] `ElasticsearchRecord::Relation#ordered_relation` to optimize already ordered relations
17
43
  * [ref] gemspecs to support different versions of Elasticsearch
18
44
  * [ref] improved README
@@ -39,7 +39,7 @@ This Code of Conduct applies within all community spaces, and also applies when
39
39
 
40
40
  ## Enforcement
41
41
 
42
- Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at tg@reimbursement.institute. All complaints will be reviewed and investigated promptly and fairly.
42
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at info@ruby-smart.org. All complaints will be reviewed and investigated promptly and fairly.
43
43
 
44
44
  All community leaders are obligated to respect the privacy and security of the reporter of any incident.
45
45
 
@@ -116,7 +116,7 @@ module ActiveRecord
116
116
  # Defaults to false.
117
117
  # @param [String] table_name
118
118
  # @param [Boolean] if_exists
119
- # @return [Array] acknowledged status
119
+ # @return [Boolean] acknowledged status
120
120
  def drop_table(table_name, if_exists: false, **)
121
121
  schema_cache.clear_data_source_cache!(table_name)
122
122
  api(:indices, :delete, { index: table_name, ignore: (if_exists ? 404 : nil) }, 'DROP TABLE').dig('acknowledged')
@@ -148,11 +148,12 @@ module ActiveRecord
148
148
  end
149
149
  end
150
150
 
151
- # clones an entire table (index) to the provided +target_name+.
151
+ # clones an entire table (index) with its docs to the provided +target_name+.
152
152
  # During cloning, the table will be automatically 'write'-blocked.
153
153
  # @param [String] table_name
154
154
  # @param [String] target_name
155
155
  # @param [Hash] options
156
+ # @return [Boolean] acknowledged status
156
157
  def clone_table(table_name, target_name, **options)
157
158
  # create new definition
158
159
  definition = clone_table_definition(table_name, target_name, **extract_table_options!(options))
@@ -168,6 +169,54 @@ module ActiveRecord
168
169
  definition.exec!
169
170
  end
170
171
 
172
+ # creates a backup (snapshot) of the entire table (index) from provided +table_name+.
173
+ # The backup will be closed, to prevent read/write access.
174
+ # The +target_name+ will be auto-generated, if not provided.
175
+ #
176
+ # @example
177
+ # backup_table('screenshots', to: 'screenshots-backup-v1')
178
+ #
179
+ # @param [String] table_name
180
+ # @param [String] to - target_name
181
+ # @param [Boolean] close - closes backup after creation (default: true)
182
+ # @return [String] backup_name
183
+ def backup_table(table_name, to: nil, close: true)
184
+ to ||= "#{table_name}-snapshot-#{Time.now.strftime('%s%3N')}"
185
+ raise ArgumentError, "unable to backup '#{table_name}' to already existing target '#{to}'!" if table_exists?(to)
186
+
187
+ clone_table(table_name, to)
188
+ close_table(to) if close
189
+
190
+ to
191
+ end
192
+
193
+ # restores an entire table (index) from the provided +from+ backup.
194
+ # The +table_name+ will be dropped, if exists.
195
+ # The +from+ backup will persist unless +drop_backup: true+ is provided.
196
+ #
197
+ # @example
198
+ # restore_table('screenshots', from: 'screenshots-backup-v1')
199
+ #
200
+ # @param [String] table_name
201
+ # @param [String] from
202
+ # @param [String (frozen)] timeout - renaming timeout (default: '30s')
203
+ # @param [Boolean] open - opens restored backup after creation (default: true)
204
+ # @return [Boolean] acknowledged status
205
+ def restore_table(table_name, from:, timeout: nil, open: true, drop_backup: false)
206
+ raise ArgumentError, "unable to restore from missing target '#{from}'!" unless table_exists?(from)
207
+ drop_table(table_name, if_exists: true)
208
+
209
+ # choose best strategy
210
+ if drop_backup
211
+ rename_table(from, table_name, timeout: timeout)
212
+ else
213
+ clone_table(from, table_name)
214
+ end
215
+
216
+ # open, if provided
217
+ open_table(from) if open
218
+ end
219
+
171
220
  # renames a table (index) by executing multiple steps:
172
221
  # - clone table
173
222
  # - wait for 'green' state
@@ -178,11 +227,11 @@ module ActiveRecord
178
227
  # @param [String] target_name
179
228
  # @param [String (frozen)] timeout (default: '30s')
180
229
  # @param [Hash] options - additional 'clone' options (like settings, alias, ...)
181
- def rename_table(table_name, target_name, timeout: '30s', **options)
230
+ def rename_table(table_name, target_name, timeout: nil, **options)
182
231
  schema_cache.clear_data_source_cache!(table_name)
183
232
 
184
233
  clone_table(table_name, target_name, **options)
185
- cluster_health(index: target_name, wait_for_status: 'green', timeout: timeout)
234
+ cluster_health(index: target_name, wait_for_status: 'green', timeout: timeout.presence || '30s')
186
235
  drop_table(table_name)
187
236
  end
188
237
 
@@ -255,6 +304,15 @@ module ActiveRecord
255
304
  definition.exec!
256
305
  end
257
306
 
307
+ # Copies documents from a source to a destination.
308
+ # @param [String] table_name
309
+ # @param [String] target_name
310
+ # @param [Hash] options
311
+ # @return [Hash] reindex stats
312
+ def reindex_table(table_name, target_name, **options)
313
+ api(:core, :reindex, { body: { source: { index: table_name }, dest: { index: target_name } } }.merge(options), 'REINDEX TABLE')
314
+ end
315
+
258
316
  # -- mapping -------------------------------------------------------------------------------------------------
259
317
 
260
318
  def add_mapping(table_name, name, type, **options, &block)
@@ -0,0 +1,54 @@
1
+ # frozen_string_literal: true
2
+
3
+ module ActiveRecord
4
+ module ConnectionAdapters
5
+ module Elasticsearch
6
+ module Transactions
7
+ extend ActiveSupport::Concern
8
+
9
+ def transaction(*)
10
+ # since ActiveRecord does not have any configuration option to support transactions,
11
+ # this will always be false
12
+ # return super if supports_transactions?
13
+ #
14
+ # So, transactions are silently swallowed...
15
+ yield
16
+ end
17
+
18
+ # Begins the transaction (and turns off auto-committing).
19
+ def begin_db_transaction(*)
20
+ _throw_transaction_exception!(:begin_db_transaction)
21
+ end
22
+
23
+ # Commits the transaction (and turns on auto-committing).
24
+ def commit_db_transaction(*)
25
+ _throw_transaction_exception!(:commit_db_transaction)
26
+ end
27
+
28
+ # rollback transaction
29
+ def exec_rollback_db_transaction(*)
30
+ _throw_transaction_exception!(:exec_rollback_db_transaction)
31
+ end
32
+
33
+ def create_savepoint(*)
34
+ _throw_transaction_exception!(:create_savepoint)
35
+ end
36
+
37
+ def exec_rollback_to_savepoint(*)
38
+ _throw_transaction_exception!(:exec_rollback_to_savepoint)
39
+ end
40
+
41
+ def release_savepoint(*)
42
+ _throw_transaction_exception!(:release_savepoint)
43
+ end
44
+
45
+ private
46
+
47
+ def _throw_transaction_exception!(method_name)
48
+ return unless ElasticsearchRecord.error_on_transaction
49
+ raise NotImplementedError, "'##{method_name}' is not supported by Elasticsearch.\nTry to prevent transactions or set the 'ElasticsearchRecord.error_on_transaction' to false!"
50
+ end
51
+ end
52
+ end
53
+ end
54
+ end
@@ -3,13 +3,6 @@
3
3
  module ActiveRecord
4
4
  module ConnectionAdapters
5
5
  module Elasticsearch
6
-
7
- class UnsupportedImplementationError < StandardError
8
- def initialize(method_name)
9
- super "Unsupported implementation of method: #{method_name}."
10
- end
11
- end
12
-
13
6
  module UnsupportedImplementation
14
7
  extend ActiveSupport::Concern
15
8
 
@@ -13,6 +13,7 @@ require 'active_record/connection_adapters/elasticsearch/schema_dumper'
13
13
  require 'active_record/connection_adapters/elasticsearch/schema_statements'
14
14
  require 'active_record/connection_adapters/elasticsearch/type'
15
15
  require 'active_record/connection_adapters/elasticsearch/table_statements'
16
+ require 'active_record/connection_adapters/elasticsearch/transactions'
16
17
 
17
18
  require 'arel/visitors/elasticsearch'
18
19
  require 'arel/collectors/elasticsearch_query'
@@ -25,6 +26,12 @@ module ActiveRecord # :nodoc:
25
26
  def elasticsearch_connection(config)
26
27
  config = config.symbolize_keys
27
28
 
29
+ # move 'username' to 'user'
30
+ config[:user] = config.delete(:username) if config[:username]
31
+
32
+ # append 'port' to 'host'
33
+ config[:host] += ":#{config.delete(:port)}" if config[:port] && config[:host]
34
+
28
35
  # move 'host' to 'hosts'
29
36
  config[:hosts] = config.delete(:host) if config[:host]
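The rewrites above can be illustrated with made-up values (not taken from the gem):

```ruby
config = { host: 'localhost', port: 9200, username: 'elastic' }
# after the normalization above, the client effectively receives:
# { hosts: 'localhost:9200', user: 'elastic' }
```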
30
37
 
@@ -45,7 +52,7 @@ module ActiveRecord # :nodoc:
45
52
 
46
53
  # defines the Elasticsearch 'base' structure, which is always included but cannot be resolved through mappings ...
47
54
  BASE_STRUCTURE = [
48
- { 'name' => '_id', 'type' => 'keyword', 'virtual' => true, 'enabled' => true, 'meta' => { 'primary_key' => 'true' } },
55
+ { 'name' => '_id', 'type' => 'keyword', 'meta' => { 'primary_key' => 'true' } },
49
56
  { 'name' => '_index', 'type' => 'keyword', 'virtual' => true },
50
57
  { 'name' => '_score', 'type' => 'float', 'virtual' => true },
51
58
  { 'name' => '_type', 'type' => 'keyword', 'virtual' => true },
@@ -57,6 +64,7 @@ module ActiveRecord # :nodoc:
57
64
  include Elasticsearch::DatabaseStatements
58
65
  include Elasticsearch::SchemaStatements
59
66
  include Elasticsearch::TableStatements
67
+ include Elasticsearch::Transactions
60
68
 
61
69
  class << self
62
70
  def base_structure_keys
@@ -69,7 +77,7 @@ module ActiveRecord # :nodoc:
69
77
  client.ping unless config[:ping] == false
70
78
  client
71
79
  rescue ::Elastic::Transport::Transport::Errors::Unauthorized
72
- raise ActiveRecord::DatabaseConnectionError.username_error(config[:username])
80
+ raise ActiveRecord::DatabaseConnectionError.username_error(config[:user])
73
81
  rescue ::Elastic::Transport::Transport::ServerError => error
74
82
  raise ::ActiveRecord::ConnectionNotEstablished, error.message
75
83
  end
@@ -135,7 +143,7 @@ module ActiveRecord # :nodoc:
135
143
 
136
144
  # define native types - which will be used for schema-dumping
137
145
  NATIVE_DATABASE_TYPES = {
138
- primary_key: { name: 'long' },
146
+ primary_key: { name: 'long' }, # maybe this has to be changed to 'keyword'
139
147
  string: { name: 'keyword' },
140
148
  blob: { name: 'binary' },
141
149
  datetime: { name: 'date' },
@@ -172,6 +180,12 @@ module ActiveRecord # :nodoc:
172
180
  @config[:migrations_paths] || ['db/migrate_elasticsearch']
173
181
  end
174
182
 
183
+ # Does this adapter support transactions in general?
184
+ # HINT: This is *NOT* an official setting and is only introduced by ElasticsearchRecord
185
+ def supports_transactions?
186
+ false
187
+ end
188
+
175
189
  # Does this adapter support explain?
176
190
  def supports_explain?
177
191
  false
@@ -28,6 +28,9 @@ module Arel # :nodoc: all
28
28
  when :refresh
29
29
  # change the refresh state
30
30
  @refresh = args[0]
31
+ when :timeout
32
+ # change the timeout
33
+ @timeout = args[0]
31
34
  when :index
32
35
  # change the index name
33
36
  @index = args[0]
@@ -138,12 +138,11 @@ module Arel # :nodoc: all
138
138
 
139
139
  # CUSTOM node by elasticsearch_record
140
140
  def visit_Query(o)
141
- # in some cases we don't have a kind, but where conditions.
142
- # in this case we force the kind as +:bool+.
143
- kind = :bool if o.wheres.present? && o.kind.blank?
144
-
145
- # resolve kind, if not already set
146
- kind ||= o.kind.present? ? visit(o.kind.expr) : nil
141
+ # resolves the query kind.
142
+ # PLEASE NOTE: in some cases there is no kind, but there are existing +where+ conditions.
143
+ # This will then be treated as +:bool+.
144
+ kind = o.kind.present? ? visit(o.kind.expr).presence : nil
145
+ kind ||= :bool if o.wheres.present?
147
146
 
148
147
  # check for existing kind - we cannot create a node if we don't have any kind
149
148
  return unless kind
@@ -423,6 +422,7 @@ module Arel # :nodoc: all
423
422
  o
424
423
  end
425
424
 
425
+ # alias for RAW returns
426
426
  alias :visit_Integer :visit_Struct_Raw
427
427
  alias :visit_Symbol :visit_Struct_Raw
428
428
  alias :visit_Hash :visit_Struct_Raw
@@ -430,7 +430,6 @@ module Arel # :nodoc: all
430
430
  alias :visit_String :visit_Struct_Raw
431
431
  alias :visit_Arel_Nodes_SqlLiteral :visit_Struct_Raw
432
432
 
433
-
434
433
  # used by insert / update statements.
435
434
  # does not claim / assign any values!
436
435
  # returns a Hash of key => value pairs
@@ -453,6 +452,7 @@ module Arel # :nodoc: all
453
452
  o.name
454
453
  end
455
454
 
455
+ # alias for ATTRIBUTE returns
456
456
  alias :visit_Arel_Attributes_Attribute :visit_Struct_Attribute
457
457
  alias :visit_Arel_Nodes_UnqualifiedColumn :visit_Struct_Attribute
458
458
  alias :visit_ActiveModel_Attribute_FromUser :visit_Struct_Attribute
@@ -473,6 +473,7 @@ module Arel # :nodoc: all
473
473
  o.value
474
474
  end
475
475
 
476
+ # alias for BIND returns
476
477
  alias :visit_ActiveModel_Attribute :visit_Struct_BindValue
477
478
  alias :visit_ActiveRecord_Relation_QueryAttribute :visit_Struct_BindValue
478
479
 
@@ -484,6 +485,7 @@ module Arel # :nodoc: all
484
485
  collect(o)
485
486
  end
486
487
 
488
+ # alias for ARRAY returns
487
489
  alias :visit_Set :visit_Array
488
490
  end
489
491
  end
@@ -89,8 +89,7 @@ module Arel # :nodoc: all
89
89
  # prepare query
90
90
  claim(:type, ::ElasticsearchRecord::Query::TYPE_INDEX_UPDATE_SETTING)
91
91
 
92
- # special overcomplicated blocks to assign a hash of settings directly to the body...
93
- # todo: refactor this in future versions
92
+ # overcomplicated blocks to assign a hash of settings directly to the body
94
93
  assign(:__query__, {}) do
95
94
  assign(:body, {}) do
96
95
  resolve(o.items, :visit_TableSettingDefinition)
@@ -8,7 +8,7 @@ module ElasticsearchRecord
8
8
  # this through +_read_attribute(:id)+.
9
9
  # To also have the ability of accessing this attribute through the default, this flag can be enabled.
10
10
  # @attribute! Boolean
11
- class_attribute :delegate_id_attribute, instance_writer: false, default: false
11
+ class_attribute :delegate_id_attribute, default: false
12
12
 
13
13
  # Elasticsearch's default value for queries without a +size+ is forced to +10+.
14
14
  # To provide a similar behaviour as SQL, this can be automatically set to the +max_result_window+ value.
@@ -45,7 +45,7 @@ module ElasticsearchRecord
45
45
 
46
46
  # overwrite to provide a Elasticsearch version of returning a 'primary_key' was attribute.
47
47
  # Elasticsearch uses the static +_id+ column as primary_key, but also supports an additional +id+ column.
48
- # To provide functionality of returning the +id_Was+ attribute, this method must also support it
48
+ # To provide functionality of returning the +id_was+ attribute, this method must also support it
49
49
  # with enabled +delegate_id_attribute+.
50
50
  def id_was
51
51
  delegate_id_attribute? && has_attribute?('id') ? attribute_was('id') : super
@@ -69,6 +69,19 @@ module ElasticsearchRecord
69
69
  super
70
70
  end
71
71
 
72
+ # resets a possible active +delegate_id_attribute?+ to false during block execution.
73
+ # Unfortunately this is required, since a lot of rails-code forces 'accessors' on the primary_key-field through the
74
+ # +id+-getter & setter methods. This will then fail to set the doc-_id and instead set the +id+-attribute ...
75
+ def undelegate_id_attribute_with(&block)
76
+ return block.call unless self.delegate_id_attribute?
77
+
78
+ self.delegate_id_attribute = false
79
+ result = block.call
80
+ self.delegate_id_attribute = true
81
+
82
+ result
83
+ end
84
+
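A rough sketch of the helper's behaviour (`record` stands for any instance with the flag enabled):

```ruby
record.delegate_id_attribute?          # => true
record.undelegate_id_attribute_with do
  record.delegate_id_attribute?        # => false inside the block
end
record.delegate_id_attribute?          # => true again
```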
72
85
  module PrependClassMethods
73
86
  # returns the table_name.
74
87
  # Has to be prepended to provide automated compatibility to other gems.
@@ -8,8 +8,8 @@ module ElasticsearchRecord
8
8
 
9
9
  module VERSION
10
10
  MAJOR = 1
11
- MINOR = 5
12
- TINY = 2
11
+ MINOR = 6
12
+ TINY = 0
13
13
  PRE = nil
14
14
 
15
15
  STRING = [MAJOR, MINOR, TINY, PRE].compact.join(".")
@@ -46,7 +46,7 @@ module ElasticsearchRecord
46
46
 
47
47
  # final coloring
48
48
  name = color(name, name_color(payload[:name]), true)
49
- query = color(query, gate_color(payload[:gate]), true) if colorize_logging
49
+ query = color(query, gate_color(payload[:gate], payload[:name]), true) if colorize_logging
50
50
 
51
51
  debug " #{name} #{query.presence || '-/-'}"
52
52
  end
@@ -61,7 +61,7 @@ module ElasticsearchRecord
61
61
  end
62
62
  end
63
63
 
64
- def gate_color(gate)
64
+ def gate_color(gate, name)
65
65
  case gate
66
66
  # SELECTS
67
67
  when 'core.get', 'core.mget', 'core.search', 'core.msearch', 'core.count', 'core.exists', 'sql.query'
@@ -77,7 +77,11 @@ module ElasticsearchRecord
77
77
  YELLOW
78
78
  # MIXINS
79
79
  when /indices\.\w+/, 'core.bulk', 'core.index'
80
- WHITE
80
+ if name.end_with?('Pit Delete')
81
+ RED
82
+ else
83
+ WHITE
84
+ end
81
85
  else
82
86
  MAGENTA
83
87
  end
@@ -8,9 +8,6 @@ module ElasticsearchRecord
8
8
  @klass = klass
9
9
  end
10
10
 
11
- # undelegated schema methods: clone rename create
12
- # those should not be quick-accessible, since they might end in heavily broken index
13
-
14
11
  # delegated dangerous methods (created with exclamation mark)
15
12
  # not able to provide individual arguments - always the defaults will be used!
16
13
  #
@@ -26,6 +23,21 @@ module ElasticsearchRecord
26
23
  end
27
24
  end
28
25
 
26
+ # delegated dangerous methods with args
27
+ #
28
+ # @example
29
+ # create!(:new_table_name, settings: , mappings:, alias: , ...)
30
+ # clone!(:new_table_name)
31
+ # rename!(:new_table_name)
32
+ # backup!(to: :backup_name)
33
+ # restore!(from: :backup_name)
34
+ # reindex!(:new_table_name)
35
+ %w(create clone rename backup restore reindex).each do |method|
36
+ define_method("#{method}!") do |*args|
37
+ _connection.send("#{method}_table", _index_name, *args)
38
+ end
39
+ end
40
+
29
41
  # delegated dangerous methods with confirm parameter (created with exclamation mark)
30
42
  # a exception will be raised, if +confirm:true+ is missing.
31
43
  #
@@ -146,12 +158,51 @@ module ElasticsearchRecord
146
158
  # Shortcut for meta_exists
147
159
  # @return [Boolean]
148
160
 
161
+ # @!method create!(force: false, copy_from: nil, if_not_exists: false, **options)
162
+ # Shortcut for create_table
163
+ # @param [Boolean] force
164
+ # @param [nil, String] copy_from
165
+ # @param [Hash] options
166
+ # @return [Boolean] acknowledged status
167
+
168
+ # @!method clone!(target_name, **options)
169
+ # Shortcut for clone_table
170
+ # @param [String] target_name
171
+ # @param [Hash] options
172
+ # @return [Boolean]
173
+
174
+ # @!method rename!(target_name, timeout: nil, **options)
175
+ # Shortcut for rename_table
176
+ # @param [String] target_name
177
+ # @param [String (frozen)] timeout
178
+ # @param [Hash] options
179
+
180
+ # @!method backup!(to: nil, close: true)
181
+ # Shortcut for backup_table
182
+ # @param [String] to
183
+ # @param [Boolean] close
184
+ # @return [String] backup_name
185
+
186
+ # @!method restore!(from:, timeout: nil, open: true, drop_backup: false)
187
+ # Shortcut for restore_table
188
+ # @param [String] from
189
+ # @param [String (frozen)] timeout
190
+ # @param [Boolean] open
191
+ # @return [Boolean] acknowledged status
192
+
193
+ # @!method reindex!(target_name, **options)
194
+ # Shortcut for reindex_table
195
+ # @param [String] target_name
196
+ # @param [Hash] options
197
+ # @return [Hash] reindex stats
198
+
149
199
  # fast insert/update data.
200
+ # IMPORTANT: Any 'doc'-id must be provided with an underscore '_' ( +:_id+ )
150
201
  #
151
202
  # @example
152
203
  # index([{name: 'Hans', age: 34}, {name: 'Peter', age: 22}])
153
204
  #
154
- # index({id: 5, name: 'Georg', age: 87})
205
+ # index({_id: 5, name: 'Georg', age: 87})
155
206
  #
156
207
  # @param [Array<Hash>,Hash] data
157
208
  # @param [Hash] options
@@ -160,6 +211,7 @@ module ElasticsearchRecord
160
211
  end
161
212
 
162
213
  # fast insert new data.
214
+ # IMPORTANT: Any 'doc'-id must be provided with an underscore '_' ( +:_id+ )
163
215
  #
164
216
  # @example
165
217
  # insert([{name: 'Hans', age: 34}, {name: 'Peter', age: 22}])
@@ -173,11 +225,12 @@ module ElasticsearchRecord
173
225
  end
174
226
 
175
227
  # fast update existing data.
228
+ # IMPORTANT: Any 'doc'-id must be provided with an underscore '_' ( +:_id+ )
176
229
  #
177
230
  # @example
178
- # update([{id: 1, name: 'Hansi'}, {id: 2, name: 'Peter Parker', age: 42}])
231
+ # update([{_id: 1, name: 'Hansi'}, {_id: 2, name: 'Peter Parker', age: 42}])
179
232
  #
180
- # update({id: 3, name: 'Georg McCain'})
233
+ # update({_id: 3, name: 'Georg McCain'})
181
234
  #
182
235
  # @param [Array<Hash>,Hash] data
183
236
  # @param [Hash] options
@@ -186,13 +239,14 @@ module ElasticsearchRecord
186
239
  end
187
240
 
188
241
  # fast delete data.
242
+ # IMPORTANT: Any 'doc'-id must be provided with an underscore '_' ( +:_id+ )
189
243
  #
190
244
  # @example
191
245
  # delete([1,2,3,5])
192
246
  #
193
247
  # delete(3)
194
248
  #
195
- # delete({id: 2})
249
+ # delete({_id: 2})
196
250
  #
197
251
  # @param [Array<Hash>,Hash] data
198
252
  # @param [Hash] options
@@ -202,12 +256,12 @@ module ElasticsearchRecord
202
256
  if data[0].is_a?(Hash)
203
257
  bulk(data, :delete, **options)
204
258
  else
205
- bulk(data.map { |id| { id: id } }, :delete, **options)
259
+ bulk(data.map { |id| { _id: id } }, :delete, **options)
206
260
  end
207
261
  end
208
262
 
209
263
  # bulk handle provided data (single Hash or multiple Array<Hash>).
210
- # @param [Hash,Array<Hash>] data - the data to insert/update/delete ...
264
+ # @param [Hash,Array<Hash<Symbol=>Object>>] data - the data to insert/update/delete ...
211
265
  # @param [Symbol] operation
212
266
  # @param [Boolean, Symbol] refresh
213
267
  def bulk(data, operation = :index, refresh: true, **options)
@@ -215,7 +269,11 @@ module ElasticsearchRecord
215
269
 
216
270
  _connection.api(:core, :bulk, {
217
271
  index: _index_name,
218
- body: data.map { |item| { operation => { _id: item[:id], data: item.except(:id) } } },
272
+ body: if operation == :update
273
+ data.map { |item| { operation => { _id: (item[:_id].presence || item['_id']), data: { doc: item.except(:_id, '_id') } } } }
274
+ else
275
+ data.map { |item| { operation => { _id: (item[:_id].presence || item['_id']), data: item.except(:_id, '_id') } } }
276
+ end,
219
277
  refresh: refresh
220
278
  }, "BULK #{operation.to_s.upcase}", **options)
221
279
  end
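A hedged usage sketch against the api from the README (field names are arbitrary):

```ruby
# ids are passed as :_id; update payloads get wrapped into the required 'doc' node internally
SearchUser.api.bulk([{ _id: 1, name: 'Hansi' }, { _id: 2, age: 42 }], :update)

# plain ids are mapped to { _id: ... } hashes before the delete operation
SearchUser.api.delete([1, 2, 3])
```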
@@ -52,11 +52,8 @@ module ElasticsearchRecord
52
52
  # @return [Array<String>]
53
53
  def searchable_column_names
54
54
  @searchable_column_names ||= columns.select(&:enabled?).reduce([]) { |m, column|
55
- m << column.name
56
- m += column.field_names
57
- m += column.property_names
58
- m.uniq
59
- }
55
+ m + [column.name] + column.field_names + column.property_names
56
+ }.uniq
60
57
  end
61
58
 
62
59
  # clears schema-related instance variables.
@@ -11,7 +11,7 @@ module ElasticsearchRecord
11
11
  # values is not a "key=>values"-Hash, but a +ActiveModel::Attribute+ - so the casted values gets resolved here
12
12
  values = values.transform_values(&:value)
13
13
 
14
- # resolve & update a auto_increment value
14
+ # resolve & update an auto_increment value, if configured
15
15
  _insert_with_auto_increment(values) do |arguments|
16
16
  # build new query
17
17
  query = ElasticsearchRecord::Query.new(
@@ -61,21 +61,20 @@ module ElasticsearchRecord
61
61
 
62
62
  private
63
63
 
64
- # WARNING: BETA!!!
65
64
  # Resolves the +auto_increment+ status from the tables +_meta+ attributes.
66
65
  def _insert_with_auto_increment(values)
67
- # check, if the primary_key's values is provided.
68
- # so, no need to resolve a +auto_increment+ value, but provide
69
- if values[self.primary_key].present?
70
- # resolve id from values
71
- id = values[self.primary_key]
72
-
66
+ # check, if the primary_key's value is provided.
67
+ # so, no need to resolve a +auto_increment+ value, but provide the id directly
68
+ if (id = values[self.primary_key]).present?
73
69
  yield({id: id})
74
70
  elsif auto_increment?
71
+ # future increments: uuid (+uuidv6 ?), hex, radix(2-36), integer
72
+ # allocated through: primary_key_type
73
+
75
74
  ids = [
76
- # we try to resolve the current-auto-increment value from the tables meta
75
+ # try to resolve the current-auto-increment value from the tables meta
77
76
  connection.table_metas(self.table_name).dig('auto_increment').to_i + 1,
78
- # for secure reasons, we also resolve the current maximum value for the primary key
77
+ # for secure reasons: also resolve the current maximum value for the primary key
79
78
  self.unscoped.all.maximum(self.primary_key).to_i + 1
80
79
  ]
81
80
 
@@ -92,5 +91,14 @@ module ElasticsearchRecord
92
91
  end
93
92
  end
94
93
  end
94
+
95
+ # overwrite to provide an Elasticsearch version:
96
+ # Creates a record with values matching those of the instance attributes
97
+ # and returns its id.
98
+ def _create_record(*args)
99
+ undelegate_id_attribute_with do
100
+ super
101
+ end
102
+ end
95
103
  end
96
104
  end
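A rough sketch, assuming the `_id`-attribute can be assigned directly as the related CHANGELOG fixes suggest (not taken from the gem's docs):

```ruby
# provide the document _id manually instead of letting Elasticsearch generate one
SearchUser.create(_id: 'user-42', name: 'Hans', age: 34)
```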
@@ -86,9 +86,9 @@ module ElasticsearchRecord
86
86
  # @!attribute Boolean
87
87
  attr_reader :refresh
88
88
 
89
- # defines the query body - in most cases this is a hash
90
- # @!attribute Hash
91
- # attr_reader :body
89
+ # defines the query timeout
90
+ # @!attribute Integer|String
91
+ attr_reader :timeout
92
92
 
93
93
  # defines the query arguments to be passed to the API
94
94
  # @!attribute Hash
@@ -98,11 +98,12 @@ module ElasticsearchRecord
98
98
  # @!attribute Array
99
99
  attr_reader :columns
100
100
 
101
- def initialize(index: nil, type: TYPE_UNDEFINED, status: STATUS_VALID, body: nil, refresh: nil, arguments: {}, columns: [])
101
+ def initialize(index: nil, type: TYPE_UNDEFINED, status: STATUS_VALID, body: nil, refresh: nil, timeout: nil, arguments: {}, columns: [])
102
102
  @index = index
103
103
  @type = type
104
104
  @status = status
105
105
  @refresh = refresh
106
+ @timeout = timeout
106
107
  @body = body
107
108
  @arguments = arguments
108
109
  @columns = columns
@@ -163,6 +164,9 @@ module ElasticsearchRecord
163
164
  # set refresh, if defined (also includes false value)
164
165
  args[:refresh] = self.refresh unless self.refresh.nil?
165
166
 
167
+ # set timeout, if present
168
+ args[:timeout] = self.timeout if self.timeout.present?
169
+
166
170
  args
167
171
  end
168
172
 
@@ -125,6 +125,18 @@ module ElasticsearchRecord
125
125
  self
126
126
  end
127
127
  end
128
+
129
+ # overwrite the original method to provide an Elasticsearch version:
130
+ # checks +#access_id_fielddata?+ to ensure the Elasticsearch cluster allows access to the +_id+ field.
131
+ def reverse_sql_order(order_query)
132
+ if order_query.empty?
133
+ return [table[primary_key].desc] if primary_key != '_id' || klass.connection.access_id_fielddata?
134
+ raise ActiveRecord::IrreversibleOrderError,
135
+ "Relation has no current order and fielddata access on the _id field is disallowed! However, you can re-enable it by updating the dynamic cluster setting: indices.id_field_data.enabled"
136
+ end
137
+
138
+ super
139
+ end
128
140
  end
129
141
  end
130
142
  end
@@ -102,6 +102,16 @@ module ElasticsearchRecord
102
102
  configure!(:__query__, refresh: value)
103
103
  end
104
104
 
105
+ # sets the query's +timeout+ value.
106
+ # @param [Boolean] value (default: true)
107
+ def timeout(value = true)
108
+ spawn.timeout!(value)
109
+ end
110
+
111
+ def timeout!(value = true)
112
+ configure!(:__query__, timeout: value)
113
+ end
114
+
105
115
  # add a whole query 'node' to the query.
106
116
  # @example
107
117
  # query(:bool, {filter: ...})
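A small usage sketch for the new setter (per the `Query#timeout` attribute the value may be a String or Integer):

```ruby
# ask Elasticsearch to stop the search server-side after 3 seconds
SearchUser.where(name: 'hans').timeout('3s').to_a
```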
@@ -184,35 +194,36 @@ module ElasticsearchRecord
184
194
 
185
195
  # creates a condition on the relation.
186
196
  # There are several possibilities to call this method.
197
+ #
187
198
  # @example
188
199
  # # create a simple 'term' condition on the query[:filter] param
189
- # where({name: 'hans'})
190
- # > query[:filter] << { term: { name: 'hans' } }
200
+ # where({name: 'hans'})
201
+ # #> query[:filter] << { term: { name: 'hans' } }
191
202
  #
192
203
  # # create a simple 'terms' condition on the query[:filter] param
193
- # where({name: ['hans','peter']})
194
- # > query[:filter] << { terms: { name: ['hans','peter'] } }
204
+ # where({name: ['hans','peter']})
205
+ # #> query[:filter] << { terms: { name: ['hans','peter'] } }
195
206
  #
196
- # where(:must_not, term: {name: 'horst'})
197
- # where(:query_string, "(new york OR dublin)", fields: ['name','description'])
207
+ # where(:must_not, term: {name: 'horst'})
208
+ # where(:query_string, "(new york OR dublin)", fields: ['name','description'])
198
209
  #
199
210
  # # nested array
200
211
  # where([ [:filter, {...}], [:must_not, {...}]])
201
- def where(*args)
202
- return none if args[0] == :none
203
-
204
- super
205
- end
206
-
212
+ #
213
+ # # invalidate query
214
+ # where(:none)
215
+ #
207
216
  def where!(opts, *rest)
208
217
  # :nodoc:
209
218
  case opts
210
- # check the first provided parameter +opts+ and validate, if this is an alias for "must, must_not, should or filter"
211
- # if true, we expect the rest[0] to be a hash.
212
- # For this correlation we forward this as RAW-data without check & manipulation
213
219
  when Symbol
214
220
  case opts
221
+ when :none
222
+ none!
215
223
  when :filter, :must, :must_not, :should
224
+ # check the first provided parameter +opts+ and validate, if this is an alias for "must, must_not, should or filter".
225
+ # if true, we expect the rest[0] to be a hash.
226
+ # For this correlation we forward this as RAW-data without check & manipulation
216
227
  send("#{opts}!", *rest)
217
228
  else
218
229
  raise ArgumentError, "Unsupported prefix type '#{opts}'. Allowed types are: :filter, :must, :must_not, :should"
@@ -220,9 +231,7 @@ module ElasticsearchRecord
220
231
  when Array
221
232
  # check if this is a nested array of multiple [<kind>,<data>]
222
233
  if opts[0].is_a?(Array)
223
- opts.each { |item|
224
- where!(*item)
225
- }
234
+ opts.each { |item| where!(*item) }
226
235
  else
227
236
  where!(*opts, *rest)
228
237
  end
@@ -90,7 +90,9 @@ module ElasticsearchRecord
90
90
  #
91
91
  # @param [String] keep_alive - how long to keep alive (for each single request) - default: '1m'
92
92
  # @param [Integer] batch_size - how many results per query (default: 1000 - this means at least 10 queries before reaching the +max_result_window+)
93
- def pit_results(keep_alive: '1m', batch_size: 1000)
93
+ # @param [Boolean] ids_only - resolve ids only from results
94
+ # @return [Integer, Array] either returns the results-array (no block provided) or the total amount of results
95
+ def pit_results(keep_alive: '1m', batch_size: 1000, ids_only: false)
94
96
  raise(ArgumentError, "Batch size cannot be above the 'max_result_window' (#{klass.max_result_window}) !") if batch_size > klass.max_result_window
95
97
 
96
98
  # check if limit or offset values where provided
@@ -105,6 +107,9 @@ module ElasticsearchRecord
105
107
  # see @ https://www.elastic.co/guide/en/elasticsearch/reference/current/paginate-search-results.html
106
108
  relation.order!(_shard_doc: :asc) if relation.order_values.empty? && klass.connection.access_shard_doc?
107
109
 
110
+ # resolve ids only
111
+ relation.reselect!('_id') if ids_only
112
+
108
113
  # clear limit & offset
109
114
  relation.offset!(nil).limit!(nil)
110
115
 
@@ -122,10 +127,16 @@ module ElasticsearchRecord
122
127
  # resolve new data until we got all we need
123
128
  loop do
124
129
  # change pit settings & limit (spawn is required, since a +resolve+ will make the relation immutable)
125
- current_response = relation.spawn.configure!(current_pit_hash).limit!(batch_size).resolve('Pit').response
130
+ current_response = relation.spawn.configure!(current_pit_hash).limit!(batch_size).resolve('Pit Results').response
126
131
 
127
132
  # resolve only data from hits->hits[{_source}]
128
- current_results = current_response['hits']['hits'].map { |result| result['_source'].merge('_id' => result['_id']) }
133
+ current_results = if ids_only
134
+ current_response['hits']['hits'].map { |result| result['_id'] }
135
+ # future with helper
136
+ # current_response['hits']['hits'].map.from_hash('_id')
137
+ else
138
+ current_response['hits']['hits'].map { |result| result['_source'].merge('_id' => result['_id']) }
139
+ end
129
140
  current_results_length = current_results.length
130
141
 
131
142
  # check if we reached the required offset
@@ -171,12 +182,38 @@ module ElasticsearchRecord
171
182
  end
172
183
  end
173
184
 
174
- # return results array
175
- results
185
+ # return results array or total value
186
+ if block_given?
187
+ results_total
188
+ else
189
+ results
190
+ end
176
191
  end
177
192
 
178
193
  alias_method :total_results, :pit_results
179
194
 
195
+ # executes a delete query in a +point_in_time+ scope.
196
+ # this makes it possible to delete more docs than the +max_result_window+ (default: 10000) in a batched process.
197
+ # @param [String] keep_alive
198
+ # @param [Integer] batch_size
199
+ # @param [Boolean] refresh - refreshes the index after the delete finished (default: true)
200
+ # @return [Integer] total amount of deleted docs
201
+ def pit_delete(keep_alive: '1m', batch_size: 1000, refresh: true)
202
+ delete_count = select('_id').pit_results(keep_alive: keep_alive, batch_size: batch_size, ids_only: true) do |ids|
203
+ # skip empty results
204
+ next unless ids.any?
205
+
206
+ # delete all IDs, but do not refresh index, yet
207
+ klass.connection.api(:core, :bulk, { index: klass.table_name, body: ids.map { |id| { delete: { _id: id } } }, refresh: false }, "#{klass} Pit Delete")
208
+ end
209
+
210
+ # refresh index
211
+ klass.connection.refresh_table(klass.table_name) if refresh
212
+
213
+ # return total count
214
+ delete_count
215
+ end
216
+
180
217
  # returns the RAW response for the current query
181
218
  # @return [Array]
182
219
  def response
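A sketch of the block form introduced above: with a block, each batch is yielded and the total amount is returned (model and query are placeholders):

```ruby
total = SearchUser.where(age: 22).pit_results(batch_size: 500, ids_only: true) do |ids|
  # process one batch of up to 500 ids here
  ids.each { |id| puts id }
end
```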
@@ -49,7 +49,7 @@ module ElasticsearchRecord
49
49
  end
50
50
 
51
51
  # Returns the RAW +_source+ data from each hit - aka. +rows+.
52
- # PLEASE NOTE: The array will only contain the RAW data from each +_source+ (meta info like '_score' is not included)
52
+ # PLEASE NOTE: The array will only contain the RAW data from each +_source+ (meta info like '_id' or '_score' are not included)
53
53
  # @return [Array]
54
54
  def results
55
55
  return [] unless response['hits']
@@ -55,6 +55,16 @@ module ElasticsearchRecord
55
55
 
56
56
  autoload :ElasticsearchDatabaseTasks, 'elasticsearch_record/tasks/elasticsearch_database_tasks'
57
57
  end
58
+
59
+ ##
60
+ # :singleton-method:
61
+ # Specifies whether an exception should be raised when transactions are used.
62
+ # Since ActiveRecord does not provide any configuration option to disable transaction support and
63
+ # Elasticsearch does **NOT** support transactions, it may be risky to ignore them.
64
+ # By default, transactions are 'silently swallowed' so as not to break any existing applications...
65
+ # However, enabling this flag will break transactional tests ...
66
+ singleton_class.attr_accessor :error_on_transaction
67
+ self.error_on_transaction = false
58
68
  end
59
69
 
60
70
  ActiveSupport.on_load(:active_record) do
metadata CHANGED
@@ -1,14 +1,14 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: elasticsearch_record
3
3
  version: !ruby/object:Gem::Version
4
- version: 1.5.2
4
+ version: 1.6.0
5
5
  platform: ruby
6
6
  authors:
7
7
  - Tobias Gonsior
8
8
  autorequire:
9
9
  bindir: bin
10
10
  cert_chain: []
11
- date: 2023-07-12 00:00:00.000000000 Z
11
+ date: 2023-08-11 00:00:00.000000000 Z
12
12
  dependencies:
13
13
  - !ruby/object:Gem::Dependency
14
14
  name: activerecord
@@ -145,6 +145,7 @@ files:
145
145
  - lib/active_record/connection_adapters/elasticsearch/schema_dumper.rb
146
146
  - lib/active_record/connection_adapters/elasticsearch/schema_statements.rb
147
147
  - lib/active_record/connection_adapters/elasticsearch/table_statements.rb
148
+ - lib/active_record/connection_adapters/elasticsearch/transactions.rb
148
149
  - lib/active_record/connection_adapters/elasticsearch/type.rb
149
150
  - lib/active_record/connection_adapters/elasticsearch/type/format_string.rb
150
151
  - lib/active_record/connection_adapters/elasticsearch/type/multicast_value.rb