algolia 3.8.2 → 3.10.1

checksums.yaml CHANGED

```diff
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: bbf3a0681721633040ecb0140b17558a30960ea7269bc92b811ee7e3061b0c01
-  data.tar.gz: a6192d821fed7c69849d2c0464235d3c83bf47377ac94d270a459e96849033d8
+  metadata.gz: 6f82021f2d31e75f1397a47f600dff3afd13a8a56abf48cb7262f4ec310e3b5d
+  data.tar.gz: 17579ca61f3d73d5966c80c70d7013e3881194eff166c3935c6c69f929bce150
 SHA512:
-  metadata.gz: 7bfcf91d318232cffaa4af72e6979cb0fd561d4c299c25e497a17c54809749249c787d4741803d43435dbb7362ea16cfd45c8a5c240ae981d9f0e8607ed784cf
-  data.tar.gz: 29b4a760a9a5ae9d6f409cada459e5db1e69e219590aef04ad9ca1e0069578660cb841ee6980c41b1ad086c208f0b6a79f82c07e206dad6e307af02a3028298f
+  metadata.gz: d47e0f3298ab4b2147630956290d964698b257f64a826442dfa45dcc8acff56f521cf37225a34acaad75621a70262086614a093723209897694842b27c96ed98
+  data.tar.gz: 23cac9419c2d97abafccacfa181ec386e068b9330cc367185259dcdb644536139e12161d742e59104460cc16816807525c13c0af8a35f9dc70c1f6df16fdfd05
```
```diff
@@ -21,9 +21,9 @@ jobs:
       - name: Install Ruby
         uses: ruby/setup-ruby@v1
         with:
-          ruby-version: 3.2.2
+          ruby-version: 3.3.6
           bundler-cache: true
 
-      - uses: rubygems/release-gem@612653d273a73bdae1df8453e090060bb4db5f31
+      - uses: rubygems/release-gem@a25424ba2ba8b387abc8ef40807c2c85b96cbe32
         with:
           await-release: false
```
data/CHANGELOG.md CHANGED

```diff
@@ -1,3 +1,29 @@
+## [3.10.1](https://github.com/algolia/algoliasearch-client-ruby/compare/3.10.0...3.10.1)
+
+- [6fb57f9ba](https://github.com/algolia/api-clients-automation/commit/6fb57f9ba) fix(clients): lock version ([#4228](https://github.com/algolia/api-clients-automation/pull/4228)) by [@millotp](https://github.com/millotp/)
+- [3f5ceb540](https://github.com/algolia/api-clients-automation/commit/3f5ceb540) fix(ruby): handle unknown attributes in index_exists ([#4231](https://github.com/algolia/api-clients-automation/pull/4231)) by [@millotp](https://github.com/millotp/)
+- [cd59f445e](https://github.com/algolia/api-clients-automation/commit/cd59f445e) fix(specs): enable watcher for push ([#4229](https://github.com/algolia/api-clients-automation/pull/4229)) by [@shortcuts](https://github.com/shortcuts/)
+- [baf7d6f4d](https://github.com/algolia/api-clients-automation/commit/baf7d6f4d) fix(specs): add `ignoreConjugations` to `AlternativesAsExact` ([#4230](https://github.com/algolia/api-clients-automation/pull/4230)) by [@shortcuts](https://github.com/shortcuts/)
+
+## [3.10.0](https://github.com/algolia/algoliasearch-client-ruby/compare/3.9.0...3.10.0)
+
+- [866d859f8](https://github.com/algolia/api-clients-automation/commit/866d859f8) fix(specs): different summaries for saveObject/addOrUpdate methods ([#4223](https://github.com/algolia/api-clients-automation/pull/4223)) by [@kai687](https://github.com/kai687/)
+- [baf16c689](https://github.com/algolia/api-clients-automation/commit/baf16c689) feat(specs): add `watch` to `pushTask` ingestion ([#4224](https://github.com/algolia/api-clients-automation/pull/4224)) by [@shortcuts](https://github.com/shortcuts/)
+
+## [3.9.0](https://github.com/algolia/algoliasearch-client-ruby/compare/3.8.2...3.9.0)
+
+- [068fdacb5](https://github.com/algolia/api-clients-automation/commit/068fdacb5) feat(specs): add info and link about indexing rate limits ([#4136](https://github.com/algolia/api-clients-automation/pull/4136)) by [@kai687](https://github.com/kai687/)
+- [9e0235697](https://github.com/algolia/api-clients-automation/commit/9e0235697) fix(specs): `nb_api_calls` in `getLogs` response is optional ([#4142](https://github.com/algolia/api-clients-automation/pull/4142)) by [@shortcuts](https://github.com/shortcuts/)
+- [56fd73fb6](https://github.com/algolia/api-clients-automation/commit/56fd73fb6) chore(deps): dependencies 2024-11-25 ([#4145](https://github.com/algolia/api-clients-automation/pull/4145)) by [@algolia-bot](https://github.com/algolia-bot/)
+- [b728c5f25](https://github.com/algolia/api-clients-automation/commit/b728c5f25) fix(specs): `consequence` is required when saving rules ([#4146](https://github.com/algolia/api-clients-automation/pull/4146)) by [@shortcuts](https://github.com/shortcuts/)
+- [afd94fac8](https://github.com/algolia/api-clients-automation/commit/afd94fac8) fix(specs): `saveRule` response type ([#4170](https://github.com/algolia/api-clients-automation/pull/4170)) by [@shortcuts](https://github.com/shortcuts/)
+- [2325c61b8](https://github.com/algolia/api-clients-automation/commit/2325c61b8) feat(clients): allow batch size on objects helper ([#4172](https://github.com/algolia/api-clients-automation/pull/4172)) by [@shortcuts](https://github.com/shortcuts/)
+- [aae74cb38](https://github.com/algolia/api-clients-automation/commit/aae74cb38) fix(specs): remove SFCC source type ([#4190](https://github.com/algolia/api-clients-automation/pull/4190)) by [@millotp](https://github.com/millotp/)
+- [b4809e789](https://github.com/algolia/api-clients-automation/commit/b4809e789) fix(ruby): expose static helper ([#4191](https://github.com/algolia/api-clients-automation/pull/4191)) by [@millotp](https://github.com/millotp/)
+- [254052857](https://github.com/algolia/api-clients-automation/commit/254052857) fix(specs): add sourceType to listTasks ([#4193](https://github.com/algolia/api-clients-automation/pull/4193)) by [@millotp](https://github.com/millotp/)
+- [106d64313](https://github.com/algolia/api-clients-automation/commit/106d64313) feat(generators): allow per-spec timeouts ([#4173](https://github.com/algolia/api-clients-automation/pull/4173)) by [@shortcuts](https://github.com/shortcuts/)
+- [9e1e60f9e](https://github.com/algolia/api-clients-automation/commit/9e1e60f9e) chore(deps): dependencies 2024-12-09 ([#4197](https://github.com/algolia/api-clients-automation/pull/4197)) by [@algolia-bot](https://github.com/algolia-bot/)
+
 ## [3.8.2](https://github.com/algolia/algoliasearch-client-ruby/compare/3.8.1...3.8.2)
 
 - [f97e44ce0](https://github.com/algolia/api-clients-automation/commit/f97e44ce0) fix(cts): add tests for HTML error ([#4097](https://github.com/algolia/api-clients-automation/pull/4097)) by [@millotp](https://github.com/millotp/)
```
data/Gemfile.lock CHANGED

```diff
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    algolia (3.8.2)
+    algolia (3.10.1)
       base64 (>= 0.2.0, < 1)
       faraday (>= 1.0.1, < 3.0)
       faraday-net_http_persistent (>= 0.15, < 3)
@@ -12,21 +12,23 @@ GEM
   specs:
     base64 (0.2.0)
     connection_pool (2.4.1)
-    faraday (2.11.0)
-      faraday-net_http (>= 2.0, < 3.4)
+    faraday (2.12.2)
+      faraday-net_http (>= 2.0, < 3.5)
+      json
       logger
-    faraday-net_http (3.3.0)
-      net-http
-    faraday-net_http_persistent (2.1.0)
+    faraday-net_http (3.4.0)
+      net-http (>= 0.5.0)
+    faraday-net_http_persistent (2.3.0)
       faraday (~> 2.5)
-      net-http-persistent (~> 4.0)
-    logger (1.6.1)
-    net-http (0.4.1)
+      net-http-persistent (>= 4.0.4, < 5)
+    json (2.9.0)
+    logger (1.6.2)
+    net-http (0.6.0)
       uri
-    net-http-persistent (4.0.4)
+    net-http-persistent (4.0.5)
       connection_pool (~> 2.2)
     rake (13.2.1)
-    uri (0.13.1)
+    uri (1.0.2)
 
 PLATFORMS
   aarch64-linux
@@ -34,7 +36,7 @@ PLATFORMS
 
 DEPENDENCIES
   algolia!
-  bundler
+  bundler (>= 2.4.10)
   rake
 
 BUNDLED WITH
```
data/algolia.gemspec CHANGED

```diff
@@ -30,6 +30,6 @@ Gem::Specification.new do |s|
 
   s.add_dependency 'net-http-persistent'
 
-  s.add_development_dependency 'bundler'
+  s.add_development_dependency 'bundler', '>= 2.4.10'
   s.add_development_dependency 'rake'
 end
```
```diff
@@ -21,6 +21,18 @@ module Algolia
         region = nil
       end
 
+      if opts.nil? || opts[:connect_timeout].nil?
+        opts[:connect_timeout] = 2000
+      end
+
+      if opts.nil? || opts[:read_timeout].nil?
+        opts[:read_timeout] = 5000
+      end
+
+      if opts.nil? || opts[:write_timeout].nil?
+        opts[:write_timeout] = 30000
+      end
+
       if !region.nil? && (!region.is_a?(String) || !regions.include?(region))
         raise "`region` must be one of the following: #{regions.join(", ")}"
       end
```
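The hunk above (repeated for several generated clients) fills in millisecond timeout defaults only for keys the caller left unset. A standalone sketch of that merging logic — `with_default_timeouts` is a hypothetical helper name, not part of the gem:

```ruby
# Hypothetical sketch of the default-timeout merging added in 3.10.x.
# Values are in milliseconds, matching the diff above.
DEFAULT_TIMEOUTS = {
  connect_timeout: 2000,   # time to establish the connection
  read_timeout: 5000,      # time to read a response
  write_timeout: 30_000    # time allowed for write (indexing) requests
}.freeze

def with_default_timeouts(opts)
  opts = {} if opts.nil?
  DEFAULT_TIMEOUTS.each do |key, value|
    opts[key] = value if opts[key].nil?  # keep any value the caller set
  end
  opts
end
```

An explicit value such as `read_timeout: 10_000` survives the merge; only missing keys receive defaults.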
```diff
@@ -21,6 +21,18 @@ module Algolia
         region = nil
       end
 
+      if opts.nil? || opts[:connect_timeout].nil?
+        opts[:connect_timeout] = 2000
+      end
+
+      if opts.nil? || opts[:read_timeout].nil?
+        opts[:read_timeout] = 5000
+      end
+
+      if opts.nil? || opts[:write_timeout].nil?
+        opts[:write_timeout] = 30000
+      end
+
       if !region.nil? && (!region.is_a?(String) || !regions.include?(region))
         raise "`region` must be one of the following: #{regions.join(", ")}"
       end
```
```diff
@@ -21,6 +21,18 @@ module Algolia
         region = nil
       end
 
+      if opts.nil? || opts[:connect_timeout].nil?
+        opts[:connect_timeout] = 25000
+      end
+
+      if opts.nil? || opts[:read_timeout].nil?
+        opts[:read_timeout] = 25000
+      end
+
+      if opts.nil? || opts[:write_timeout].nil?
+        opts[:write_timeout] = 25000
+      end
+
       if region.nil? || !region.is_a?(String) || !regions.include?(region)
         raise "`region` is required and must be one of the following: #{regions.join(", ")}"
       end
```
```diff
@@ -1847,6 +1859,7 @@ module Algolia
     # @param action [Array<ActionType>] Actions for filtering the list of tasks.
     # @param enabled [Boolean] Whether to filter the list of tasks by the `enabled` status.
     # @param source_id [Array<String>] Source IDs for filtering the list of tasks.
+    # @param source_type [Array<SourceType>] Filters the tasks with the specified source type.
     # @param destination_id [Array<String>] Destination IDs for filtering the list of tasks.
     # @param trigger_type [Array<TriggerType>] Type of task trigger for filtering the list of tasks.
     # @param sort [TaskSortKeys] Property by which to sort the list of tasks. (default to 'createdAt')
@@ -1859,6 +1872,7 @@ module Algolia
       action = nil,
       enabled = nil,
       source_id = nil,
+      source_type = nil,
       destination_id = nil,
       trigger_type = nil,
       sort = nil,
@@ -1872,6 +1886,7 @@ module Algolia
       query_params[:action] = @api_client.build_collection_param(action, :csv) unless action.nil?
       query_params[:enabled] = enabled unless enabled.nil?
       query_params[:sourceID] = @api_client.build_collection_param(source_id, :csv) unless source_id.nil?
+      query_params[:sourceType] = @api_client.build_collection_param(source_type, :csv) unless source_type.nil?
       unless destination_id.nil?
         query_params[:destinationID] = @api_client.build_collection_param(destination_id, :csv)
       end
@@ -1907,6 +1922,7 @@ module Algolia
     # @param action [Array<ActionType>] Actions for filtering the list of tasks.
     # @param enabled [Boolean] Whether to filter the list of tasks by the `enabled` status.
     # @param source_id [Array<String>] Source IDs for filtering the list of tasks.
+    # @param source_type [Array<SourceType>] Filters the tasks with the specified source type.
     # @param destination_id [Array<String>] Destination IDs for filtering the list of tasks.
     # @param trigger_type [Array<TriggerType>] Type of task trigger for filtering the list of tasks.
     # @param sort [TaskSortKeys] Property by which to sort the list of tasks. (default to 'createdAt')
@@ -1919,6 +1935,7 @@ module Algolia
       action = nil,
       enabled = nil,
       source_id = nil,
+      source_type = nil,
       destination_id = nil,
       trigger_type = nil,
       sort = nil,
@@ -1931,6 +1948,7 @@ module Algolia
       action,
       enabled,
       source_id,
+      source_type,
       destination_id,
       trigger_type,
       sort,
```
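Note that `source_type` is inserted between `source_id` and `destination_id` in the positional argument list, so callers passing arguments positionally must account for the new slot. The CSV serialization used for the new filter can be sketched standalone — `build_collection_param` below is a simplified stand-in for the client's `@api_client.build_collection_param(values, :csv)`, and the source-type values are placeholders:

```ruby
# Simplified stand-in for the client's CSV collection-param helper.
def build_collection_param(values)
  Array(values).join(",")
end

# Mirrors the `query_params[:sourceType] = ... unless source_type.nil?` line above.
query_params = {}
source_type = ["docker", "bigquery"]  # placeholder SourceType values
query_params[:sourceType] = build_collection_param(source_type) unless source_type.nil?
```

When `source_type` is nil the key is simply omitted, preserving the previous request shape.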
```diff
@@ -2114,9 +2132,10 @@ module Algolia
     # - editSettings
     # @param task_id [String] Unique identifier of a task. (required)
     # @param push_task_payload [PushTaskPayload] Request body of a Search API `batch` request that will be pushed in the Connectors pipeline. (required)
+    # @param watch [Boolean] When provided, the push operation will be synchronous and the API will wait for the ingestion to be finished before responding.
     # @param request_options: The request options to send along with the query, they will be merged with the transporter base parameters (headers, query params, timeouts, etc.). (optional)
     # @return [Http::Response] the response
-    def push_task_with_http_info(task_id, push_task_payload, request_options = {})
+    def push_task_with_http_info(task_id, push_task_payload, watch = nil, request_options = {})
       # verify the required parameter 'task_id' is set
       if @api_client.config.client_side_validation && task_id.nil?
         raise ArgumentError, "Parameter `task_id` is required when calling `push_task`."
@@ -2128,6 +2147,7 @@ module Algolia
 
       path = "/2/tasks/{taskID}/push".sub("{" + "taskID" + "}", Transport.encode_uri(task_id.to_s))
       query_params = {}
+      query_params[:watch] = watch unless watch.nil?
       query_params = query_params.merge(request_options[:query_params]) unless request_options[:query_params].nil?
       header_params = {}
       header_params = header_params.merge(request_options[:header_params]) unless request_options[:header_params].nil?
@@ -2153,11 +2173,12 @@ module Algolia
     # - editSettings
     # @param task_id [String] Unique identifier of a task. (required)
     # @param push_task_payload [PushTaskPayload] Request body of a Search API `batch` request that will be pushed in the Connectors pipeline. (required)
+    # @param watch [Boolean] When provided, the push operation will be synchronous and the API will wait for the ingestion to be finished before responding.
     # @param request_options: The request options to send along with the query, they will be merged with the transporter base parameters (headers, query params, timeouts, etc.). (optional)
-    # @return [RunResponse]
-    def push_task(task_id, push_task_payload, request_options = {})
-      response = push_task_with_http_info(task_id, push_task_payload, request_options)
-      @api_client.deserialize(response.body, request_options[:debug_return_type] || "Ingestion::RunResponse")
+    # @return [WatchResponse]
+    def push_task(task_id, push_task_payload, watch = nil, request_options = {})
+      response = push_task_with_http_info(task_id, push_task_payload, watch, request_options)
+      @api_client.deserialize(response.body, request_options[:debug_return_type] || "Ingestion::WatchResponse")
     end
 
     # Runs all tasks linked to a source, only available for Shopify sources. It will create 1 run per task.
```
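The new optional `watch` argument is serialized as a `watch` query parameter only when it is non-nil, so existing callers are unaffected; the return type also changes from `RunResponse` to `WatchResponse`. A standalone sketch of the query-param handling — `push_query_params` is a hypothetical helper, and the commented call shapes use placeholder IDs:

```ruby
# Mirrors the `query_params[:watch] = watch unless watch.nil?` line in the hunk above.
def push_query_params(watch = nil)
  query_params = {}
  query_params[:watch] = watch unless watch.nil?  # false is still sent; only nil is omitted
  query_params
end

# Illustrative call shapes against the 3.10.x client (placeholder IDs, not runnable as-is):
#   client.push_task("my-task-id", payload, true)   # synchronous: waits for ingestion to finish
#   client.push_task("my-task-id", payload)         # asynchronous, as before
```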
```diff
@@ -2636,10 +2657,10 @@ module Algolia
     # - editSettings
     # @param source_id [String] Unique identifier of a source. (required)
     # @param request_options: The request options to send along with the query, they will be merged with the transporter base parameters (headers, query params, timeouts, etc.). (optional)
-    # @return [SourceWatchResponse]
+    # @return [WatchResponse]
     def trigger_docker_source_discover(source_id, request_options = {})
       response = trigger_docker_source_discover_with_http_info(source_id, request_options)
-      @api_client.deserialize(response.body, request_options[:debug_return_type] || "Ingestion::SourceWatchResponse")
+      @api_client.deserialize(response.body, request_options[:debug_return_type] || "Ingestion::WatchResponse")
     end
 
     # Try a transformation before creating it.
@@ -3114,10 +3135,10 @@ module Algolia
     # - editSettings
     # @param source_create [SourceCreate]
     # @param request_options: The request options to send along with the query, they will be merged with the transporter base parameters (headers, query params, timeouts, etc.). (optional)
-    # @return [SourceWatchResponse]
+    # @return [WatchResponse]
     def validate_source(source_create = nil, request_options = {})
       response = validate_source_with_http_info(source_create, request_options)
-      @api_client.deserialize(response.body, request_options[:debug_return_type] || "Ingestion::SourceWatchResponse")
+      @api_client.deserialize(response.body, request_options[:debug_return_type] || "Ingestion::WatchResponse")
     end
 
     # Validates an update of a source payload to ensure it can be created and that the data source can be reached by Algolia.
@@ -3168,10 +3189,10 @@ module Algolia
     # @param source_id [String] Unique identifier of a source. (required)
     # @param source_update [SourceUpdate] (required)
     # @param request_options: The request options to send along with the query, they will be merged with the transporter base parameters (headers, query params, timeouts, etc.). (optional)
-    # @return [SourceWatchResponse]
+    # @return [WatchResponse]
     def validate_source_before_update(source_id, source_update, request_options = {})
       response = validate_source_before_update_with_http_info(source_id, source_update, request_options)
-      @api_client.deserialize(response.body, request_options[:debug_return_type] || "Ingestion::SourceWatchResponse")
+      @api_client.deserialize(response.body, request_options[:debug_return_type] || "Ingestion::WatchResponse")
     end
 
   end
```
```diff
@@ -21,6 +21,18 @@ module Algolia
         region = nil
       end
 
+      if opts.nil? || opts[:connect_timeout].nil?
+        opts[:connect_timeout] = 2000
+      end
+
+      if opts.nil? || opts[:read_timeout].nil?
+        opts[:read_timeout] = 5000
+      end
+
+      if opts.nil? || opts[:write_timeout].nil?
+        opts[:write_timeout] = 30000
+      end
+
       if !region.nil? && (!region.is_a?(String) || !regions.include?(region))
         raise "`region` must be one of the following: #{regions.join(", ")}"
       end
```
```diff
@@ -21,6 +21,18 @@ module Algolia
         region = nil
       end
 
+      if opts.nil? || opts[:connect_timeout].nil?
+        opts[:connect_timeout] = 2000
+      end
+
+      if opts.nil? || opts[:read_timeout].nil?
+        opts[:read_timeout] = 5000
+      end
+
+      if opts.nil? || opts[:write_timeout].nil?
+        opts[:write_timeout] = 30000
+      end
+
       if region.nil? || !region.is_a?(String) || !regions.include?(region)
         raise "`region` is required and must be one of the following: #{regions.join(", ")}"
       end
```
```diff
@@ -21,6 +21,18 @@ module Algolia
         region = nil
       end
 
+      if opts.nil? || opts[:connect_timeout].nil?
+        opts[:connect_timeout] = 2000
+      end
+
+      if opts.nil? || opts[:read_timeout].nil?
+        opts[:read_timeout] = 5000
+      end
+
+      if opts.nil? || opts[:write_timeout].nil?
+        opts[:write_timeout] = 30000
+      end
+
       if region.nil? || !region.is_a?(String) || !regions.include?(region)
         raise "`region` is required and must be one of the following: #{regions.join(", ")}"
       end
```
```diff
@@ -95,7 +95,7 @@ module Algolia
       @api_client.deserialize(response.body, request_options[:debug_return_type] || "Search::AddApiKeyResponse")
     end
 
-    # If a record with the specified object ID exists, the existing record is replaced. Otherwise, a new record is added to the index. To update _some_ attributes of an existing record, use the [`partial` operation](#tag/Records/operation/partialUpdateObject) instead. To add, update, or replace multiple records, use the [`batch` operation](#tag/Records/operation/batch).
+    # If a record with the specified object ID exists, the existing record is replaced. Otherwise, a new record is added to the index. If you want to use auto-generated object IDs, use the [`saveObject` operation](#tag/Records/operation/saveObject). To update _some_ attributes of an existing record, use the [`partial` operation](#tag/Records/operation/partialUpdateObject) instead. To add, update, or replace multiple records, use the [`batch` operation](#tag/Records/operation/batch).
     #
     # Required API Key ACLs:
     # - addObject
@@ -140,7 +140,7 @@ module Algolia
       @api_client.call_api(:PUT, path, new_options)
     end
 
-    # If a record with the specified object ID exists, the existing record is replaced. Otherwise, a new record is added to the index. To update _some_ attributes of an existing record, use the [`partial` operation](#tag/Records/operation/partialUpdateObject) instead. To add, update, or replace multiple records, use the [`batch` operation](#tag/Records/operation/batch).
+    # If a record with the specified object ID exists, the existing record is replaced. Otherwise, a new record is added to the index. If you want to use auto-generated object IDs, use the [`saveObject` operation](#tag/Records/operation/saveObject). To update _some_ attributes of an existing record, use the [`partial` operation](#tag/Records/operation/partialUpdateObject) instead. To add, update, or replace multiple records, use the [`batch` operation](#tag/Records/operation/batch).
     #
     # Required API Key ACLs:
     # - addObject
```
```diff
@@ -252,7 +252,7 @@ module Algolia
       @api_client.deserialize(response.body, request_options[:debug_return_type] || "Search::CreatedAtResponse")
     end
 
-    # Adds, updates, or deletes records in one index with a single API request. Batching index updates reduces latency and increases data integrity. - Actions are applied in the order they're specified. - Actions are equivalent to the individual API requests of the same name.
+    # Adds, updates, or deletes records in one index with a single API request. Batching index updates reduces latency and increases data integrity. - Actions are applied in the order they're specified. - Actions are equivalent to the individual API requests of the same name. This operation is subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
 
     # @param index_name [String] Name of the index on which to perform the operation. (required)
     # @param batch_write_params [BatchWriteParams] (required)
@@ -287,7 +287,7 @@ module Algolia
       @api_client.call_api(:POST, path, new_options)
     end
 
-    # Adds, updates, or deletes records in one index with a single API request. Batching index updates reduces latency and increases data integrity. - Actions are applied in the order they're specified. - Actions are equivalent to the individual API requests of the same name.
+    # Adds, updates, or deletes records in one index with a single API request. Batching index updates reduces latency and increases data integrity. - Actions are applied in the order they're specified. - Actions are equivalent to the individual API requests of the same name. This operation is subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
 
     # @param index_name [String] Name of the index on which to perform the operation. (required)
     # @param batch_write_params [BatchWriteParams] (required)
```
```diff
@@ -458,7 +458,7 @@ module Algolia
       @api_client.deserialize(response.body, request_options[:debug_return_type] || "Search::BrowseResponse")
     end
 
-    # Deletes only the records from an index while keeping settings, synonyms, and rules.
+    # Deletes only the records from an index while keeping settings, synonyms, and rules. This operation is resource-intensive and subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
     #
     # Required API Key ACLs:
     # - deleteIndex
@@ -490,7 +490,7 @@ module Algolia
       @api_client.call_api(:POST, path, new_options)
     end
 
-    # Deletes only the records from an index while keeping settings, synonyms, and rules.
+    # Deletes only the records from an index while keeping settings, synonyms, and rules. This operation is resource-intensive and subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
     #
     # Required API Key ACLs:
     # - deleteIndex
```
```diff
@@ -816,7 +816,7 @@ module Algolia
       @api_client.deserialize(response.body, request_options[:debug_return_type] || "Search::DeleteApiKeyResponse")
     end
 
-    # This operation doesn't accept empty queries or filters. It's more efficient to get a list of object IDs with the [`browse` operation](#tag/Search/operation/browse), and then delete the records using the [`batch` operation](#tag/Records/operation/batch).
+    # This operation doesn't accept empty filters. This operation is resource-intensive. You should only use it if you can't get the object IDs of the records you want to delete. It's more efficient to get a list of object IDs with the [`browse` operation](#tag/Search/operation/browse), and then delete the records using the [`batch` operation](#tag/Records/operation/batch). This operation is subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
     #
     # Required API Key ACLs:
     # - deleteIndex
@@ -853,7 +853,7 @@ module Algolia
       @api_client.call_api(:POST, path, new_options)
     end
 
-    # This operation doesn't accept empty queries or filters. It's more efficient to get a list of object IDs with the [`browse` operation](#tag/Search/operation/browse), and then delete the records using the [`batch` operation](#tag/Records/operation/batch).
+    # This operation doesn't accept empty filters. This operation is resource-intensive. You should only use it if you can't get the object IDs of the records you want to delete. It's more efficient to get a list of object IDs with the [`browse` operation](#tag/Search/operation/browse), and then delete the records using the [`batch` operation](#tag/Records/operation/batch). This operation is subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
     #
     # Required API Key ACLs:
     # - deleteIndex
```
```diff
@@ -1951,7 +1951,7 @@ module Algolia
       @api_client.deserialize(response.body, request_options[:debug_return_type] || "Search::ListUserIdsResponse")
     end
 
-    # Adds, updates, or deletes records in multiple indices with a single API request. - Actions are applied in the order they are specified. - Actions are equivalent to the individual API requests of the same name.
+    # Adds, updates, or deletes records in multiple indices with a single API request. - Actions are applied in the order they are specified. - Actions are equivalent to the individual API requests of the same name. This operation is subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
 
     # @param batch_params [BatchParams] (required)
     # @param request_options: The request options to send along with the query, they will be merged with the transporter base parameters (headers, query params, timeouts, etc.). (optional)
@@ -1981,7 +1981,7 @@ module Algolia
       @api_client.call_api(:POST, path, new_options)
     end
 
-    # Adds, updates, or deletes records in multiple indices with a single API request. - Actions are applied in the order they are specified. - Actions are equivalent to the individual API requests of the same name.
+    # Adds, updates, or deletes records in multiple indices with a single API request. - Actions are applied in the order they are specified. - Actions are equivalent to the individual API requests of the same name. This operation is subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
 
     # @param batch_params [BatchParams] (required)
     # @param request_options: The request options to send along with the query, they will be merged with the transporter base parameters (headers, query params, timeouts, etc.). (optional)
```
@@ -1991,7 +1991,7 @@ module Algolia
1991
1991
  @api_client.deserialize(response.body, request_options[:debug_return_type] || "Search::MultipleBatchResponse")
1992
1992
  end
1993
1993
 
1994
- # Copies or moves (renames) an index within the same Algolia application. - Existing destination indices are overwritten, except for their analytics data. - If the destination index doesn't exist yet, it'll be created. **Copy** - Copying a source index that doesn't exist creates a new index with 0 records and default settings. - The API keys of the source index are merged with the existing keys in the destination index. - You can't copy the `enableReRanking`, `mode`, and `replicas` settings. - You can't copy to a destination index that already has replicas. - Be aware of the [size limits](https://www.algolia.com/doc/guides/scaling/algolia-service-limits/#application-record-and-index-limits). - Related guide: [Copy indices](https://www.algolia.com/doc/guides/sending-and-managing-data/manage-indices-and-apps/manage-indices/how-to/copy-indices/) **Move** - Moving a source index that doesn't exist is ignored without returning an error. - When moving an index, the analytics data keeps its original name, and a new set of analytics data is started for the new name. To access the original analytics in the dashboard, create an index with the original name. - If the destination index has replicas, moving will overwrite the existing index and copy the data to the replica indices. - Related guide: [Move indices](https://www.algolia.com/doc/guides/sending-and-managing-data/manage-indices-and-apps/manage-indices/how-to/move-indices/).
1994
+ # Copies or moves (renames) an index within the same Algolia application. - Existing destination indices are overwritten, except for their analytics data. - If the destination index doesn't exist yet, it'll be created. - This operation is resource-intensive. **Copy** - Copying a source index that doesn't exist creates a new index with 0 records and default settings. - The API keys of the source index are merged with the existing keys in the destination index. - You can't copy the `enableReRanking`, `mode`, and `replicas` settings. - You can't copy to a destination index that already has replicas. - Be aware of the [size limits](https://www.algolia.com/doc/guides/scaling/algolia-service-limits/#application-record-and-index-limits). - Related guide: [Copy indices](https://www.algolia.com/doc/guides/sending-and-managing-data/manage-indices-and-apps/manage-indices/how-to/copy-indices/) **Move** - Moving a source index that doesn't exist is ignored without returning an error. - When moving an index, the analytics data keeps its original name, and a new set of analytics data is started for the new name. To access the original analytics in the dashboard, create an index with the original name. - If the destination index has replicas, moving will overwrite the existing index and copy the data to the replica indices. - Related guide: [Move indices](https://www.algolia.com/doc/guides/sending-and-managing-data/manage-indices-and-apps/manage-indices/how-to/move-indices/). This operation is subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
  #
  # Required API Key ACLs:
  # - addObject
@@ -2028,7 +2028,7 @@ module Algolia
  @api_client.call_api(:POST, path, new_options)
  end
 
- # Copies or moves (renames) an index within the same Algolia application. - Existing destination indices are overwritten, except for their analytics data. - If the destination index doesn't exist yet, it'll be created. **Copy** - Copying a source index that doesn't exist creates a new index with 0 records and default settings. - The API keys of the source index are merged with the existing keys in the destination index. - You can't copy the `enableReRanking`, `mode`, and `replicas` settings. - You can't copy to a destination index that already has replicas. - Be aware of the [size limits](https://www.algolia.com/doc/guides/scaling/algolia-service-limits/#application-record-and-index-limits). - Related guide: [Copy indices](https://www.algolia.com/doc/guides/sending-and-managing-data/manage-indices-and-apps/manage-indices/how-to/copy-indices/) **Move** - Moving a source index that doesn't exist is ignored without returning an error. - When moving an index, the analytics data keeps its original name, and a new set of analytics data is started for the new name. To access the original analytics in the dashboard, create an index with the original name. - If the destination index has replicas, moving will overwrite the existing index and copy the data to the replica indices. - Related guide: [Move indices](https://www.algolia.com/doc/guides/sending-and-managing-data/manage-indices-and-apps/manage-indices/how-to/move-indices/).
+ # Copies or moves (renames) an index within the same Algolia application. - Existing destination indices are overwritten, except for their analytics data. - If the destination index doesn't exist yet, it'll be created. - This operation is resource-intensive. **Copy** - Copying a source index that doesn't exist creates a new index with 0 records and default settings. - The API keys of the source index are merged with the existing keys in the destination index. - You can't copy the `enableReRanking`, `mode`, and `replicas` settings. - You can't copy to a destination index that already has replicas. - Be aware of the [size limits](https://www.algolia.com/doc/guides/scaling/algolia-service-limits/#application-record-and-index-limits). - Related guide: [Copy indices](https://www.algolia.com/doc/guides/sending-and-managing-data/manage-indices-and-apps/manage-indices/how-to/copy-indices/) **Move** - Moving a source index that doesn't exist is ignored without returning an error. - When moving an index, the analytics data keeps its original name, and a new set of analytics data is started for the new name. To access the original analytics in the dashboard, create an index with the original name. - If the destination index has replicas, moving will overwrite the existing index and copy the data to the replica indices. - Related guide: [Move indices](https://www.algolia.com/doc/guides/sending-and-managing-data/manage-indices-and-apps/manage-indices/how-to/move-indices/). This operation is subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
  #
  # Required API Key ACLs:
  # - addObject
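The copy/move semantics documented above map onto a single `operation_index` call whose request body names the operation, the destination, and optionally a scope. A minimal sketch of that body (the index names and `scope` values are hypothetical; the actual call through `client.operation_index` needs credentials, so it is left commented out):

```ruby
# Hypothetical body for copying only settings and rules from "products"
# to "products_staging"; omitting :scope would copy everything.
operation_params = {
  operation: "copy",            # "move" instead renames the index
  destination: "products_staging",
  scope: ["settings", "rules"]
}

# client.operation_index("products", operation_params)  # requires app ID + API key
```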
@@ -2041,7 +2041,7 @@ module Algolia
  @api_client.deserialize(response.body, request_options[:debug_return_type] || "Search::UpdatedAtResponse")
  end
 
- # Adds new attributes to a record, or updates existing ones. - If a record with the specified object ID doesn't exist, a new record is added to the index **if** `createIfNotExists` is true. - If the index doesn't exist yet, this method creates a new index. - You can use any first-level attribute but not nested attributes. If you specify a nested attribute, the engine treats it as a replacement for its first-level ancestor. To update an attribute without pushing the entire record, you can use these built-in operations. These operations can be helpful if you don't have access to your initial data. - Increment: increment a numeric attribute - Decrement: decrement a numeric attribute - Add: append a number or string element to an array attribute - Remove: remove all matching number or string elements from an array attribute made of numbers or strings - AddUnique: add a number or string element to an array attribute made of numbers or strings only if it's not already present - IncrementFrom: increment a numeric integer attribute only if the provided value matches the current value, and otherwise ignore the whole object update. For example, if you pass an IncrementFrom value of 2 for the version attribute, but the current value of the attribute is 1, the engine ignores the update. If the object doesn't exist, the engine only creates it if you pass an IncrementFrom value of 0. - IncrementSet: increment a numeric integer attribute only if the provided value is greater than the current value, and otherwise ignore the whole object update. For example, if you pass an IncrementSet value of 2 for the version attribute, and the current value of the attribute is 1, the engine updates the object. If the object doesn't exist yet, the engine only creates it if you pass an IncrementSet value greater than 0. 
You can specify an operation by providing an object with the attribute to update as the key and its value being an object with the following properties: - _operation: the operation to apply on the attribute - value: the right-hand side argument to the operation, for example, increment or decrement step, value to add or remove.
+ # Adds new attributes to a record, or updates existing ones. - If a record with the specified object ID doesn't exist, a new record is added to the index **if** `createIfNotExists` is true. - If the index doesn't exist yet, this method creates a new index. - You can use any first-level attribute but not nested attributes. If you specify a nested attribute, this operation replaces its first-level ancestor. To update an attribute without pushing the entire record, you can use these built-in operations. These operations can be helpful if you don't have access to your initial data. - Increment: increment a numeric attribute - Decrement: decrement a numeric attribute - Add: append a number or string element to an array attribute - Remove: remove all matching number or string elements from an array attribute made of numbers or strings - AddUnique: add a number or string element to an array attribute made of numbers or strings only if it's not already present - IncrementFrom: increment a numeric integer attribute only if the provided value matches the current value, and otherwise ignore the whole object update. For example, if you pass an IncrementFrom value of 2 for the version attribute, but the current value of the attribute is 1, the engine ignores the update. If the object doesn't exist, the engine only creates it if you pass an IncrementFrom value of 0. - IncrementSet: increment a numeric integer attribute only if the provided value is greater than the current value, and otherwise ignore the whole object update. For example, if you pass an IncrementSet value of 2 for the version attribute, and the current value of the attribute is 1, the engine updates the object. If the object doesn't exist yet, the engine only creates it if you pass an IncrementSet value greater than 0. 
You can specify an operation by providing an object with the attribute to update as the key and its value being an object with the following properties: - _operation: the operation to apply on the attribute - value: the right-hand side argument to the operation, for example, increment or decrement step, value to add or remove. This operation is subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
  #
  # Required API Key ACLs:
  # - addObject
@@ -2093,7 +2093,7 @@ module Algolia
  @api_client.call_api(:POST, path, new_options)
  end
 
- # Adds new attributes to a record, or updates existing ones. - If a record with the specified object ID doesn't exist, a new record is added to the index **if** `createIfNotExists` is true. - If the index doesn't exist yet, this method creates a new index. - You can use any first-level attribute but not nested attributes. If you specify a nested attribute, the engine treats it as a replacement for its first-level ancestor. To update an attribute without pushing the entire record, you can use these built-in operations. These operations can be helpful if you don't have access to your initial data. - Increment: increment a numeric attribute - Decrement: decrement a numeric attribute - Add: append a number or string element to an array attribute - Remove: remove all matching number or string elements from an array attribute made of numbers or strings - AddUnique: add a number or string element to an array attribute made of numbers or strings only if it's not already present - IncrementFrom: increment a numeric integer attribute only if the provided value matches the current value, and otherwise ignore the whole object update. For example, if you pass an IncrementFrom value of 2 for the version attribute, but the current value of the attribute is 1, the engine ignores the update. If the object doesn't exist, the engine only creates it if you pass an IncrementFrom value of 0. - IncrementSet: increment a numeric integer attribute only if the provided value is greater than the current value, and otherwise ignore the whole object update. For example, if you pass an IncrementSet value of 2 for the version attribute, and the current value of the attribute is 1, the engine updates the object. If the object doesn't exist yet, the engine only creates it if you pass an IncrementSet value greater than 0. 
You can specify an operation by providing an object with the attribute to update as the key and its value being an object with the following properties: - _operation: the operation to apply on the attribute - value: the right-hand side argument to the operation, for example, increment or decrement step, value to add or remove.
+ # Adds new attributes to a record, or updates existing ones. - If a record with the specified object ID doesn't exist, a new record is added to the index **if** `createIfNotExists` is true. - If the index doesn't exist yet, this method creates a new index. - You can use any first-level attribute but not nested attributes. If you specify a nested attribute, this operation replaces its first-level ancestor. To update an attribute without pushing the entire record, you can use these built-in operations. These operations can be helpful if you don't have access to your initial data. - Increment: increment a numeric attribute - Decrement: decrement a numeric attribute - Add: append a number or string element to an array attribute - Remove: remove all matching number or string elements from an array attribute made of numbers or strings - AddUnique: add a number or string element to an array attribute made of numbers or strings only if it's not already present - IncrementFrom: increment a numeric integer attribute only if the provided value matches the current value, and otherwise ignore the whole object update. For example, if you pass an IncrementFrom value of 2 for the version attribute, but the current value of the attribute is 1, the engine ignores the update. If the object doesn't exist, the engine only creates it if you pass an IncrementFrom value of 0. - IncrementSet: increment a numeric integer attribute only if the provided value is greater than the current value, and otherwise ignore the whole object update. For example, if you pass an IncrementSet value of 2 for the version attribute, and the current value of the attribute is 1, the engine updates the object. If the object doesn't exist yet, the engine only creates it if you pass an IncrementSet value greater than 0. 
You can specify an operation by providing an object with the attribute to update as the key and its value being an object with the following properties: - _operation: the operation to apply on the attribute - value: the right-hand side argument to the operation, for example, increment or decrement step, value to add or remove. This operation is subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
  #
  # Required API Key ACLs:
  # - addObject
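The built-in operations listed in the doc comment above are expressed per attribute as an object with `_operation` and `value` keys. A hedged sketch of such a payload (attribute names are made up for illustration; the network call is commented out because it needs credentials):

```ruby
# Decrement "stock" by 1, and bump "version" only if it currently equals 1
# (IncrementFrom ignores the whole update when the value doesn't match).
partial_update = {
  "stock"   => { "_operation" => "Decrement", "value" => 1 },
  "version" => { "_operation" => "IncrementFrom", "value" => 1 }
}

# client.partial_update_object("products", "product-1", partial_update)
```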
@@ -2255,7 +2255,7 @@ module Algolia
  @api_client.deserialize(response.body, request_options[:debug_return_type] || "Search::AddApiKeyResponse")
  end
 
- # Adds a record to an index or replace it. - If the record doesn't have an object ID, a new record with an auto-generated object ID is added to your index. - If a record with the specified object ID exists, the existing record is replaced. - If a record with the specified object ID doesn't exist, a new record is added to your index. - If you add a record to an index that doesn't exist yet, a new index is created. To update _some_ attributes of a record, use the [`partial` operation](#tag/Records/operation/partialUpdateObject). To add, update, or replace multiple records, use the [`batch` operation](#tag/Records/operation/batch).
+ # Adds a record to an index or replaces it. - If the record doesn't have an object ID, a new record with an auto-generated object ID is added to your index. - If a record with the specified object ID exists, the existing record is replaced. - If a record with the specified object ID doesn't exist, a new record is added to your index. - If you add a record to an index that doesn't exist yet, a new index is created. To update _some_ attributes of a record, use the [`partial` operation](#tag/Records/operation/partialUpdateObject). To add, update, or replace multiple records, use the [`batch` operation](#tag/Records/operation/batch). This operation is subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
  #
  # Required API Key ACLs:
  # - addObject
@@ -2292,7 +2292,7 @@ module Algolia
  @api_client.call_api(:POST, path, new_options)
  end
 
- # Adds a record to an index or replace it. - If the record doesn't have an object ID, a new record with an auto-generated object ID is added to your index. - If a record with the specified object ID exists, the existing record is replaced. - If a record with the specified object ID doesn't exist, a new record is added to your index. - If you add a record to an index that doesn't exist yet, a new index is created. To update _some_ attributes of a record, use the [`partial` operation](#tag/Records/operation/partialUpdateObject). To add, update, or replace multiple records, use the [`batch` operation](#tag/Records/operation/batch).
+ # Adds a record to an index or replaces it. - If the record doesn't have an object ID, a new record with an auto-generated object ID is added to your index. - If a record with the specified object ID exists, the existing record is replaced. - If a record with the specified object ID doesn't exist, a new record is added to your index. - If you add a record to an index that doesn't exist yet, a new index is created. To update _some_ attributes of a record, use the [`partial` operation](#tag/Records/operation/partialUpdateObject). To add, update, or replace multiple records, use the [`batch` operation](#tag/Records/operation/batch). This operation is subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
  #
  # Required API Key ACLs:
  # - addObject
@@ -2360,13 +2360,13 @@ module Algolia
  # @param rule [Rule] (required)
  # @param forward_to_replicas [Boolean] Whether changes are applied to replica indices.
  # @param request_options: The request options to send along with the query, they will be merged with the transporter base parameters (headers, query params, timeouts, etc.). (optional)
- # @return [UpdatedRuleResponse]
+ # @return [UpdatedAtResponse]
  def save_rule(index_name, object_id, rule, forward_to_replicas = nil, request_options = {})
  response = save_rule_with_http_info(index_name, object_id, rule, forward_to_replicas, request_options)
- @api_client.deserialize(response.body, request_options[:debug_return_type] || "Search::UpdatedRuleResponse")
+ @api_client.deserialize(response.body, request_options[:debug_return_type] || "Search::UpdatedAtResponse")
  end

- # Create or update multiple rules. If a rule with the specified object ID doesn't exist, Algolia creates a new one. Otherwise, existing rules are replaced.
+ # Create or update multiple rules. If a rule with the specified object ID doesn't exist, Algolia creates a new one. Otherwise, existing rules are replaced. This operation is subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
  #
  # Required API Key ACLs:
  # - editSettings
@@ -2413,7 +2413,7 @@ module Algolia
  @api_client.call_api(:POST, path, new_options)
  end

- # Create or update multiple rules. If a rule with the specified object ID doesn't exist, Algolia creates a new one. Otherwise, existing rules are replaced.
+ # Create or update multiple rules. If a rule with the specified object ID doesn't exist, Algolia creates a new one. Otherwise, existing rules are replaced. This operation is subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
  #
  # Required API Key ACLs:
  # - editSettings
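A rule passed to `save_rules` pairs an `objectID` with conditions and a consequence. A sketch of a typical rule hash (the pattern, filter, and names are hypothetical; the call itself is commented out because it needs credentials):

```ruby
# Hypothetical rule: when the query contains "cheap", filter to price < 10.
rule = {
  "objectID" => "cheap-filter",
  "conditions" => [{ "pattern" => "cheap", "anchoring" => "contains" }],
  "consequence" => { "params" => { "filters" => "price < 10" } }
}

# client.save_rules("products", [rule])  # replaces any rule with the same objectID
```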
@@ -2495,7 +2495,7 @@ module Algolia
  @api_client.deserialize(response.body, request_options[:debug_return_type] || "Search::SaveSynonymResponse")
  end

- # If a synonym with the `objectID` doesn't exist, Algolia adds a new one. Otherwise, existing synonyms are replaced.
+ # If a synonym with the `objectID` doesn't exist, Algolia adds a new one. Otherwise, existing synonyms are replaced. This operation is subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
  #
  # Required API Key ACLs:
  # - editSettings
@@ -2542,7 +2542,7 @@ module Algolia
  @api_client.call_api(:POST, path, new_options)
  end

- # If a synonym with the `objectID` doesn't exist, Algolia adds a new one. Otherwise, existing synonyms are replaced.
+ # If a synonym with the `objectID` doesn't exist, Algolia adds a new one. Otherwise, existing synonyms are replaced. This operation is subject to [indexing rate limits](https://support.algolia.com/hc/en-us/articles/4406975251089-Is-there-a-rate-limit-for-indexing-on-Algolia).
  #
  # Required API Key ACLs:
  # - editSettings
@@ -3288,7 +3288,7 @@ module Algolia
  #
  # @return [String]
  #
- def generate_secured_api_key(parent_api_key, restrictions = {})
+ def self.generate_secured_api_key(parent_api_key, restrictions = {})
  restrictions = restrictions.to_hash
  if restrictions.key?(:searchParams)
  # merge searchParams with the root of the restrictions
  # merge searchParams with the root of the restrictions
@@ -3310,13 +3310,24 @@ module Algolia
  Base64.encode64("#{hmac}#{url_encoded_restrictions}").gsub("\n", "")
  end
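The secured-key helper is pure computation, visible in the diff above: an HMAC-SHA256 of the URL-encoded restrictions, prepended to those restrictions, then Base64-encoded. A self-contained sketch of that construction (not the gem's implementation; the real helper also merges `:searchParams` into the restrictions before encoding):

```ruby
require "openssl"
require "base64"
require "cgi"

# Sketch of the secured-key construction: HMAC-SHA256 of the URL-encoded
# restrictions keyed by the parent API key, prepended and Base64-encoded.
def sketch_secured_api_key(parent_api_key, restrictions)
  encoded = restrictions.map { |k, v| "#{CGI.escape(k.to_s)}=#{CGI.escape(v.to_s)}" }.join("&")
  hmac = OpenSSL::HMAC.hexdigest("sha256", parent_api_key, encoded)
  Base64.encode64("#{hmac}#{encoded}").gsub("\n", "")
end

key = sketch_secured_api_key("parent-key", { validUntil: 1_700_000_000 })
```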
 
+ # Helper: Generates a secured API key based on the given `parent_api_key` and given `restrictions`.
+ #
+ # @param parent_api_key [String] Parent API key used the generate the secured key
+ # @param restrictions [SecuredApiKeyRestrictions] Restrictions to apply on the secured key
+ #
+ # @return [String]
+ #
+ def generate_secured_api_key(parent_api_key, restrictions = {})
+ self.class.generate_secured_api_key(parent_api_key, restrictions)
+ end
+
  # Helper: Retrieves the remaining validity of the previous generated `secured_api_key`, the `validUntil` parameter must have been provided.
  #
  # @param secured_api_key [String]
  #
  # @return [Integer]
  #
- def get_secured_api_key_remaining_validity(secured_api_key)
+ def self.get_secured_api_key_remaining_validity(secured_api_key)
  now = Time.now.to_i
  decoded_key = Base64.decode64(secured_api_key)
  regex = "validUntil=(\\d+)"
@@ -3331,22 +3342,33 @@ module Algolia
  valid_until - now
  end
 
+ # Helper: Retrieves the remaining validity of the previous generated `secured_api_key`, the `validUntil` parameter must have been provided.
+ #
+ # @param secured_api_key [String]
+ #
+ # @return [Integer]
+ #
+ def get_secured_api_key_remaining_validity(secured_api_key)
+ self.class.get_secured_api_key_remaining_validity(secured_api_key)
+ end
+
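`get_secured_api_key_remaining_validity` is likewise pure, as the surrounding diff shows: decode the key, find the embedded `validUntil` timestamp, subtract the current time. A standalone sketch, exercised with a fabricated key:

```ruby
require "base64"

# Sketch: extract the embedded validUntil timestamp and compare it to now.
def sketch_remaining_validity(secured_api_key)
  decoded = Base64.decode64(secured_api_key)
  match = decoded.match(/validUntil=(\d+)/)
  raise ArgumentError, "secured API key has no validUntil restriction" unless match
  match[1].to_i - Time.now.to_i
end

# Fabricated key valid for one hour, just to exercise the helper.
fake_key = Base64.encode64("deadbeefvalidUntil=#{Time.now.to_i + 3600}").gsub("\n", "")
remaining = sketch_remaining_validity(fake_key)
```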
  # Helper: Saves the given array of objects in the given index. The `chunked_batch` helper is used under the hood, which creates a `batch` requests with at most 1000 objects in it.
  #
  # @param index_name [String]: The `index_name` to save `objects` in.
  # @param objects [Array]: The array of `objects` to store in the given Algolia `indexName`.
  # @param wait_for_tasks [Boolean]: Whether or not we should wait until every `batch` tasks has been processed, this operation may slow the total execution time of this method but is more reliable.
+ # @param batch_size [int] The size of the chunk of `objects`. The number of `batch` calls will be equal to `length(objects) / batchSize`. Defaults to 1000.
  # @param request_options: The request options to send along with the query, they will be merged with the transporter base parameters (headers, query params, timeouts, etc.). (optional)
  #
  # @return [BatchResponse]
  #
- def save_objects(index_name, objects, wait_for_tasks = false, request_options = {})
+ def save_objects(index_name, objects, wait_for_tasks = false, batch_size = 1000, request_options = {})
  chunked_batch(
  index_name,
  objects,
  Search::Action::ADD_OBJECT,
  wait_for_tasks,
- 1000,
+ batch_size,
  request_options
  )
  end
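The new `batch_size` parameter feeds straight into `chunked_batch`, so the number of `batch` calls is the object count divided by the chunk size, rounded up. A quick illustration of that chunking arithmetic (plain Ruby, no client involved):

```ruby
# 2500 records with the default batch_size of 1000 yield 3 batch calls:
# two full chunks of 1000 and one final chunk of 500.
objects = Array.new(2500) { |i| { "objectID" => i.to_s } }
batch_size = 1000
chunks = objects.each_slice(batch_size).to_a
```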
@@ -3356,17 +3378,18 @@ module Algolia
  # @param index_name [String]: The `index_name` to delete `object_ids` from.
  # @param object_ids [Array]: The object_ids to delete.
  # @param wait_for_tasks [Boolean]: Whether or not we should wait until every `batch` tasks has been processed, this operation may slow the total execution time of this method but is more reliable.
+ # @param batch_size [int] The size of the chunk of `objects`. The number of `batch` calls will be equal to `length(objects) / batchSize`. Defaults to 1000.
  # @param request_options: The request options to send along with the query, they will be merged with the transporter base parameters (headers, query params, timeouts, etc.). (optional)
  #
  # @return [BatchResponse]
  #
- def delete_objects(index_name, object_ids, wait_for_tasks = false, request_options = {})
+ def delete_objects(index_name, object_ids, wait_for_tasks = false, batch_size = 1000, request_options = {})
  chunked_batch(
  index_name,
  object_ids.map { |id| {"objectID" => id} },
  Search::Action::DELETE_OBJECT,
  wait_for_tasks,
- 1000,
+ batch_size,
  request_options
  )
  end
@@ -3377,17 +3400,25 @@ module Algolia
  # @param objects [Array]: The objects to partially update.
  # @param create_if_not_exists [Boolean]: To be provided if non-existing objects are passed, otherwise, the call will fail.
  # @param wait_for_tasks [Boolean] Whether or not we should wait until every `batch` tasks has been processed, this operation may slow the total execution time of this method but is more reliable.
+ # @param batch_size [int] The size of the chunk of `objects`. The number of `batch` calls will be equal to `length(objects) / batchSize`. Defaults to 1000.
  # @param request_options: The request options to send along with the query, they will be merged with the transporter base parameters (headers, query params, timeouts, etc.). (optional)
  #
  # @return [BatchResponse]
  #
- def partial_update_objects(index_name, objects, create_if_not_exists, wait_for_tasks = false, request_options = {})
+ def partial_update_objects(
+ index_name,
+ objects,
+ create_if_not_exists,
+ wait_for_tasks = false,
+ batch_size = 1000,
+ request_options = {}
+ )
  chunked_batch(
  index_name,
  objects,
  create_if_not_exists ? Search::Action::PARTIAL_UPDATE_OBJECT : Search::Action::PARTIAL_UPDATE_OBJECT_NO_CREATE,
  wait_for_tasks,
- 1000,
+ batch_size,
  request_options
  )
  end
@@ -3502,13 +3533,16 @@ module Algolia
  def index_exists?(index_name)
  begin
  get_settings(index_name)
- rescue AlgoliaHttpError => e
- return false if e.code == 404
+ rescue Exception => e
+ if e.is_a?(AlgoliaHttpError)
+ return false if e.code == 404

- raise e
+ raise e
+ end
  end

  true
  end
+
  end
  end
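The `index_exists?` pattern in the hunk above — treat a 404 from the settings lookup as "index absent", re-raise other HTTP errors — can be sketched with a stand-in error class (the names here are hypothetical, not the gem's):

```ruby
# Stand-in for AlgoliaHttpError, carrying only the HTTP status code.
class FakeHttpError < StandardError
  attr_reader :code

  def initialize(code)
    @code = code
    super("HTTP #{code}")
  end
end

# Runs the given settings lookup: a 404 means the index is absent,
# any other FakeHttpError propagates to the caller.
def sketch_index_exists?
  begin
    yield
  rescue FakeHttpError => e
    return false if e.code == 404
    raise e
  end
  true
end

exists = sketch_index_exists? { { "hitsPerPage" => 20 } }   # lookup succeeds
absent = sketch_index_exists? { raise FakeHttpError, 404 }  # index missing
```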
@@ -25,9 +25,9 @@ module Algolia
  @app_id = app_id
  @api_key = api_key
  @client_side_validation = opts[:client_side_validation].nil? ? true : opts[:client_side_validation]
- @write_timeout = opts[:write_timeout] || 30_000
- @read_timeout = opts[:read_timeout] || 5_000
  @connect_timeout = opts[:connect_timeout] || 2_000
+ @read_timeout = opts[:read_timeout] || 5_000
+ @write_timeout = opts[:write_timeout] || 30_000
  @compression_type = opts[:compression_type] || "none"
  @requester = opts[:requester]
 
@@ -14,22 +14,10 @@ module Algolia
  GA4_BIGQUERY_EXPORT = "ga4BigqueryExport".freeze
  JSON = "json".freeze
  SHOPIFY = "shopify".freeze
- SFCC = "sfcc".freeze
  PUSH = "push".freeze

  def self.all_vars
- @all_vars ||= [
- BIGCOMMERCE,
- BIGQUERY,
- COMMERCETOOLS,
- CSV,
- DOCKER,
- GA4_BIGQUERY_EXPORT,
- JSON,
- SHOPIFY,
- SFCC,
- PUSH
- ].freeze
+ @all_vars ||= [BIGCOMMERCE, BIGQUERY, COMMERCETOOLS, CSV, DOCKER, GA4_BIGQUERY_EXPORT, JSON, SHOPIFY, PUSH].freeze
  end
 
  # Builds the enum from string
@@ -5,11 +5,11 @@ require "time"
 
  module Algolia
  module Ingestion
- class SourceWatchResponse
+ class WatchResponse
  # Universally unique identifier (UUID) of a task run.
  attr_accessor :run_id

- # depending on the source type, the validation returns sampling data of your source (JSON, CSV, BigQuery).
+ # when used with discovering or validating sources, the sampled data of your source is returned.
  attr_accessor :data
 
  # in case of error, observability events will be added to the response, if any.
@@ -56,7 +56,7 @@ module Algolia
  if (!attributes.is_a?(Hash))
  raise(
  ArgumentError,
- "The input argument (attributes) must be a hash in `Algolia::SourceWatchResponse` initialize method"
+ "The input argument (attributes) must be a hash in `Algolia::WatchResponse` initialize method"
  )
  end
 
@@ -65,7 +65,7 @@ module Algolia
  if (!self.class.attribute_map.key?(k.to_sym))
  raise(
  ArgumentError,
- "`#{k}` is not a valid attribute in `Algolia::SourceWatchResponse`. Please check the name to make sure it's valid. List of attributes: " +
+ "`#{k}` is not a valid attribute in `Algolia::WatchResponse`. Please check the name to make sure it's valid. List of attributes: " +
  self.class.attribute_map.keys.inspect
  )
  end
@@ -9,9 +9,10 @@ module Algolia
  IGNORE_PLURALS = "ignorePlurals".freeze
  SINGLE_WORD_SYNONYM = "singleWordSynonym".freeze
  MULTI_WORDS_SYNONYM = "multiWordsSynonym".freeze
+ IGNORE_CONJUGATIONS = "ignoreConjugations".freeze

  def self.all_vars
- @all_vars ||= [IGNORE_PLURALS, SINGLE_WORD_SYNONYM, MULTI_WORDS_SYNONYM].freeze
+ @all_vars ||= [IGNORE_PLURALS, SINGLE_WORD_SYNONYM, MULTI_WORDS_SYNONYM, IGNORE_CONJUGATIONS].freeze
  end
 
  # Builds the enum from string
@@ -9,9 +9,10 @@ module Algolia
  IGNORE_PLURALS = "ignorePlurals".freeze
  SINGLE_WORD_SYNONYM = "singleWordSynonym".freeze
  MULTI_WORDS_SYNONYM = "multiWordsSynonym".freeze
+ IGNORE_CONJUGATIONS = "ignoreConjugations".freeze

  def self.all_vars
- @all_vars ||= [IGNORE_PLURALS, SINGLE_WORD_SYNONYM, MULTI_WORDS_SYNONYM].freeze
+ @all_vars ||= [IGNORE_PLURALS, SINGLE_WORD_SYNONYM, MULTI_WORDS_SYNONYM, IGNORE_CONJUGATIONS].freeze
  end
 
  # Builds the enum from string
@@ -181,8 +181,6 @@ module Algolia
 
  if attributes.key?(:nb_api_calls)
  self.nb_api_calls = attributes[:nb_api_calls]
- else
- self.nb_api_calls = nil
  end
 
  if attributes.key?(:processing_time_ms)
@@ -94,6 +94,8 @@ module Algolia
 
  if attributes.key?(:consequence)
  self.consequence = attributes[:consequence]
+ else
+ self.consequence = nil
  end
 
  if attributes.key?(:description)
@@ -1,5 +1,5 @@
  # Code generated by OpenAPI Generator (https://openapi-generator.tech), manual changes will be lost - read more on https://github.com/algolia/api-clients-automation. DO NOT EDIT.

  module Algolia
- VERSION = "3.8.2".freeze
+ VERSION = "3.10.1".freeze
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: algolia
  version: !ruby/object:Gem::Version
- version: 3.8.2
+ version: 3.10.1
  platform: ruby
  authors:
  - https://alg.li/support
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2024-11-19 00:00:00.000000000 Z
+ date: 2024-12-12 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: faraday
@@ -90,14 +90,14 @@ dependencies:
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: '0'
+ version: 2.4.10
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: '0'
+ version: 2.4.10
  - !ruby/object:Gem::Dependency
  name: rake
  requirement: !ruby/object:Gem::Requirement
@@ -337,7 +337,6 @@ files:
  - lib/algolia/models/ingestion/source_update_input.rb
  - lib/algolia/models/ingestion/source_update_response.rb
  - lib/algolia/models/ingestion/source_update_shopify.rb
- - lib/algolia/models/ingestion/source_watch_response.rb
  - lib/algolia/models/ingestion/streaming_input.rb
  - lib/algolia/models/ingestion/streaming_trigger.rb
  - lib/algolia/models/ingestion/streaming_trigger_type.rb
@@ -367,6 +366,7 @@ files:
367
366
  - lib/algolia/models/ingestion/trigger.rb
368
367
  - lib/algolia/models/ingestion/trigger_type.rb
369
368
  - lib/algolia/models/ingestion/trigger_update_input.rb
369
+ - lib/algolia/models/ingestion/watch_response.rb
370
370
  - lib/algolia/models/ingestion/window.rb
371
371
  - lib/algolia/models/insights/add_to_cart_event.rb
372
372
  - lib/algolia/models/insights/added_to_cart_object_ids.rb
@@ -677,7 +677,6 @@ files:
677
677
  - lib/algolia/models/search/update_api_key_response.rb
678
678
  - lib/algolia/models/search/updated_at_response.rb
679
679
  - lib/algolia/models/search/updated_at_with_object_id_response.rb
680
- - lib/algolia/models/search/updated_rule_response.rb
681
680
  - lib/algolia/models/search/user_highlight_result.rb
682
681
  - lib/algolia/models/search/user_hit.rb
683
682
  - lib/algolia/models/search/user_id.rb
@@ -716,7 +715,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
716
715
  - !ruby/object:Gem::Version
717
716
  version: '0'
718
717
  requirements: []
719
- rubygems_version: 3.4.10
718
+ rubygems_version: 3.5.22
720
719
  signing_key:
721
720
  specification_version: 4
722
721
  summary: A simple Ruby client for the algolia.com REST API
@@ -1,238 +0,0 @@
-# Code generated by OpenAPI Generator (https://openapi-generator.tech), manual changes will be lost - read more on https://github.com/algolia/api-clients-automation. DO NOT EDIT.
-
-require "date"
-require "time"
-
-module Algolia
-  module Search
-    class UpdatedRuleResponse
-      # Unique identifier of a rule object.
-      attr_accessor :object_id
-
-      # Date and time when the object was updated, in RFC 3339 format.
-      attr_accessor :updated_at
-
-      # Unique identifier of a task. A successful API response means that a task was added to a queue. It might not run immediately. You can check the task's progress with the [`task` operation](#tag/Indices/operation/getTask) and this `taskID`.
-      attr_accessor :task_id
-
-      # Attribute mapping from ruby-style variable name to JSON key.
-      def self.attribute_map
-        {
-          :object_id => :objectID,
-          :updated_at => :updatedAt,
-          :task_id => :taskID
-        }
-      end
-
-      # Returns all the JSON keys this model knows about
-      def self.acceptable_attributes
-        attribute_map.values
-      end
-
-      # Attribute type mapping.
-      def self.types_mapping
-        {
-          :object_id => :"String",
-          :updated_at => :"String",
-          :task_id => :"Integer"
-        }
-      end
-
-      # List of attributes with nullable: true
-      def self.openapi_nullable
-        Set.new(
-          []
-        )
-      end
-
-      # Initializes the object
-      # @param [Hash] attributes Model attributes in the form of hash
-      def initialize(attributes = {})
-        if (!attributes.is_a?(Hash))
-          raise(
-            ArgumentError,
-            "The input argument (attributes) must be a hash in `Algolia::UpdatedRuleResponse` initialize method"
-          )
-        end
-
-        # check to see if the attribute exists and convert string to symbol for hash key
-        attributes = attributes.each_with_object({}) { |(k, v), h|
-          if (!self.class.attribute_map.key?(k.to_sym))
-            raise(
-              ArgumentError,
-              "`#{k}` is not a valid attribute in `Algolia::UpdatedRuleResponse`. Please check the name to make sure it's valid. List of attributes: " +
-                self.class.attribute_map.keys.inspect
-            )
-          end
-
-          h[k.to_sym] = v
-        }
-
-        if attributes.key?(:object_id)
-          self.object_id = attributes[:object_id]
-        else
-          self.object_id = nil
-        end
-
-        if attributes.key?(:updated_at)
-          self.updated_at = attributes[:updated_at]
-        else
-          self.updated_at = nil
-        end
-
-        if attributes.key?(:task_id)
-          self.task_id = attributes[:task_id]
-        else
-          self.task_id = nil
-        end
-      end
-
-      # Checks equality by comparing each attribute.
-      # @param [Object] Object to be compared
-      def ==(other)
-        return true if self.equal?(other)
-        self.class == other.class &&
-          object_id == other.object_id &&
-          updated_at == other.updated_at &&
-          task_id == other.task_id
-      end
-
-      # @see the `==` method
-      # @param [Object] Object to be compared
-      def eql?(other)
-        self == other
-      end
-
-      # Calculates hash code according to all attributes.
-      # @return [Integer] Hash code
-      def hash
-        [object_id, updated_at, task_id].hash
-      end
-
-      # Builds the object from hash
-      # @param [Hash] attributes Model attributes in the form of hash
-      # @return [Object] Returns the model itself
-      def self.build_from_hash(attributes)
-        return nil unless attributes.is_a?(Hash)
-        attributes = attributes.transform_keys(&:to_sym)
-        transformed_hash = {}
-        types_mapping.each_pair do |key, type|
-          if attributes.key?(attribute_map[key]) && attributes[attribute_map[key]].nil?
-            transformed_hash[key.to_sym] = nil
-          elsif type =~ /\AArray<(.*)>/i
-            # check to ensure the input is an array given that the attribute
-            # is documented as an array but the input is not
-            if attributes[attribute_map[key]].is_a?(Array)
-              transformed_hash[key.to_sym] = attributes[attribute_map[key]].map { |v|
-                _deserialize(::Regexp.last_match(1), v)
-              }
-            end
-          elsif !attributes[attribute_map[key]].nil?
-            transformed_hash[key.to_sym] = _deserialize(type, attributes[attribute_map[key]])
-          end
-        end
-
-        new(transformed_hash)
-      end
-
-      # Deserializes the data based on type
-      # @param string type Data type
-      # @param string value Value to be deserialized
-      # @return [Object] Deserialized data
-      def self._deserialize(type, value)
-        case type.to_sym
-        when :Time
-          Time.parse(value)
-        when :Date
-          Date.parse(value)
-        when :String
-          value.to_s
-        when :Integer
-          value.to_i
-        when :Float
-          value.to_f
-        when :Boolean
-          if value.to_s =~ /\A(true|t|yes|y|1)\z/i
-            true
-          else
-            false
-          end
-
-        when :Object
-          # generic object (usually a Hash), return directly
-          value
-        when /\AArray<(?<inner_type>.+)>\z/
-          inner_type = Regexp.last_match[:inner_type]
-          value.map { |v| _deserialize(inner_type, v) }
-        when /\AHash<(?<k_type>.+?), (?<v_type>.+)>\z/
-          k_type = Regexp.last_match[:k_type]
-          v_type = Regexp.last_match[:v_type]
-          {}.tap do |hash|
-            value.each do |k, v|
-              hash[_deserialize(k_type, k)] = _deserialize(v_type, v)
-            end
-          end
-        # model
-        else
-          # models (e.g. Pet) or oneOf
-          klass = Algolia::Search.const_get(type)
-          klass.respond_to?(:openapi_any_of) || klass.respond_to?(:openapi_one_of) ? klass.build(value) : klass
-            .build_from_hash(value)
-        end
-      end
-
-      # Returns the string representation of the object
-      # @return [String] String presentation of the object
-      def to_s
-        to_hash.to_s
-      end
-
-      # to_body is an alias to to_hash (backward compatibility)
-      # @return [Hash] Returns the object in the form of hash
-      def to_body
-        to_hash
-      end
-
-      def to_json(*_args)
-        to_hash.to_json
-      end
-
-      # Returns the object in the form of hash
-      # @return [Hash] Returns the object in the form of hash
-      def to_hash
-        hash = {}
-        self.class.attribute_map.each_pair do |attr, param|
-          value = send(attr)
-          if value.nil?
-            is_nullable = self.class.openapi_nullable.include?(attr)
-            next if !is_nullable || (is_nullable && !instance_variable_defined?(:"@#{attr}"))
-          end
-
-          hash[param] = _to_hash(value)
-        end
-
-        hash
-      end
-
-      # Outputs non-array value in the form of hash
-      # For object, use to_hash. Otherwise, just return the value
-      # @param [Object] value Any valid value
-      # @return [Hash] Returns the value in the form of hash
-      def _to_hash(value)
-        if value.is_a?(Array)
-          value.compact.map { |v| _to_hash(v) }
-        elsif value.is_a?(Hash)
-          {}.tap do |hash|
-            value.each { |k, v| hash[k] = _to_hash(v) }
-          end
-        elsif value.respond_to?(:to_hash)
-          value.to_hash
-        else
-          value
-        end
-      end
-
-    end
-
-  end
-end
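The `UpdatedRuleResponse` model deleted above follows the pattern shared by all of the gem's OpenAPI-generated models: an `attribute_map` translating snake_case Ruby attribute names to camelCase JSON keys, and a `build_from_hash` that applies it during deserialization. A minimal standalone sketch of that idea (hypothetical class and attribute names, not part of the gem; `object_id_attr` is used to avoid clashing with Ruby's built-in `Object#object_id`):

```ruby
# Minimal sketch of the generated-model attribute-mapping pattern.
class MiniRuleResponse
  attr_accessor :object_id_attr, :updated_at, :task_id

  # Ruby-style attribute name => JSON key, as in the generated models.
  def self.attribute_map
    {
      object_id_attr: :objectID,
      updated_at: :updatedAt,
      task_id: :taskID
    }
  end

  # Build an instance from an API response hash, translating JSON keys
  # to Ruby attributes via attribute_map.
  def self.build_from_hash(attributes)
    obj = new
    attribute_map.each_pair do |ruby_key, json_key|
      obj.public_send(:"#{ruby_key}=", attributes[json_key]) if attributes.key?(json_key)
    end
    obj
  end
end

resp = MiniRuleResponse.build_from_hash(
  { objectID: "rule-1", updatedAt: "2024-12-12T00:00:00Z", taskID: 42 }
)
# resp.task_id is 42, resp.object_id_attr is "rule-1"
```

The real generated models layer type coercion (`_deserialize`) and strict attribute validation on top of this core mapping step.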