vectra-client 0.3.4 → 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: bcc42d23052b076d9efcb085b9490b3d21b0e1b101a8edb3d9fca500d72ac2cd
- data.tar.gz: 5abe33ef210d55e3cda6ffa8f80ca1ecadcea8632c46eb9ce78c824f35684c68
+ metadata.gz: f9965925f32b1b497e306ba25776f06edaba86b3c8abf8b750c8a449fe68ba5a
+ data.tar.gz: 1622d500398e1fb95b146d2792c0f29abdde93888e7b74af8fd1dc674c9bee58
  SHA512:
- metadata.gz: 40aaec0556a2de07028db2e75ad2ebf463f51728336fc508676a567837a45d1be00174ed679601a1aa5f976a05605c10c86817dd99616bc38b63350fa1ed9e63
- data.tar.gz: 1a3bfad7f776a429be42f591af896c3c9e9ba55a45eb58dffda8b9fa50190d5248552bfab87038ab53e04183d2cbe5fcade506bf0cc99580b1435c50a30b64b9
+ metadata.gz: 1911cce768648f9c48e9c94e13a7d0c51f39e81eb381a0defcbc581eb27b4cfacb56d41e1cfb776f6a2ef4522e344c32d842a550c79d5a7584bb1a84a637e605
+ data.tar.gz: bfb7cc7174a739591a8061436f47963dab2c33809db3437558f5addf9cf4dff2b15b93bc72934f1dd04821f2f93884b07f0c9f5886b4d618d9e3eff15ce907a6
data/CHANGELOG.md CHANGED
@@ -1,13 +1,63 @@
  # Changelog
 
+ ## [v1.0.0](https://github.com/stokry/vectra/tree/v1.0.0) (2026-01-12)
+
+ ### Added
+ - Hybrid search functionality for Qdrant and Weaviate providers
+ - Enhanced provider capabilities and error handling
+ - Support for advanced filtering and namespace operations
+ - Improved vector search performance
+
+ ### Changed
+ - Major API refinements and provider implementations
+ - Enhanced test coverage and documentation
+
+ [Full Changelog](https://github.com/stokry/vectra/compare/v0.4.0...v1.0.0)
+
+ ## [v0.4.0](https://github.com/stokry/vectra/tree/v0.4.0) (2026-01-12)
+
+ [Full Changelog](https://github.com/stokry/vectra/compare/v0.3.4...v0.4.0)
+
+ ### Added
+ - **Hybrid Search** - Combine semantic (vector) and keyword (text) search across all providers
+   - Full support for Qdrant (prefetch + rescore API)
+   - Full support for Weaviate (hybrid GraphQL with BM25)
+   - Full support for pgvector (vector similarity + PostgreSQL full-text search)
+   - Partial support for Pinecone (requires sparse vectors for true hybrid search)
+   - Alpha parameter (0.0 = pure keyword, 1.0 = pure semantic) for fine-tuning balance
+ - **Batch Progress Callbacks** - Real-time visibility into batch operations
+   - `on_progress` callback with detailed statistics (processed, total, percentage, chunk info)
+   - Thread-safe progress tracking with `Concurrent::AtomicFixnum`
+   - Support for `upsert_async`, `delete_async`, and `fetch_async` methods
+ - **Vector Normalization Helper** - Improve cosine similarity results
+   - `Vector#normalize!` instance method (L2 and L1 normalization)
+   - `Vector.normalize` class method for non-mutating normalization
+   - Automatic handling of zero vectors
+ - **Dimension Validation** - Automatic validation of vector dimension consistency
+   - Validates all vectors in a batch have the same dimension
+   - Detailed error messages with index and expected/actual dimensions
+   - Works with both Vector objects and hash vectors
+ - **Better Error Messages** - Enhanced error context and debugging
+   - Includes error details, field-specific errors, and context
+   - Improved error message format: "Main message (details) [Fields: field1, field2]"
+ - **Connection Health Check** - Simple health monitoring methods
+   - `healthy?` method for quick boolean health check
+   - `ping` method with latency measurement and detailed status
+   - Automatic error logging when logger is configured
+
+ ### Changed
+ - Improved error handling with more context in error messages
+ - Enhanced batch operations with progress tracking capabilities
+
+ ### Documentation
+ - Added comprehensive hybrid search examples and provider support matrix
+ - Updated getting started guide with normalization, health checks, and dimension validation
+ - Added real-world examples demonstrating new features
+
  ## [v0.3.4](https://github.com/stokry/vectra/tree/v0.3.4) (2026-01-12)
 
  [Full Changelog](https://github.com/stokry/vectra/compare/v0.3.3...v0.3.4)
 
- ### Fixed
- - Fixed Weaviate provider DELETE request WebMock stub to include query parameters
- - Fixed RuboCop offenses: empty string interpolation, redundant else, ambiguous block associations, and style issues
-
  ## [v0.3.3](https://github.com/stokry/vectra/tree/v0.3.3) (2026-01-09)
 
  [Full Changelog](https://github.com/stokry/vectra/compare/v0.3.2...v0.3.3)
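The normalization helper added in v0.4.0 (L2 and L1, with automatic zero-vector handling) can be sketched in plain Ruby. This is an illustrative standalone version, not the gem's actual implementation:

```ruby
# Standalone sketch of L2/L1 vector normalization (illustrative only,
# not Vectra's actual code).
def normalize(values, type: :l2)
  norm =
    case type
    when :l2 then Math.sqrt(values.sum { |v| v * v }) # Euclidean length
    when :l1 then values.sum(&:abs)                   # sum of absolute values
    else raise ArgumentError, "unknown type: #{type.inspect}"
    end
  return values.dup if norm.zero? # zero vectors are returned unchanged

  values.map { |v| v / norm }
end

normalize([3.0, 4.0])                   # => [0.6, 0.8] (unit length)
normalize([1.0, -1.0, 2.0], type: :l1)  # abs values now sum to 1.0
```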
data/README.md CHANGED
@@ -17,6 +17,7 @@
  | **Qdrant** | Open Source | ✅ Supported |
  | **Weaviate** | Open Source | ✅ Supported |
  | **pgvector** | PostgreSQL | ✅ Supported |
+ | **Memory** | In-Memory | ✅ Testing only |
 
  ## Installation
 
@@ -48,12 +49,47 @@ client.upsert(
    ]
  )
 
- # Search
+ # Search (classic API)
  results = client.query(vector: [0.1, 0.2, 0.3], top_k: 5)
  results.each { |match| puts "#{match.id}: #{match.score}" }
 
+ # Search (chainable Query Builder)
+ results = client
+   .query('docs')
+   .vector([0.1, 0.2, 0.3])
+   .top_k(5)
+   .with_metadata
+   .execute
+
+ results.each do |match|
+   puts "#{match.id}: #{match.score}"
+ end
+
+ # Normalize embeddings (for better cosine similarity)
+ embedding = openai_response['data'][0]['embedding']
+ normalized = Vectra::Vector.normalize(embedding)
+ client.upsert(vectors: [{ id: 'doc-1', values: normalized }])
+
  # Delete
  client.delete(ids: ['doc-1', 'doc-2'])
+
+ # Health check
+ if client.healthy?
+   puts "Connection is healthy"
+ end
+
+ # Ping with latency
+ status = client.ping
+ puts "Provider: #{status[:provider]}, Latency: #{status[:latency_ms]}ms"
+
+ # Hybrid search (semantic + keyword)
+ # Supported by: Qdrant, Weaviate, Pinecone, pgvector
+ results = client.hybrid_search(
+   index: 'docs',
+   vector: embedding,
+   text: 'ruby programming',
+   alpha: 0.7 # 70% semantic, 30% keyword
+ )
  ```
 
  ## Provider Examples
@@ -69,10 +105,16 @@ client = Vectra.qdrant(host: 'http://localhost:6333')
  client = Vectra.qdrant(host: 'https://your-cluster.qdrant.io', api_key: ENV['QDRANT_API_KEY'])
 
  # Weaviate
- client = Vectra.weaviate(host: 'http://localhost:8080', api_key: ENV['WEAVIATE_API_KEY'])
+ client = Vectra.weaviate(
+   api_key: ENV['WEAVIATE_API_KEY'],
+   host: 'https://your-weaviate-instance'
+ )
 
  # pgvector (PostgreSQL)
  client = Vectra.pgvector(connection_url: 'postgres://user:pass@localhost/mydb')
+
+ # Memory (in-memory, testing only)
+ client = Vectra.memory
  ```
 
  ## Features
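The README's `alpha` parameter weights semantic scores against keyword scores. A minimal sketch of that kind of score fusion, for intuition only — each provider implements fusion natively (e.g. Qdrant's prefetch + rescore), so this is not Vectra's code:

```ruby
# Illustrative alpha-weighted score fusion for hybrid search.
# semantic/keyword are { id => score } hashes with scores in [0, 1].
def fuse_scores(semantic, keyword, alpha:)
  ids = semantic.keys | keyword.keys
  ids.map do |id|
    score = alpha * semantic.fetch(id, 0.0) + (1 - alpha) * keyword.fetch(id, 0.0)
    [id, score]
  end.sort_by { |_, s| -s } # best combined score first
end

semantic = { 'doc-1' => 0.9, 'doc-2' => 0.4 }
keyword  = { 'doc-2' => 1.0, 'doc-3' => 0.8 }
fuse_scores(semantic, keyword, alpha: 0.7)
# alpha = 1.0 reduces to pure semantic ranking, alpha = 0.0 to pure keyword
```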
data/docs/api/overview.md CHANGED
@@ -111,6 +111,98 @@ Get index statistics.
  }
  ```
 
+ ### `hybrid_search(index:, vector:, text:, alpha:, top_k:)`
+
+ Combine semantic (vector) and keyword (text) search.
+
+ **Parameters:**
+ - `index` (String) - Index/collection name
+ - `vector` (Array) - Query vector for semantic search
+ - `text` (String) - Text query for keyword search
+ - `alpha` (Float) - Balance between semantic and keyword (0.0 = pure keyword, 1.0 = pure semantic)
+ - `top_k` (Integer) - Number of results (default: 10)
+ - `namespace` (String, optional) - Namespace
+ - `filter` (Hash, optional) - Metadata filter
+ - `include_values` (Boolean) - Include vector values (default: false)
+ - `include_metadata` (Boolean) - Include metadata (default: true)
+
+ **Example:**
+ ```ruby
+ results = client.hybrid_search(
+   index: 'docs',
+   vector: embedding,
+   text: 'ruby programming',
+   alpha: 0.7 # 70% semantic, 30% keyword
+ )
+ ```
+
+ **Provider Support:** Qdrant ✅, Weaviate ✅, pgvector ✅, Pinecone ⚠️
+
+ ### `healthy?`
+
+ Quick health check - returns true if the provider connection is healthy.
+
+ **Returns:** Boolean
+
+ **Example:**
+ ```ruby
+ if client.healthy?
+   client.upsert(...)
+ end
+ ```
+
+ ### `ping`
+
+ Ping the provider and get connection health status with latency.
+
+ **Returns:**
+ ```ruby
+ {
+   healthy: true,
+   provider: :pinecone,
+   latency_ms: 45.23
+ }
+ ```
+
+ **Example:**
+ ```ruby
+ status = client.ping
+ puts "Latency: #{status[:latency_ms]}ms"
+ ```
+
+ ### `Vector.normalize(vector, type: :l2)`
+
+ Normalize a vector array (non-mutating).
+
+ **Parameters:**
+ - `vector` (Array) - Vector values to normalize
+ - `type` (Symbol) - Normalization type: `:l2` (default) or `:l1`
+
+ **Returns:** Array of normalized values
+
+ **Example:**
+ ```ruby
+ embedding = openai_response['data'][0]['embedding']
+ normalized = Vectra::Vector.normalize(embedding)
+ client.upsert(vectors: [{ id: 'doc-1', values: normalized }])
+ ```
+
+ ### `vector.normalize!(type: :l2)`
+
+ Normalize a vector in place (mutates the vector).
+
+ **Parameters:**
+ - `type` (Symbol) - Normalization type: `:l2` (default) or `:l1`
+
+ **Returns:** Self (for method chaining)
+
+ **Example:**
+ ```ruby
+ vector = Vectra::Vector.new(id: 'doc-1', values: embedding)
+ vector.normalize! # L2 normalization
+ client.upsert(vectors: [vector])
+ ```
+
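The `ping` return shape above pairs a health probe with a latency reading. One way such a measurement is commonly taken is with Ruby's monotonic clock; this is an illustrative sketch (the gem's internals may differ, and `timed_probe` is a hypothetical helper):

```ruby
# Illustrative latency measurement around a health probe using the
# monotonic clock (wall-clock time can jump backwards; monotonic cannot).
def timed_probe
  started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  healthy =
    begin
      yield # e.g. a cheap provider round trip
      true
    rescue StandardError
      false
    end
  latency_ms = ((Process.clock_gettime(Process::CLOCK_MONOTONIC) - started) * 1000).round(2)
  { healthy: healthy, latency_ms: latency_ms }
end

status = timed_probe { sleep 0.01 }
# status[:healthy] is true and status[:latency_ms] is roughly 10ms
```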
  ## Error Handling
 
  ```ruby
@@ -37,6 +37,8 @@ class ProductSearchService
 
    def search(query:, category: nil, price_range: nil, limit: 20)
      query_embedding = generate_embedding(query)
+     # Normalize for better cosine similarity
+     query_embedding = Vectra::Vector.normalize(query_embedding)
 
      filter = {}
      filter[:category] = category if category
@@ -68,9 +70,12 @@ class ProductSearchService
 
    def generate_embedding(text)
      # Use your embedding model (OpenAI, sentence-transformers, etc.)
-     OpenAI::Client.new.embeddings(
+     embedding = OpenAI::Client.new.embeddings(
        parameters: { model: "text-embedding-ada-002", input: text }
      )["data"][0]["embedding"]
+
+     # Normalize embeddings before storing
+     Vectra::Vector.normalize(embedding)
    end
 
    def fallback_search(query, category)
@@ -257,14 +262,14 @@ class TenantDocumentService
      query_embedding = generate_embedding(query)
 
      # Ensure tenant isolation via namespace
-     results = @client.query(
-       index: "documents",
-       vector: query_embedding,
-       top_k: limit,
-       namespace: "tenant-#{@tenant_id}",
-       filter: { tenant_id: @tenant_id }, # Double protection
-       include_metadata: true
-     )
+     results = @client
+       .query("documents")
+       .vector(query_embedding)
+       .top_k(limit)
+       .namespace("tenant-#{@tenant_id}")
+       .filter(tenant_id: @tenant_id) # Double protection
+       .with_metadata
+       .execute
 
      # Audit log
      @audit.log_access(
@@ -325,9 +330,11 @@ class DocumentIndexer
 
    def index_large_dataset(documents, concurrency: 4)
      total = documents.size
-     processed = 0
      errors = []
 
+     # Create a batch client with the specified concurrency
+     batch_client = Vectra::Batch.new(@client, concurrency: concurrency)
+
      # Convert to vectors
      vectors = documents.map do |doc|
        {
@@ -337,25 +344,27 @@
        }
      end
 
-     # Process in async batches
-     result = @batch_client.upsert_async(
+     # Process in async batches with progress tracking
+     result = batch_client.upsert_async(
        index: "documents",
        vectors: vectors,
-       concurrency: concurrency,
-       on_progress: proc { |success, failed, total|
-         processed = success + failed
-         progress = (processed.to_f / total * 100).round(1)
+       chunk_size: 100,
+       on_progress: proc { |stats|
+         progress = stats[:percentage]
+         processed = stats[:processed]
+         total = stats[:total]
+         chunk = stats[:current_chunk] + 1
+         total_chunks = stats[:total_chunks]
+
          puts "Progress: #{progress}% (#{processed}/#{total})"
-       },
-       on_error: proc { |error, vector|
-         errors << { id: vector[:id], error: error.message }
+         puts " Chunk #{chunk}/#{total_chunks} | Success: #{stats[:success_count]}, Failed: #{stats[:failed_count]}"
        }
      )
 
      {
-       success: result[:success],
-       failed: result[:failed],
-       errors: errors,
+       success: result[:upserted_count],
+       failed: result[:errors].size,
+       errors: result[:errors],
        total: total
      }
    end
@@ -475,6 +484,37 @@ module VectraHelper
  end
  ```
 
+ ## Testing with the Memory Provider
+
+ For fast, deterministic tests you can run Vectra entirely in memory without any external services:
+
+ ```ruby
+ # config/initializers/vectra.rb (test environment)
+ Vectra.configure do |config|
+   config.provider = :memory if Rails.env.test?
+ end
+
+ RSpec.describe ProductSearchService do
+   let(:client) { Vectra::Client.new } # uses memory provider in test
+
+   before do
+     client.provider.clear! if client.provider.respond_to?(:clear!)
+
+     client.upsert(
+       index: "products",
+       vectors: [
+         { id: "p1", values: [0.1, 0.2], metadata: { name: "Test Product" } }
+       ]
+     )
+   end
+
+   it "returns relevant products" do
+     results = client.query(index: "products", vector: [0.1, 0.2], top_k: 5)
+     expect(results.ids).to include("p1")
+   end
+ end
+ ```
+
  ## Best Practices
 
  ### 1. Always Use Caching for Frequent Queries
@@ -43,17 +43,49 @@ client.upsert(
  ### Query (Search)
 
  ```ruby
+ # Classic API
  results = client.query(
    vector: [0.1, 0.2, 0.3],
    top_k: 5,
    include_metadata: true
  )
 
- results.matches.each do |match|
-   puts "ID: #{match['id']}, Score: #{match['score']}"
+ results.each do |match|
+   puts "ID: #{match.id}, Score: #{match.score}"
+ end
+
+ # Chainable Query Builder
+ results = client
+   .query("my-index")
+   .vector([0.1, 0.2, 0.3])
+   .top_k(5)
+   .with_metadata
+   .execute
+
+ results.each do |match|
+   puts "ID: #{match.id}, Score: #{match.score}"
  end
  ```
 
+ ### Normalize Embeddings
+
+ For better cosine similarity results, normalize your embeddings before upserting:
+
+ ```ruby
+ # Normalize OpenAI embeddings (recommended)
+ embedding = openai_response['data'][0]['embedding']
+ normalized = Vectra::Vector.normalize(embedding)
+ client.upsert(vectors: [{ id: 'doc-1', values: normalized }])
+
+ # Or normalize in place
+ vector = Vectra::Vector.new(id: 'doc-1', values: embedding)
+ vector.normalize! # L2 normalization (default, unit vector)
+ client.upsert(vectors: [vector])
+
+ # L1 normalization (sum of absolute values = 1)
+ vector.normalize!(type: :l1)
+ ```
+
  ### Delete Vectors
 
  ```ruby
@@ -68,6 +100,88 @@ puts "Index dimension: #{stats['dimension']}"
  puts "Vector count: #{stats['vector_count']}"
  ```
 
+ ### Health Check & Ping
+
+ ```ruby
+ # Quick health check
+ if client.healthy?
+   client.upsert(...)
+ else
+   handle_unhealthy_connection
+ end
+
+ # Ping with latency measurement
+ status = client.ping
+ puts "Provider: #{status[:provider]}"
+ puts "Healthy: #{status[:healthy]}"
+ puts "Latency: #{status[:latency_ms]}ms"
+
+ if status[:error]
+   puts "Error: #{status[:error_message]}"
+ end
+ ```
+
+ ### Hybrid Search (Semantic + Keyword)
+
+ Combine the best of both worlds: semantic understanding from vectors and exact keyword matching:
+
+ ```ruby
+ # Hybrid search with 70% semantic, 30% keyword
+ results = client.hybrid_search(
+   index: 'docs',
+   vector: embedding,        # Semantic search
+   text: 'ruby programming', # Keyword search
+   alpha: 0.7,               # 0.0 = pure keyword, 1.0 = pure semantic
+   top_k: 10
+ )
+
+ results.each do |match|
+   puts "#{match.id}: #{match.score}"
+ end
+
+ # Pure semantic (alpha = 1.0)
+ results = client.hybrid_search(
+   index: 'docs',
+   vector: embedding,
+   text: 'ruby',
+   alpha: 1.0
+ )
+
+ # Pure keyword (alpha = 0.0)
+ results = client.hybrid_search(
+   index: 'docs',
+   vector: embedding,
+   text: 'ruby programming',
+   alpha: 0.0
+ )
+ ```
+
+ **Provider Support:**
+ - **Qdrant**: ✅ Full support (prefetch + rescore API)
+ - **Weaviate**: ✅ Full support (hybrid GraphQL with BM25)
+ - **Pinecone**: ⚠️ Partial support (requires sparse vectors for true hybrid search)
+ - **pgvector**: ✅ Full support (combines vector similarity + PostgreSQL full-text search)
+
+ **Note for pgvector:** Your table needs a text column with a tsvector index:
+ ```sql
+ CREATE INDEX idx_content_fts ON my_index USING gin(to_tsvector('english', content));
+ ```
+
+ ### Dimension Validation
+
+ Vectra automatically validates that all vectors in a batch have the same dimension:
+
+ ```ruby
+ # This will raise ValidationError
+ vectors = [
+   { id: "vec1", values: [0.1, 0.2, 0.3] }, # 3 dimensions
+   { id: "vec2", values: [0.4, 0.5] }       # 2 dimensions - ERROR!
+ ]
+
+ client.upsert(vectors: vectors)
+ # => ValidationError: Inconsistent vector dimensions at index 1: expected 3, got 2
+ ```
+
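The dimension check described above can be sketched independently of Vectra. This is illustrative only; the gem's actual error class and message wording may differ (`ArgumentError` stands in for its `ValidationError`):

```ruby
# Illustrative batch dimension-consistency check (not Vectra's code).
# Expects hash vectors of the form { id:, values: }.
def validate_dimensions!(vectors)
  expected = vectors.first&.fetch(:values)&.size
  vectors.each_with_index do |vec, i|
    actual = vec.fetch(:values).size
    next if actual == expected

    raise ArgumentError,
          "Inconsistent vector dimensions at index #{i}: expected #{expected}, got #{actual}"
  end
  true
end

validate_dimensions!([{ id: 'a', values: [0.1, 0.2, 0.3] },
                      { id: 'b', values: [0.4, 0.5, 0.6] }]) # => true
```

Failing fast here is the point: a mismatched vector caught client-side produces an indexed error message instead of an opaque provider-side rejection mid-batch.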
  ## Configuration
 
  Create a configuration file (Rails: `config/initializers/vectra.rb`):
@@ -26,13 +26,47 @@ vectors = 10_000.times.map { |i| { id: "vec_#{i}", values: Array.new(384) { rand
  result = batch.upsert_async(
    index: 'my-index',
    vectors: vectors,
-   chunk_size: 100
+   chunk_size: 100,
+   on_progress: proc { |stats|
+     progress = stats[:percentage]
+     processed = stats[:processed]
+     total = stats[:total]
+     chunk = stats[:current_chunk] + 1
+     total_chunks = stats[:total_chunks]
+
+     puts "Progress: #{progress}% (#{processed}/#{total})"
+     puts " Chunk #{chunk}/#{total_chunks} | Success: #{stats[:success_count]}, Failed: #{stats[:failed_count]}"
+   }
  )
 
  puts "Upserted: #{result[:upserted_count]} vectors in #{result[:chunks]} chunks"
  puts "Errors: #{result[:errors].size}" if result[:errors].any?
  ```
 
+ ### Progress Tracking
+
+ Monitor batch operations in real time with progress callbacks:
+
+ ```ruby
+ batch.upsert_async(
+   index: 'my-index',
+   vectors: large_vector_array,
+   chunk_size: 100,
+   on_progress: proc { |stats|
+     # stats contains:
+     # - processed: number of processed vectors
+     # - total: total number of vectors
+     # - percentage: progress percentage (0-100)
+     # - current_chunk: current chunk index (0-based)
+     # - total_chunks: total number of chunks
+     # - success_count: number of successful chunks
+     # - failed_count: number of failed chunks
+
+     puts "Progress: #{stats[:percentage]}% (#{stats[:processed]}/#{stats[:total]})"
+   }
+ )
+ ```
+
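The v0.4.0 changelog notes that this progress tracking is made thread-safe with `Concurrent::AtomicFixnum`. The same idea can be sketched with only the standard library's `Mutex`; this is an illustration of the concept, not the gem's implementation:

```ruby
# Illustrative thread-safe progress counter using a stdlib Mutex
# (the gem itself relies on Concurrent::AtomicFixnum instead).
class ProgressCounter
  def initialize(total)
    @total = total
    @count = 0
    @lock = Mutex.new
  end

  # Atomically records `n` processed items and returns a stats snapshot.
  def add(n)
    @lock.synchronize do
      @count += n
      { processed: @count, total: @total,
        percentage: (@count * 100.0 / @total).round(1) }
    end
  end
end

counter = ProgressCounter.new(1_000)
threads = 10.times.map { Thread.new { 10.times { counter.add(10) } } }
threads.each(&:join)
# every one of the 1,000 items is counted exactly once
```

Without the mutex, concurrent chunk workers could interleave the read-modify-write of `@count` and silently drop updates.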
  ### Batch Delete
 
  ```ruby
@@ -16,6 +16,7 @@ Vectra supports multiple vector database providers. Choose the one that best fit
  | [**Qdrant**]({{ site.baseurl }}/providers/qdrant) | Open Source | Self-hosted, Performance |
  | [**Weaviate**]({{ site.baseurl }}/providers/weaviate) | Open Source | Semantic search, GraphQL |
  | [**pgvector**]({{ site.baseurl }}/providers/pgvector) | PostgreSQL | SQL integration, ACID |
+ | [**Memory**]({{ site.baseurl }}/providers/memory) | In-Memory | Testing, CI, local dev |
 
  ## Quick Comparison
 
@@ -75,6 +76,17 @@ client.upsert(vectors: [...])
  results = client.query(vector: [...], top_k: 5)
  ```
 
+ For **tests and CI** you can use the in-memory provider:
+
+ ```ruby
+ # config/initializers/vectra.rb (test environment)
+ Vectra.configure do |config|
+   config.provider = :memory if Rails.env.test?
+ end
+
+ client = Vectra::Client.new
+ ```
+
  ## Next Steps
 
  - [Getting Started Guide]({{ site.baseurl }}/guides/getting-started)