@mastra/lance 1.0.1 → 1.0.2-alpha.0

package/CHANGELOG.md CHANGED
@@ -1,5 +1,14 @@
  # @mastra/lance

+ ## 1.0.2-alpha.0
+
+ ### Patch Changes
+
+ - Add a clear runtime error when `queryVector` is omitted for vector stores that require a vector for queries. Previously, omitting `queryVector` would produce confusing SDK-level errors; now each store throws a structured `MastraError` with `ErrorCategory.USER` explaining that metadata-only queries are not supported by that backend. ([#13286](https://github.com/mastra-ai/mastra/pull/13286))
+
+ - Updated dependencies [[`df170fd`](https://github.com/mastra-ai/mastra/commit/df170fd139b55f845bfd2de8488b16435bd3d0da), [`ae55343`](https://github.com/mastra-ai/mastra/commit/ae5534397fc006fd6eef3e4f80c235bcdc9289ef), [`c290cec`](https://github.com/mastra-ai/mastra/commit/c290cec5bf9107225de42942b56b487107aa9dce), [`f03e794`](https://github.com/mastra-ai/mastra/commit/f03e794630f812b56e95aad54f7b1993dc003add), [`aa4a5ae`](https://github.com/mastra-ai/mastra/commit/aa4a5aedb80d8d6837bab8cbb2e301215d1ba3e9), [`de3f584`](https://github.com/mastra-ai/mastra/commit/de3f58408752a8d80a295275c7f23fc306cf7f4f), [`d3fb010`](https://github.com/mastra-ai/mastra/commit/d3fb010c98f575f1c0614452667396e2653815f6), [`702ee1c`](https://github.com/mastra-ai/mastra/commit/702ee1c41be67cc532b4dbe89bcb62143508f6f0), [`f495051`](https://github.com/mastra-ai/mastra/commit/f495051eb6496a720f637fc85b6d69941c12554c), [`e622f1d`](https://github.com/mastra-ai/mastra/commit/e622f1d3ab346a8e6aca6d1fe2eac99bd961e50b), [`861f111`](https://github.com/mastra-ai/mastra/commit/861f11189211b20ddb70d8df81a6b901fc78d11e), [`00f43e8`](https://github.com/mastra-ai/mastra/commit/00f43e8e97a80c82b27d5bd30494f10a715a1df9), [`1b6f651`](https://github.com/mastra-ai/mastra/commit/1b6f65127d4a0d6c38d0a1055cb84527db529d6b), [`96a1702`](https://github.com/mastra-ai/mastra/commit/96a1702ce362c50dda20c8b4a228b4ad1a36a17a), [`cb9f921`](https://github.com/mastra-ai/mastra/commit/cb9f921320913975657abb1404855d8c510f7ac5), [`114e7c1`](https://github.com/mastra-ai/mastra/commit/114e7c146ac682925f0fb37376c1be70e5d6e6e5), [`1b6f651`](https://github.com/mastra-ai/mastra/commit/1b6f65127d4a0d6c38d0a1055cb84527db529d6b), [`72df4a8`](https://github.com/mastra-ai/mastra/commit/72df4a8f9bf1a20cfd3d9006a4fdb597ad56d10a)]:
+ - @mastra/core@1.8.0-alpha.0
+
  ## 1.0.1

  ### Patch Changes
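The changelog entry above describes the new guard: stores that cannot run metadata-only queries now throw a structured `MastraError` with `ErrorCategory.USER` rather than surfacing an opaque SDK-level failure. A minimal, self-contained sketch of that pattern follows; the `MastraError` class and `ErrorCategory` enum here are simplified stand-ins that mirror the changelog wording, not the actual `@mastra/core` implementation:

```typescript
// Simplified stand-ins for @mastra/core's error types (illustrative only).
enum ErrorCategory {
  USER = 'USER',
  SYSTEM = 'SYSTEM',
}

class MastraError extends Error {
  constructor(
    message: string,
    public readonly category: ErrorCategory,
  ) {
    super(message);
    this.name = 'MastraError';
  }
}

interface QueryArgs {
  indexName: string;
  queryVector?: number[];
  topK?: number;
}

// Guard at the top of a store's query(): fail fast with a clear,
// user-facing error instead of letting the backend SDK produce a
// confusing failure deeper in the call stack.
function assertQueryVector(args: QueryArgs): number[] {
  if (!args.queryVector) {
    throw new MastraError(
      'This vector store requires a queryVector; metadata-only queries are not supported.',
      ErrorCategory.USER,
    );
  }
  return args.queryVector;
}
```

The point of the change is that the error is categorized as a user error at the Mastra layer, so callers can distinguish "you called this wrong" from a backend outage.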
@@ -3,7 +3,7 @@ name: mastra-lance
  description: Documentation for @mastra/lance. Use when working with @mastra/lance APIs, configuration, or implementation.
  metadata:
  package: "@mastra/lance"
- version: "1.0.1"
+ version: "1.0.2-alpha.0"
  ---

  ## When to use
@@ -1,5 +1,5 @@
  {
- "version": "1.0.1",
+ "version": "1.0.2-alpha.0",
  "package": "@mastra/lance",
  "exports": {},
  "modules": {}
@@ -7,22 +7,22 @@ After generating embeddings, you need to store them in a database that supports
  **MongoDB**:

  ```ts
- import { MongoDBVector } from "@mastra/mongodb";
+ import { MongoDBVector } from '@mastra/mongodb'

  const store = new MongoDBVector({
  id: 'mongodb-vector',
  uri: process.env.MONGODB_URI,
  dbName: process.env.MONGODB_DATABASE,
- });
+ })
  await store.createIndex({
- indexName: "myCollection",
+ indexName: 'myCollection',
  dimension: 1536,
- });
+ })
  await store.upsert({
- indexName: "myCollection",
+ indexName: 'myCollection',
  vectors: embeddings,
- metadata: chunks.map((chunk) => ({ text: chunk.text })),
- });
+ metadata: chunks.map(chunk => ({ text: chunk.text })),
+ })
  ```

  ### Using MongoDB Atlas Vector search
@@ -32,23 +32,23 @@ For detailed setup instructions and best practices, see the [official MongoDB At
  **PgVector**:

  ```ts
- import { PgVector } from "@mastra/pg";
+ import { PgVector } from '@mastra/pg'

  const store = new PgVector({
  id: 'pg-vector',
  connectionString: process.env.POSTGRES_CONNECTION_STRING,
- });
+ })

  await store.createIndex({
- indexName: "myCollection",
+ indexName: 'myCollection',
  dimension: 1536,
- });
+ })

  await store.upsert({
- indexName: "myCollection",
+ indexName: 'myCollection',
  vectors: embeddings,
- metadata: chunks.map((chunk) => ({ text: chunk.text })),
- });
+ metadata: chunks.map(chunk => ({ text: chunk.text })),
+ })
  ```

  ### Using PostgreSQL with pgvector
@@ -58,50 +58,50 @@ PostgreSQL with the pgvector extension is a good solution for teams already usin
  **Pinecone**:

  ```ts
- import { PineconeVector } from "@mastra/pinecone";
+ import { PineconeVector } from '@mastra/pinecone'

  const store = new PineconeVector({
  id: 'pinecone-vector',
  apiKey: process.env.PINECONE_API_KEY,
- });
+ })
  await store.createIndex({
- indexName: "myCollection",
+ indexName: 'myCollection',
  dimension: 1536,
- });
+ })
  await store.upsert({
- indexName: "myCollection",
+ indexName: 'myCollection',
  vectors: embeddings,
- metadata: chunks.map((chunk) => ({ text: chunk.text })),
- });
+ metadata: chunks.map(chunk => ({ text: chunk.text })),
+ })
  ```

  **Qdrant**:

  ```ts
- import { QdrantVector } from "@mastra/qdrant";
+ import { QdrantVector } from '@mastra/qdrant'

  const store = new QdrantVector({
  id: 'qdrant-vector',
  url: process.env.QDRANT_URL,
  apiKey: process.env.QDRANT_API_KEY,
- });
+ })

  await store.createIndex({
- indexName: "myCollection",
+ indexName: 'myCollection',
  dimension: 1536,
- });
+ })

  await store.upsert({
- indexName: "myCollection",
+ indexName: 'myCollection',
  vectors: embeddings,
- metadata: chunks.map((chunk) => ({ text: chunk.text })),
- });
+ metadata: chunks.map(chunk => ({ text: chunk.text })),
+ })
  ```

  **Chroma**:

  ```ts
- import { ChromaVector } from "@mastra/chroma";
+ import { ChromaVector } from '@mastra/chroma'

  // Running Chroma locally
  // const store = new ChromaVector()
@@ -112,151 +112,151 @@ const store = new ChromaVector({
  apiKey: process.env.CHROMA_API_KEY,
  tenant: process.env.CHROMA_TENANT,
  database: process.env.CHROMA_DATABASE,
- });
+ })

  await store.createIndex({
- indexName: "myCollection",
+ indexName: 'myCollection',
  dimension: 1536,
- });
+ })

  await store.upsert({
- indexName: "myCollection",
+ indexName: 'myCollection',
  vectors: embeddings,
- metadata: chunks.map((chunk) => ({ text: chunk.text })),
- });
+ metadata: chunks.map(chunk => ({ text: chunk.text })),
+ })
  ```

  **Astra**:

  ```ts
- import { AstraVector } from "@mastra/astra";
+ import { AstraVector } from '@mastra/astra'

  const store = new AstraVector({
  id: 'astra-vector',
  token: process.env.ASTRA_DB_TOKEN,
  endpoint: process.env.ASTRA_DB_ENDPOINT,
  keyspace: process.env.ASTRA_DB_KEYSPACE,
- });
+ })

  await store.createIndex({
- indexName: "myCollection",
+ indexName: 'myCollection',
  dimension: 1536,
- });
+ })

  await store.upsert({
- indexName: "myCollection",
+ indexName: 'myCollection',
  vectors: embeddings,
- metadata: chunks.map((chunk) => ({ text: chunk.text })),
- });
+ metadata: chunks.map(chunk => ({ text: chunk.text })),
+ })
  ```

  **libSQL**:

  ```ts
- import { LibSQLVector } from "@mastra/core/vector/libsql";
+ import { LibSQLVector } from '@mastra/core/vector/libsql'

  const store = new LibSQLVector({
  id: 'libsql-vector',
  url: process.env.DATABASE_URL,
  authToken: process.env.DATABASE_AUTH_TOKEN, // Optional: for Turso cloud databases
- });
+ })

  await store.createIndex({
- indexName: "myCollection",
+ indexName: 'myCollection',
  dimension: 1536,
- });
+ })

  await store.upsert({
- indexName: "myCollection",
+ indexName: 'myCollection',
  vectors: embeddings,
- metadata: chunks.map((chunk) => ({ text: chunk.text })),
- });
+ metadata: chunks.map(chunk => ({ text: chunk.text })),
+ })
  ```

  **Upstash**:

  ```ts
- import { UpstashVector } from "@mastra/upstash";
+ import { UpstashVector } from '@mastra/upstash'

  // In upstash they refer to the store as an index
  const store = new UpstashVector({
  id: 'upstash-vector',
  url: process.env.UPSTASH_URL,
  token: process.env.UPSTASH_TOKEN,
- });
+ })

  // There is no store.createIndex call here, Upstash creates indexes (known as namespaces in Upstash) automatically
  // when you upsert if that namespace does not exist yet.
  await store.upsert({
- indexName: "myCollection", // the namespace name in Upstash
+ indexName: 'myCollection', // the namespace name in Upstash
  vectors: embeddings,
- metadata: chunks.map((chunk) => ({ text: chunk.text })),
- });
+ metadata: chunks.map(chunk => ({ text: chunk.text })),
+ })
  ```

  **Cloudflare**:

  ```ts
- import { CloudflareVector } from "@mastra/vectorize";
+ import { CloudflareVector } from '@mastra/vectorize'

  const store = new CloudflareVector({
  id: 'cloudflare-vector',
  accountId: process.env.CF_ACCOUNT_ID,
  apiToken: process.env.CF_API_TOKEN,
- });
+ })
  await store.createIndex({
- indexName: "myCollection",
+ indexName: 'myCollection',
  dimension: 1536,
- });
+ })
  await store.upsert({
- indexName: "myCollection",
+ indexName: 'myCollection',
  vectors: embeddings,
- metadata: chunks.map((chunk) => ({ text: chunk.text })),
- });
+ metadata: chunks.map(chunk => ({ text: chunk.text })),
+ })
  ```

  **OpenSearch**:

  ```ts
- import { OpenSearchVector } from "@mastra/opensearch";
+ import { OpenSearchVector } from '@mastra/opensearch'

- const store = new OpenSearchVector({ id: "opensearch", node: process.env.OPENSEARCH_URL });
+ const store = new OpenSearchVector({ id: 'opensearch', node: process.env.OPENSEARCH_URL })

  await store.createIndex({
- indexName: "my-collection",
+ indexName: 'my-collection',
  dimension: 1536,
- });
+ })

  await store.upsert({
- indexName: "my-collection",
+ indexName: 'my-collection',
  vectors: embeddings,
- metadata: chunks.map((chunk) => ({ text: chunk.text })),
- });
+ metadata: chunks.map(chunk => ({ text: chunk.text })),
+ })
  ```

  **ElasticSearch**:

  ```ts
- import { ElasticSearchVector } from "@mastra/elasticsearch";
+ import { ElasticSearchVector } from '@mastra/elasticsearch'

  const store = new ElasticSearchVector({
  id: 'elasticsearch-vector',
  url: process.env.ELASTICSEARCH_URL,
  auth: {
- apiKey : process.env.ELASTICSEARCH_API_KEY
- }
- });
+ apiKey: process.env.ELASTICSEARCH_API_KEY,
+ },
+ })

  await store.createIndex({
- indexName: "my-collection",
+ indexName: 'my-collection',
  dimension: 1536,
- });
+ })

  await store.upsert({
- indexName: "my-collection",
+ indexName: 'my-collection',
  vectors: embeddings,
- metadata: chunks.map((chunk) => ({ text: chunk.text })),
- });
+ metadata: chunks.map(chunk => ({ text: chunk.text })),
+ })
  ```

  ### Using Elasticsearch
@@ -266,7 +266,7 @@ For detailed setup instructions and best practices, see the [official Elasticsea
  **Couchbase**:

  ```ts
- import { CouchbaseVector } from "@mastra/couchbase";
+ import { CouchbaseVector } from '@mastra/couchbase'

  const store = new CouchbaseVector({
  id: 'couchbase-vector',
@@ -276,36 +276,36 @@ const store = new CouchbaseVector({
  bucketName: process.env.COUCHBASE_BUCKET,
  scopeName: process.env.COUCHBASE_SCOPE,
  collectionName: process.env.COUCHBASE_COLLECTION,
- });
+ })
  await store.createIndex({
- indexName: "myCollection",
+ indexName: 'myCollection',
  dimension: 1536,
- });
+ })
  await store.upsert({
- indexName: "myCollection",
+ indexName: 'myCollection',
  vectors: embeddings,
- metadata: chunks.map((chunk) => ({ text: chunk.text })),
- });
+ metadata: chunks.map(chunk => ({ text: chunk.text })),
+ })
  ```

  **Lance**:

  ```ts
- import { LanceVectorStore } from "@mastra/lance";
+ import { LanceVectorStore } from '@mastra/lance'

- const store = await LanceVectorStore.create("/path/to/db");
+ const store = await LanceVectorStore.create('/path/to/db')

  await store.createIndex({
- tableName: "myVectors",
- indexName: "myCollection",
+ tableName: 'myVectors',
+ indexName: 'myCollection',
  dimension: 1536,
- });
+ })

  await store.upsert({
- tableName: "myVectors",
+ tableName: 'myVectors',
  vectors: embeddings,
- metadata: chunks.map((chunk) => ({ text: chunk.text })),
- });
+ metadata: chunks.map(chunk => ({ text: chunk.text })),
+ })
  ```

  ### Using LanceDB
@@ -315,26 +315,26 @@ LanceDB is an embedded vector database built on the Lance columnar format, suita
  **S3 Vectors**:

  ```ts
- import { S3Vectors } from "@mastra/s3vectors";
+ import { S3Vectors } from '@mastra/s3vectors'

  const store = new S3Vectors({
  id: 's3-vectors',
- vectorBucketName: "my-vector-bucket",
+ vectorBucketName: 'my-vector-bucket',
  clientConfig: {
- region: "us-east-1",
+ region: 'us-east-1',
  },
- nonFilterableMetadataKeys: ["content"],
- });
+ nonFilterableMetadataKeys: ['content'],
+ })

  await store.createIndex({
- indexName: "my-index",
+ indexName: 'my-index',
  dimension: 1536,
- });
+ })
  await store.upsert({
- indexName: "my-index",
+ indexName: 'my-index',
  vectors: embeddings,
- metadata: chunks.map((chunk) => ({ text: chunk.text })),
- });
+ metadata: chunks.map(chunk => ({ text: chunk.text })),
+ })
  ```

  ## Using Vector Storage
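Every store snippet in the diff above exercises the same `createIndex`/`upsert` call shape, which is why the hunks differ only in construction and quote style. A self-contained in-memory sketch of that shared surface, inferred from the snippets rather than imported from `@mastra/core`:

```typescript
// Inferred from the snippets above; NOT the actual @mastra/core interface.
interface CreateIndexArgs {
  indexName: string;
  dimension: number;
}

interface UpsertArgs {
  indexName: string;
  vectors: number[][];
  metadata?: Record<string, unknown>[];
}

// Minimal in-memory stand-in mirroring the shared createIndex/upsert shape,
// useful for unit-testing code that only depends on this surface.
class InMemoryVectorStore {
  private indexes = new Map<
    string,
    { dimension: number; rows: { vector: number[]; metadata?: Record<string, unknown> }[] }
  >();

  async createIndex({ indexName, dimension }: CreateIndexArgs): Promise<void> {
    if (!this.indexes.has(indexName)) {
      this.indexes.set(indexName, { dimension, rows: [] });
    }
  }

  async upsert({ indexName, vectors, metadata }: UpsertArgs): Promise<string[]> {
    const index = this.indexes.get(indexName);
    if (!index) throw new Error(`Index "${indexName}" does not exist`);
    return vectors.map((vector, i) => {
      // Enforce the dimension declared at createIndex time.
      if (vector.length !== index.dimension) {
        throw new Error(`Expected dimension ${index.dimension}, got ${vector.length}`);
      }
      index.rows.push({ vector, metadata: metadata?.[i] });
      return `vec-${index.rows.length - 1}`;
    });
  }
}
```

Because the surface is uniform, swapping one backend for another in the snippets above is largely a matter of changing the constructor call.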
@@ -348,9 +348,9 @@ Before storing embeddings, you need to create an index with the appropriate dime
  ```ts
  // Create an index with dimension 1536 (for text-embedding-3-small)
  await store.createIndex({
- indexName: "myCollection",
+ indexName: 'myCollection',
  dimension: 1536,
- });
+ })
  ```

  The dimension size must match the output dimension of your chosen embedding model. Common dimension sizes are:
@@ -527,13 +527,13 @@ After creating an index, you can store embeddings along with their basic metadat
  ```ts
  // Store embeddings with their corresponding metadata
  await store.upsert({
- indexName: "myCollection", // index name
+ indexName: 'myCollection', // index name
  vectors: embeddings, // array of embedding vectors
- metadata: chunks.map((chunk) => ({
+ metadata: chunks.map(chunk => ({
  text: chunk.text, // The original text content
  id: chunk.id, // Optional unique identifier
  })),
- });
+ })
  ```

  The upsert operation:
@@ -552,9 +552,9 @@ Vector stores support rich metadata (any JSON-serializable fields) for filtering
  ```ts
  // Store embeddings with rich metadata for better organization and filtering
  await store.upsert({
- indexName: "myCollection",
+ indexName: 'myCollection',
  vectors: embeddings,
- metadata: chunks.map((chunk) => ({
+ metadata: chunks.map(chunk => ({
  // Basic content
  text: chunk.text,
  id: chunk.id,
@@ -565,14 +565,14 @@ await store.upsert({

  // Temporal metadata
  createdAt: new Date().toISOString(),
- version: "1.0",
+ version: '1.0',

  // Custom fields
  language: chunk.language,
  author: chunk.author,
  confidenceScore: chunk.score,
  })),
- });
+ })
  ```

  Key metadata considerations:
@@ -592,9 +592,9 @@ The most common use case is deleting all vectors for a specific document when a
  ```ts
  // Delete all vectors for a specific document
  await store.deleteVectors({
- indexName: "myCollection",
- filter: { docId: "document-123" },
- });
+ indexName: 'myCollection',
+ filter: { docId: 'document-123' },
+ })
  ```

  This is particularly useful when:
@@ -610,22 +610,19 @@ You can also use complex filters to delete vectors matching multiple conditions:
  ```ts
  // Delete all vectors for multiple documents
  await store.deleteVectors({
- indexName: "myCollection",
+ indexName: 'myCollection',
  filter: {
- docId: { $in: ["doc-1", "doc-2", "doc-3"] },
+ docId: { $in: ['doc-1', 'doc-2', 'doc-3'] },
  },
- });
+ })

  // Delete vectors for a specific user's documents
  await store.deleteVectors({
- indexName: "myCollection",
+ indexName: 'myCollection',
  filter: {
- $and: [
- { userId: "user-123" },
- { status: "archived" },
- ],
+ $and: [{ userId: 'user-123' }, { status: 'archived' }],
  },
- });
+ })
  ```

  ### Delete by Vector IDs
@@ -635,9 +632,9 @@ If you have specific vector IDs to delete, you can pass them directly:
  ```ts
  // Delete specific vectors by their IDs
  await store.deleteVectors({
- indexName: "myCollection",
- ids: ["vec-1", "vec-2", "vec-3"],
- });
+ indexName: 'myCollection',
+ ids: ['vec-1', 'vec-2', 'vec-3'],
+ })
  ```

  ## Best Practices
@@ -35,18 +35,18 @@ bun add @mastra/lance@latest
  ### Basic Storage Usage

  ```typescript
- import { LanceStorage } from "@mastra/lance";
+ import { LanceStorage } from '@mastra/lance'

  // Connect to a local database
- const storage = await LanceStorage.create("my-storage", "/path/to/db");
+ const storage = await LanceStorage.create('my-storage', '/path/to/db')

  // Connect to a LanceDB cloud database
- const storage = await LanceStorage.create("my-storage", "db://host:port");
+ const storage = await LanceStorage.create('my-storage', 'db://host:port')

  // Connect to a cloud database with custom options
- const storage = await LanceStorage.create("my-storage", "s3://bucket/db", {
- storageOptions: { timeout: "60s" },
- });
+ const storage = await LanceStorage.create('my-storage', 's3://bucket/db', {
+ storageOptions: { timeout: '60s' },
+ })
  ```

  ## Parameters
@@ -76,29 +76,29 @@ The LanceStorage implementation automatically handles schema creation and update
  When you pass storage to the Mastra class, `init()` is called automatically before any storage operation:

  ```typescript
- import { Mastra } from "@mastra/core";
- import { LanceStorage } from "@mastra/lance";
+ import { Mastra } from '@mastra/core'
+ import { LanceStorage } from '@mastra/lance'

- const storage = await LanceStorage.create("my-storage", "/path/to/db");
+ const storage = await LanceStorage.create('my-storage', '/path/to/db')

  const mastra = new Mastra({
  storage, // init() is called automatically
- });
+ })
  ```

  If you're using storage directly without Mastra, you must call `init()` explicitly to create the tables:

  ```typescript
- import { LanceStorage } from "@mastra/lance";
+ import { LanceStorage } from '@mastra/lance'

- const storage = await LanceStorage.create("my-storage", "/path/to/db");
+ const storage = await LanceStorage.create('my-storage', '/path/to/db')

  // Required when using storage directly
- await storage.init();
+ await storage.init()

  // Access domain-specific stores via getStore()
- const memoryStore = await storage.getStore('memory');
- const thread = await memoryStore?.getThreadById({ threadId: "..." });
+ const memoryStore = await storage.getStore('memory')
+ const thread = await memoryStore?.getThreadById({ threadId: '...' })
  ```

  > **Warning:** If `init()` is not called, tables won't be created and storage operations will fail silently or throw errors.