endee-llamaindex 0.1.2__py3-none-any.whl → 0.1.5a1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,589 @@
Metadata-Version: 2.4
Name: endee-llamaindex
Version: 0.1.5a1
Summary: Vector Database for Fast ANN Searches
Home-page: https://endee.io
Author: Endee Labs
Author-email: vineet@endee.io
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Requires-Python: >=3.6
Description-Content-Type: text/markdown
Requires-Dist: llama-index>=0.12.34
Requires-Dist: endee==0.1.9
Requires-Dist: fastembed>=0.3.0
Provides-Extra: gpu
Requires-Dist: fastembed-gpu>=0.3.0; extra == "gpu"
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

# Endee LlamaIndex Integration

Build powerful RAG applications with the Endee vector database and LlamaIndex.

---

## Table of Contents

1. [Installation](#1-installation)
2. [Testing locally](#testing-locally)
3. [Setting up Credentials](#2-setting-up-endee-and-openai-credentials)
4. [Creating Sample Documents](#3-creating-sample-documents)
5. [Setting up Endee with LlamaIndex](#4-setting-up-endee-with-llamaindex)
6. [Creating a Vector Index](#5-creating-a-vector-index-from-documents)
7. [Basic Retrieval](#6-basic-retrieval-with-query-engine)
8. [Using Metadata Filters](#7-using-metadata-filters)
9. [Advanced Filtering](#8-advanced-filtering-with-multiple-conditions)
10. [Custom Retriever Setup](#9-custom-retriever-setup)
11. [Custom Retriever with Query Engine](#10-using-a-custom-retriever-with-a-query-engine)
12. [Direct VectorStore Querying](#11-direct-vectorstore-querying)
13. [Saving and Loading Indexes](#12-saving-and-loading-indexes)
14. [Cleanup](#13-cleanup)

---

## 1. Installation

Get started by installing the required package.

### Basic Installation (Dense-only search)

```bash
pip install endee-llamaindex
```

> **Note:** This will automatically install `endee` and `llama-index` as dependencies.

### Full Installation (with Hybrid Search support)

For hybrid search capabilities (dense + sparse vectors), install with the `hybrid` extra:

```bash
pip install endee-llamaindex[hybrid]
```

This includes FastEmbed for sparse vector encoding (SPLADE, BM25, etc.).
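
Sparse vectors from encoders such as SPLADE are mostly zeros, so they are typically passed around as parallel index/value arrays rather than dense lists. A minimal illustration of that representation (the exact payload format Endee expects may differ):

```python
# Illustrative only: a "sparse" embedding keeps just the non-zero entries.
dense = [0.0, 0.0, 1.3, 0.0, 0.0, 0.7, 0.0, 2.1]

indices = [i for i, v in enumerate(dense) if v != 0.0]
values = [v for v in dense if v != 0.0]

print(indices)  # [2, 5, 7]
print(values)   # [1.3, 0.7, 2.1]
```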

### GPU-Accelerated Hybrid Search

For GPU-accelerated sparse encoding:

```bash
pip install endee-llamaindex[hybrid-gpu]
```

### All Features

To install all optional dependencies:

```bash
pip install endee-llamaindex[all]
```

### Installation Options Summary

| Installation | Use Case | Includes |
|--------------|----------|----------|
| `pip install endee-llamaindex` | Dense vector search only | Core dependencies |
| `pip install endee-llamaindex[hybrid]` | Dense + sparse hybrid search | + FastEmbed (CPU) |
| `pip install endee-llamaindex[hybrid-gpu]` | GPU-accelerated hybrid search | + FastEmbed (GPU) |
| `pip install endee-llamaindex[all]` | All features | All optional deps |

---

## Testing locally

From the project root:

```bash
python -m venv env && source env/bin/activate  # optional
pip install -e .
pip install pytest sentence-transformers huggingface-hub
export ENDEE_API_TOKEN="your-endee-api-token"  # or set in endee_llamaindex/test_cases/setup_class.py

cd endee_llamaindex/test_cases && PYTHONPATH=.. python -m pytest . -v
```

See [TESTING.md](TESTING.md) for more options and single-test runs.

---

## 2. Setting up Endee and OpenAI Credentials

Configure your API credentials for Endee and OpenAI.

```python
import os
from llama_index.embeddings.openai import OpenAIEmbedding

# Set API keys
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
endee_api_token = "your-endee-api-token"
```

> **Tip:** Store your API keys in environment variables for production use.
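
A minimal sketch of that pattern (the variable names are illustrative; `ENDEE_API_TOKEN` matches the one used in the "Testing locally" section above):

```python
import os

# Read credentials from the environment instead of hard-coding them.
endee_api_token = os.environ.get("ENDEE_API_TOKEN", "")
openai_api_key = os.environ.get("OPENAI_API_KEY", "")

if not endee_api_token:
    print("Warning: ENDEE_API_TOKEN is not set")
```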

---

## 3. Creating Sample Documents

Create documents with metadata for filtering and organization.

```python
from llama_index.core import Document

# Create sample documents with different categories and metadata
documents = [
    Document(
        text="Python is a high-level, interpreted programming language known for its readability and simplicity.",
        metadata={"category": "programming", "language": "python", "difficulty": "beginner"}
    ),
    Document(
        text="JavaScript is a scripting language that enables interactive web pages and is an essential part of web applications.",
        metadata={"category": "programming", "language": "javascript", "difficulty": "intermediate"}
    ),
    Document(
        text="Machine learning is a subset of artificial intelligence that provides systems the ability to automatically learn and improve from experience.",
        metadata={"category": "ai", "field": "machine_learning", "difficulty": "advanced"}
    ),
    Document(
        text="Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning.",
        metadata={"category": "ai", "field": "deep_learning", "difficulty": "advanced"}
    ),
    Document(
        text="Vector databases are specialized database systems designed to store and query high-dimensional vectors for similarity search.",
        metadata={"category": "database", "type": "vector", "difficulty": "intermediate"}
    ),
    Document(
        text="Endee is a vector database that provides secure and private vector search capabilities.",
        metadata={"category": "database", "type": "vector", "product": "endee", "difficulty": "intermediate"}
    )
]

print(f"Created {len(documents)} sample documents")
```

**Output:**
```
Created 6 sample documents
```

---

## 4. Setting up Endee with LlamaIndex

Initialize the Endee vector store and connect it to LlamaIndex.

```python
from endee_llamaindex import EndeeVectorStore
from llama_index.core import StorageContext
import time

# Create a unique index name with timestamp to avoid conflicts
timestamp = int(time.time())
index_name = f"llamaindex_demo_{timestamp}"

# Set up the embedding model
embed_model = OpenAIEmbedding()

# Get the embedding dimension
dimension = 1536  # OpenAI's default embedding dimension

# Initialize the Endee vector store
vector_store = EndeeVectorStore.from_params(
    api_token=endee_api_token,
    index_name=index_name,
    dimension=dimension,
    space_type="cosine",  # Can be "cosine", "l2", or "ip"
    precision="float16"   # Options: "binary", "float16", "float32", "int16d", "int8d" (default: "float16")
)

# Create storage context with our vector store
storage_context = StorageContext.from_defaults(vector_store=vector_store)

print(f"Initialized Endee vector store with index: {index_name}")
```

### Configuration Options

| Parameter | Description | Options |
|-----------|-------------|---------|
| `space_type` | Distance metric for similarity | `cosine`, `l2`, `ip` |
| `dimension` | Vector dimension | Must match embedding model |
| `precision` | Index precision setting | `"binary"`, `"float16"` (default), `"float32"`, `"int16d"`, `"int8d"` |
| `batch_size` | Vectors per API call | Default: `100` |
| `hybrid` | Enable hybrid search (dense + sparse) | Default: `False` |
| `M` | HNSW M parameter (bi-directional links) | Optional (backend default if not specified) |
| `ef_con` | HNSW ef_construction parameter | Optional (backend default if not specified) |

---

## 5. Creating a Vector Index from Documents

Build a searchable vector index from your documents.

```python
from llama_index.core import VectorStoreIndex

# Create a vector index
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    embed_model=embed_model
)

print("Vector index created successfully")
```

**Output:**
```
Vector index created successfully
```

---

## 6. Basic Retrieval with Query Engine

Create a query engine and perform semantic search.

```python
# Create a query engine
query_engine = index.as_query_engine()

# Ask a question
response = query_engine.query("What is Python?")

print("Query: What is Python?")
print("Response:")
print(response)
```

**Example Output:**
```
Query: What is Python?
Response:
Python is a high-level, interpreted programming language known for its readability and simplicity.
```

---

## 7. Using Metadata Filters

Filter search results based on document metadata.

```python
from llama_index.core.vector_stores.types import MetadataFilters, MetadataFilter, FilterOperator

# Create a filtered retriever to only search within AI-related documents
ai_filter = MetadataFilter(key="category", value="ai", operator=FilterOperator.EQ)
ai_filters = MetadataFilters(filters=[ai_filter])

# Create a filtered query engine
filtered_query_engine = index.as_query_engine(filters=ai_filters)

# Ask a general question but only using AI documents
response = filtered_query_engine.query("What is learning from data?")

print("Filtered Query (AI category only): What is learning from data?")
print("Response:")
print(response)
```

### Available Filter Operators

| Operator | Description | Backend Symbol | Example |
|----------|-------------|----------------|---------|
| `FilterOperator.EQ` | Equal to | `$eq` | `rating == 5` |
| `FilterOperator.IN` | In list | `$in` | `category in ["ai", "ml"]` |

> **Important Notes:**
> - Currently, the Endee LlamaIndex integration only supports **EQ** and **IN** metadata filters.
> - Range-style operators (LT, LTE, GT, GTE) are **not** supported in this adapter.

### Filter Examples

Here are practical examples showing how to use the supported filter operators:

```python
from llama_index.core.vector_stores.types import MetadataFilters, MetadataFilter, FilterOperator

# Example 1: Equal to (EQ)
# Find documents with rating equal to 5
rating_filter = MetadataFilter(key="rating", value=5, operator=FilterOperator.EQ)
filters = MetadataFilters(filters=[rating_filter])
# Backend: {"rating": {"$eq": 5}}

# Example 2: In list (IN)
# Find documents in AI or ML categories
category_filter = MetadataFilter(key="category", value=["ai", "ml"], operator=FilterOperator.IN)
filters = MetadataFilters(filters=[category_filter])
# Backend: {"category": {"$in": ["ai", "ml"]}}

# Example 3: Combined filters (AND logic)
# Find AI documents with rating equal to 5
filters = MetadataFilters(filters=[
    MetadataFilter(key="category", value="ai", operator=FilterOperator.EQ),
    MetadataFilter(key="rating", value=5, operator=FilterOperator.EQ)
])
# Backend: [{"category": {"$eq": "ai"}}, {"rating": {"$eq": 5}}]

# Create a query engine with filters
filtered_engine = index.as_query_engine(filters=filters)
response = filtered_engine.query("What is machine learning?")
```

---

## 8. Advanced Filtering with Multiple Conditions

Combine multiple metadata filters for precise results.

```python
# Create a more complex filter: database category AND intermediate difficulty
category_filter = MetadataFilter(key="category", value="database", operator=FilterOperator.EQ)
difficulty_filter = MetadataFilter(key="difficulty", value="intermediate", operator=FilterOperator.EQ)

complex_filters = MetadataFilters(filters=[category_filter, difficulty_filter])

# Create a query engine with the complex filters
complex_filtered_engine = index.as_query_engine(filters=complex_filters)

# Query with the complex filters
response = complex_filtered_engine.query("Tell me about databases")

print("Complex Filtered Query (database category AND intermediate difficulty): Tell me about databases")
print("Response:")
print(response)
```

> **Note:** Multiple filters are combined with AND logic by default.

---

## 9. Custom Retriever Setup

Create a custom retriever for fine-grained control over the retrieval process.

```python
from llama_index.core.retrievers import VectorIndexRetriever

# Create a retriever with custom parameters
retriever = VectorIndexRetriever(
    index=index,
    similarity_top_k=3,  # Return top 3 most similar results
    filters=ai_filters   # Use our AI category filter from before
)

# Retrieve nodes for a query
nodes = retriever.retrieve("What is deep learning?")

print(f"Retrieved {len(nodes)} nodes for query: 'What is deep learning?' (with AI category filter)")
print("\nRetrieved content:")
for i, node in enumerate(nodes):
    print(f"\nNode {i+1}:")
    print(f"Text: {node.node.text}")
    print(f"Metadata: {node.node.metadata}")
    print(f"Score: {node.score:.4f}")
```

**Example Output:**
```
Retrieved 2 nodes for query: 'What is deep learning?' (with AI category filter)

Node 1:
Text: Deep learning is part of a broader family of machine learning methods...
Metadata: {'category': 'ai', 'field': 'deep_learning', 'difficulty': 'advanced'}
Score: 0.8934

Node 2:
Text: Machine learning is a subset of artificial intelligence...
Metadata: {'category': 'ai', 'field': 'machine_learning', 'difficulty': 'advanced'}
Score: 0.7821
```

---

## 10. Using a Custom Retriever with a Query Engine

Combine your custom retriever with a query engine for enhanced control.

```python
from llama_index.core.query_engine import RetrieverQueryEngine

# Create a query engine with our custom retriever
custom_query_engine = RetrieverQueryEngine.from_args(
    retriever=retriever,
    verbose=True  # Enable verbose mode to see the retrieved nodes
)

# Query using the custom retriever query engine
response = custom_query_engine.query("Explain the difference between machine learning and deep learning")

print("\nFinal Response:")
print(response)
```

---

## 11. Direct VectorStore Querying

Query the Endee vector store directly, bypassing the LlamaIndex query engine.

```python
from llama_index.core.vector_stores.types import VectorStoreQuery

# Generate an embedding for our query
query_text = "What are vector databases?"
query_embedding = embed_model.get_text_embedding(query_text)

# Create a VectorStoreQuery
vector_store_query = VectorStoreQuery(
    query_embedding=query_embedding,
    similarity_top_k=2,
    filters=MetadataFilters(filters=[MetadataFilter(key="category", value="database", operator=FilterOperator.EQ)])
)

# Execute the query directly on the vector store
query_result = vector_store.query(vector_store_query)

print(f"Direct VectorStore query: '{query_text}'")
print(f"Retrieved {len(query_result.nodes)} results with database category filter:")
for i, (node, score) in enumerate(zip(query_result.nodes, query_result.similarities)):
    print(f"\nResult {i+1}:")
    print(f"Text: {node.text}")
    print(f"Metadata: {node.metadata}")
    print(f"Similarity score: {score:.4f}")
```

> **Tip:** Direct querying is useful when you need raw results without LLM processing.

---

## 12. Saving and Loading Indexes

Reconnect to your index in future sessions. Your vectors are stored in the cloud.

```python
# To reconnect to an existing index in a future session:
def reconnect_to_index(api_token, index_name):
    # Initialize the vector store with the existing index
    vector_store = EndeeVectorStore.from_params(
        api_token=api_token,
        index_name=index_name
    )

    # Load the index directly from the vector store
    index = VectorStoreIndex.from_vector_store(
        vector_store,
        embed_model=OpenAIEmbedding()
    )

    return index

# Example usage
reconnected_index = reconnect_to_index(endee_api_token, index_name)
query_engine = reconnected_index.as_query_engine()
response = query_engine.query("What is Endee?")
print(response)

print("To reconnect to this index in the future, use:\n")
print(f"API Token: {endee_api_token}")
print(f"Index Name: {index_name}")
```

> **Important:** Save your `index_name` to reconnect to your data later.

---

## 13. Cleanup

Delete the index when you're done to free up resources.

```python
# Uncomment to delete your index
# (assumes an initialized `endee` client; see the endee SDK documentation)
# endee.delete_index(index_name)
# print(f"Index {index_name} deleted")
```

> **Warning:** Deleting an index permanently removes all stored vectors and cannot be undone.

---

## Quick Reference

### EndeeVectorStore Parameters

| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `api_token` | `str` | Your Endee API token | Required |
| `index_name` | `str` | Name of the index | Required |
| `dimension` | `int` | Vector dimension | Required |
| `space_type` | `str` | Distance metric (`"cosine"`, `"l2"`, `"ip"`) | `"cosine"` |
| `precision` | `str` | Index precision (`"binary"`, `"float16"`, `"float32"`, `"int16d"`, `"int8d"`) | `"float16"` |
| `batch_size` | `int` | Vectors per API call | `100` |
| `hybrid` | `bool` | Enable hybrid search (dense + sparse vectors) | `False` |
| `sparse_dim` | `int` | Sparse dimension for hybrid index | `None` |
| `model_name` | `str` | Model name for sparse embeddings (e.g., `'splade_pp'`, `'bert_base'`) | `None` |
| `M` | `int` | Optional HNSW M parameter (bi-directional links per node) | `None` (backend default) |
| `ef_con` | `int` | Optional HNSW ef_construction parameter | `None` (backend default) |

### Distance Metrics

| Metric | Best For |
|--------|----------|
| `cosine` | Text embeddings, normalized vectors |
| `l2` | Image features, spatial data |
| `ip` | Recommendation systems, dot product similarity |
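
One detail worth knowing when choosing between `cosine` and `ip`: for unit-length vectors the two scores are identical, which is why `cosine` is the usual choice for normalized text embeddings. A quick pure-Python check of that identity:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine_sim(a, b):
    # Cosine similarity: dot product divided by the product of the norms
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

a, b = [3.0, 4.0], [1.0, 2.0]
lhs = cosine_sim(a, b)
rhs = dot(normalize(a), normalize(b))  # inner product on unit vectors
print(lhs, rhs)  # the two values agree up to floating-point error
```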

### Precision Settings

The `precision` parameter controls the vector storage format and affects memory usage and search performance:

| Precision | Description | Use Case |
|-----------|-------------|----------|
| `"float32"` | Full-precision floating point | Maximum accuracy, higher memory usage |
| `"float16"` | Half-precision floating point | Balanced accuracy and memory (default) |
| `"binary"` | Binary vectors | Extremely compact, best for binary embeddings |
| `"int8d"` | 8-bit integer quantization | High compression, good accuracy |
| `"int16d"` | 16-bit integer quantization | Better accuracy than `int8d`, moderate compression |
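
A rough way to compare the options is the size of the raw vector payload. The sketch below uses the bit widths implied by the precision names; treating `int8d`/`int16d` as 8/16 bits and `binary` as 1 bit per dimension is an assumption here, and the estimate ignores HNSW graph overhead, metadata, and the actual on-disk layout:

```python
# Illustrative payload-size estimate only; real index sizes may differ.
BITS_PER_DIM = {"float32": 32, "float16": 16, "int16d": 16, "int8d": 8, "binary": 1}

def payload_mb(num_vectors, dimension, precision):
    bits = num_vectors * dimension * BITS_PER_DIM[precision]
    return bits / 8 / 1024 / 1024  # bits -> bytes -> MiB

# 1M vectors at OpenAI's 1536 dimensions:
for p in ("float32", "float16", "int8d", "binary"):
    print(f"{p:>8}: {payload_mb(1_000_000, 1536, p):,.0f} MB")
```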

### HNSW Parameters (Optional)

HNSW (Hierarchical Navigable Small World) parameters control index construction and search quality. They are **optional**: if not provided, the Endee backend uses optimized defaults.

| Parameter | Description | Impact |
|-----------|-------------|--------|
| `M` | Number of bi-directional links per node | Higher `M` = better recall, more memory |
| `ef_con` | Size of the dynamic candidate list during construction | Higher `ef_con` = better quality, slower indexing |

**Example with custom HNSW parameters:**

```python
vector_store = EndeeVectorStore.from_params(
    api_token="your-token",
    index_name="custom_index",
    dimension=384,
    space_type="cosine",
    M=32,        # Optional: custom M value
    ef_con=256   # Optional: custom ef_construction
)
```

**Note:** Only specify `M` and `ef_con` if you need to fine-tune performance. The backend defaults work well for most use cases.

---
@@ -0,0 +1,8 @@
+ endee_llamaindex/__init__.py,sha256=ctCcicNLMO3LpXPGLwvQifvQLX7TEd8CYgFO6Nd9afc,83
+ endee_llamaindex/base.py,sha256=A_ha7LWOche6hdQ7OtKsGA2HADZDoRPq2cQTHgphpJM,31232
+ endee_llamaindex/constants.py,sha256=-RMx-48CsOklYnarwae5d-BrixCWQfzPawWB-ZgH6gA,2128
+ endee_llamaindex/utils.py,sha256=EIdDGZ8clesbiCJSgowonVBtGrimEwa-YV2qj05GMcE,5263
+ endee_llamaindex-0.1.5a1.dist-info/METADATA,sha256=sc9Du_5aUVGEj2Zp6ufampMdhg8-hsTDj9xQED4q288,18825
+ endee_llamaindex-0.1.5a1.dist-info/WHEEL,sha256=wUyA8OaulRlbfwMtmQsvNngGrxQHAvkKcvRmdizlJi0,92
+ endee_llamaindex-0.1.5a1.dist-info/top_level.txt,sha256=AReiKL0lBXSdKPsQlDusPIH_qbS_txOSUctuCR0rRNQ,17
+ endee_llamaindex-0.1.5a1.dist-info/RECORD,,
@@ -1,5 +1,5 @@
  Wheel-Version: 1.0
- Generator: setuptools (80.9.0)
+ Generator: setuptools (80.10.2)
  Root-Is-Purelib: true
  Tag: py3-none-any