vectoriz 0.0.3.tar.gz → 0.0.5.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,110 @@
+ Metadata-Version: 2.4
+ Name: vectoriz
+ Version: 0.0.5
+ Summary: Python library for creating vectorized data from text or files.
+ Home-page: https://github.com/PedroHenriqueDevBR/vectoriz
+ Author: PedroHenriqueDevBR
+ Author-email: pedro.henrique.particular@gmail.com
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Operating System :: OS Independent
+ Requires-Python: >=3.12
+ Description-Content-Type: text/markdown
+ Requires-Dist: faiss-cpu==1.10.0
+ Requires-Dist: numpy==2.2.4
+ Requires-Dist: sentence-transformers==4.0.2
+ Requires-Dist: python-docx==1.1.2
+ Dynamic: author
+ Dynamic: author-email
+ Dynamic: classifier
+ Dynamic: description
+ Dynamic: description-content-type
+ Dynamic: home-page
+ Dynamic: requires-dist
+ Dynamic: requires-python
+ Dynamic: summary
+
+ # Vectoriz
+
+ [![PyPI version](https://badge.fury.io/py/vectoriz.svg)](https://pypi.org/project/vectoriz/)
+
+ [![GitHub license](https://img.shields.io/github/license/PedroHenriqueDevBR/vectoriz)](https://github.com/PedroHenriqueDevBR/vectoriz/blob/main/LICENSE)
+
+ [![Python Version](https://img.shields.io/badge/python-3.12%2B-blue)](https://www.python.org/downloads/)
+
+ [![GitHub issues](https://img.shields.io/github/issues/PedroHenriqueDevBR/vectoriz)](https://github.com/PedroHenriqueDevBR/vectoriz/issues)
+
+ [![GitHub stars](https://img.shields.io/github/stars/PedroHenriqueDevBR/vectoriz)](https://github.com/PedroHenriqueDevBR/vectoriz/stargazers)
+
+ [![GitHub forks](https://img.shields.io/github/forks/PedroHenriqueDevBR/vectoriz)](https://github.com/PedroHenriqueDevBR/vectoriz/network)
+
+ Vectoriz is available on PyPI and can be installed via pip:
+
+ ```bash
+ pip install vectoriz
+ ```
+
+ A tool for generating vector embeddings for Retrieval-Augmented Generation (RAG) applications.
+
+ ## Overview
+
+ This project provides utilities to create, manage, and optimize vector embeddings for use in RAG systems. It streamlines the process of converting documents and data sources into vector representations suitable for semantic search and retrieval.
+
+ ## Features
+
+ - Document processing and chunking
+ - Vector embedding generation using various models
+ - Vector database integration
+ - Optimization tools for RAG performance
+ - Easy-to-use API for embedding creation
+
+ ## Installation
+
+ ```bash
+ git clone https://github.com/PedroHenriqueDevBR/vectoriz.git
+ cd vectoriz
+ pip install -r requirements.txt
+ ```
+
+ ## Usage
+
+ ```python
+ # Initial information
+ index_db_path = "./data/faiss_db.index"  # path to save/load the FAISS index
+ np_db_path = "./data/np_db.npz"  # path to save/load the NumPy data
+ directory_path = "/home/username/Documents/"  # directory containing the .txt and .docx files
+
+ # Class instances
+ transformer = TokenTransformer()
+ files_features = FilesFeature()
+
+ # Load the files and build a FileArgument (packs embeddings, chunk_names and text_list)
+ argument = files_features.load_all_files_from_directory(directory_path)
+
+ # Create the FAISS index used for queries
+ token_data = transformer.create_index(argument.text_list)
+ index = token_data.index
+
+ # To load saved data from the vector DB, use:
+ vector_client = VectorDBClient()
+ vector_client.load_data(index_db_path, np_db_path)
+ index = vector_client.faiss_index
+ argument = vector_client.file_argument
+
+ # To save data to the vector DB, use:
+ vector_client = VectorDBClient(index, argument)
+ vector_client.save_data(index_db_path, np_db_path)
+
+ # To search the index:
+ query = input(">>> ")
+ amount_content = 1
+ response = transformer.search(query, index, argument.text_list, amount_content)
+ print(response)
+ ```
+
+ ## Contributing
+
+ Contributions are welcome! Please feel free to submit a Pull Request.
+
+ ## License
+
+ This project is licensed under the MIT License - see the LICENSE file for details.
@@ -0,0 +1,85 @@
+ # Vectoriz
+
+ [![PyPI version](https://badge.fury.io/py/vectoriz.svg)](https://pypi.org/project/vectoriz/)
+
+ [![GitHub license](https://img.shields.io/github/license/PedroHenriqueDevBR/vectoriz)](https://github.com/PedroHenriqueDevBR/vectoriz/blob/main/LICENSE)
+
+ [![Python Version](https://img.shields.io/badge/python-3.12%2B-blue)](https://www.python.org/downloads/)
+
+ [![GitHub issues](https://img.shields.io/github/issues/PedroHenriqueDevBR/vectoriz)](https://github.com/PedroHenriqueDevBR/vectoriz/issues)
+
+ [![GitHub stars](https://img.shields.io/github/stars/PedroHenriqueDevBR/vectoriz)](https://github.com/PedroHenriqueDevBR/vectoriz/stargazers)
+
+ [![GitHub forks](https://img.shields.io/github/forks/PedroHenriqueDevBR/vectoriz)](https://github.com/PedroHenriqueDevBR/vectoriz/network)
+
+ Vectoriz is available on PyPI and can be installed via pip:
+
+ ```bash
+ pip install vectoriz
+ ```
+
+ A tool for generating vector embeddings for Retrieval-Augmented Generation (RAG) applications.
+
+ ## Overview
+
+ This project provides utilities to create, manage, and optimize vector embeddings for use in RAG systems. It streamlines the process of converting documents and data sources into vector representations suitable for semantic search and retrieval.
+
+ ## Features
+
+ - Document processing and chunking
+ - Vector embedding generation using various models
+ - Vector database integration
+ - Optimization tools for RAG performance
+ - Easy-to-use API for embedding creation
+
+ ## Installation
+
+ ```bash
+ git clone https://github.com/PedroHenriqueDevBR/vectoriz.git
+ cd vectoriz
+ pip install -r requirements.txt
+ ```
+
+ ## Usage
+
+ ```python
+ # Initial information
+ index_db_path = "./data/faiss_db.index"  # path to save/load the FAISS index
+ np_db_path = "./data/np_db.npz"  # path to save/load the NumPy data
+ directory_path = "/home/username/Documents/"  # directory containing the .txt and .docx files
+
+ # Class instances
+ transformer = TokenTransformer()
+ files_features = FilesFeature()
+
+ # Load the files and build a FileArgument (packs embeddings, chunk_names and text_list)
+ argument = files_features.load_all_files_from_directory(directory_path)
+
+ # Create the FAISS index used for queries
+ token_data = transformer.create_index(argument.text_list)
+ index = token_data.index
+
+ # To load saved data from the vector DB, use:
+ vector_client = VectorDBClient()
+ vector_client.load_data(index_db_path, np_db_path)
+ index = vector_client.faiss_index
+ argument = vector_client.file_argument
+
+ # To save data to the vector DB, use:
+ vector_client = VectorDBClient(index, argument)
+ vector_client.save_data(index_db_path, np_db_path)
+
+ # To search the index:
+ query = input(">>> ")
+ amount_content = 1
+ response = transformer.search(query, index, argument.text_list, amount_content)
+ print(response)
+ ```
+
+ ## Contributing
+
+ Contributions are welcome! Please feel free to submit a Pull Request.
+
+ ## License
+
+ This project is licensed under the MIT License - see the LICENSE file for details.
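The README usage example above is not self-contained: it never imports `TokenTransformer`, `FilesFeature`, or `VectorDBClient`. A minimal runnable sketch is shown below. The import paths are assumptions: `vectoriz.files` and `vectoriz.token_transformer` match the 0.0.3 imports visible further down in this diff, while the module holding `VectorDBClient` (here `vectoriz.vector_client`) is a guess. The `search` call uses the new 0.0.5 signature.

```python
# Hedged, self-contained version of the README usage example (0.0.5 API).
# Import paths are assumed; vectoriz.vector_client in particular is a guess.
from vectoriz.files import FilesFeature
from vectoriz.token_transformer import TokenTransformer
from vectoriz.vector_client import VectorDBClient

index_db_path = "./data/faiss_db.index"       # where the FAISS index is persisted
np_db_path = "./data/np_db.npz"               # where the NumPy arrays are persisted
directory_path = "/home/username/Documents/"  # folder with the .txt / .docx files

transformer = TokenTransformer()
files_features = FilesFeature()

# Read every supported file and pack texts, chunk names and embeddings.
argument = files_features.load_all_files_from_directory(directory_path)

# Build the FAISS index from the loaded texts.
token_data = transformer.create_index(argument.text_list)
index = token_data.index

# Persist the index and the NumPy data so they can be reloaded later.
vector_client = VectorDBClient(index, argument)
vector_client.save_data(index_db_path, np_db_path)

# Query the index; in 0.0.5 the index and the texts are passed separately.
response = transformer.search("example query", index, argument.text_list, context_amount=1)
print(response)
```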
@@ -2,7 +2,7 @@ from setuptools import setup, find_packages
 
  setup(
      name="vectoriz",
-     version="0.0.3",
+     version="0.0.5",
      author="PedroHenriqueDevBR",
      author_email="pedro.henrique.particular@gmail.com",
      description="Python library for creating vectorized data from text or files.",
@@ -2,7 +2,7 @@ import os
  import docx
  import numpy as np
  from typing import Optional
- from vectoriz.token_transformer import TokenTransformer
+ from token_transformer import TokenTransformer
 
  class FileArgument:
      def __init__(
@@ -127,7 +127,7 @@ class FilesFeature:
              full_text.append(paragraph.text)
          return "\n".join(full_text)
 
-     def load_txt_files_from_directory(self, directory: str) -> FileArgument:
+     def load_txt_files_from_directory(self, directory: str, verbose: bool = False) -> FileArgument:
          """
          Load all text files from the specified directory and extract their content.
          This method scans the specified directory for files with the '.txt' extension
@@ -145,16 +145,22 @@ class FilesFeature:
          argument: FileArgument = FileArgument([], [], [])
          for file in os.listdir(directory):
              if not file.endswith(".txt"):
+                 if verbose:
+                     print(f"Error file: {file}")
                  continue
 
              text = self._extract_txt_content(directory, file)
              if text is None:
+                 if verbose:
+                     print(f"Error file: {file}")
                  continue
 
              argument.add_data(file, text)
+             if verbose:
+                 print(f"Loaded txt file: {file}")
          return argument
 
-     def load_docx_files_from_directory(self, directory: str) -> FileArgument:
+     def load_docx_files_from_directory(self, directory: str, verbose: bool = False) -> FileArgument:
          """
          Load all Word (.docx) files from the specified directory and extract their content.
 
@@ -174,16 +180,22 @@ class FilesFeature:
          argument: FileArgument = FileArgument([], [], [])
          for file in os.listdir(directory):
              if not file.endswith(".docx"):
+                 if verbose:
+                     print(f"Error file: {file}")
                  continue
 
              text = self._extract_docx_content(directory, file)
              if text is None:
+                 if verbose:
+                     print(f"Error file: {file}")
                  continue
 
              argument.add_data(file, text)
+             if verbose:
+                 print(f"Loaded Word file: {file}")
          return argument
 
-     def load_all_files_from_directory(self, directory: str) -> FileArgument:
+     def load_all_files_from_directory(self, directory: str, verbose: bool = False) -> FileArgument:
          """
          Load all supported files (.txt and .docx) from the specified directory and its subdirectories.
 
@@ -199,15 +211,23 @@ class FilesFeature:
          argument: FileArgument = FileArgument([], [], [])
          for root, _, files in os.walk(directory):
              for file in files:
+                 readed = False
                  if file.endswith(".txt"):
                      text = self._extract_txt_content(root, file)
                      if text is not None:
                          argument.add_data(file, text)
+                         readed = True
                  elif file.endswith(".docx"):
                      try:
                          text = self._extract_docx_content(root, file)
                          if text is not None:
                              argument.add_data(file, text)
+                             readed = True
                      except Exception as e:
                          print(f"Error processing {file}: {str(e)}")
+
+                 if verbose and readed:
+                     print(f"Loaded file: {file}")
+                 elif verbose and not readed:
+                     print(f"Error file: {file}")
          return argument
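All three loader methods gain an optional `verbose` flag in 0.0.5 that prints one line per file: `Loaded ... file: <name>` for files that were read and `Error file: <name>` for files that were skipped. A short sketch of enabling it (import path assumed, as in the usage sketch above):

```python
from vectoriz.files import FilesFeature  # assumed import path

files_features = FilesFeature()
# verbose=True makes 0.0.5 report each loaded or skipped file on stdout.
argument = files_features.load_all_files_from_directory("/home/username/Documents/", verbose=True)
print(f"{len(argument.text_list)} documents loaded")
```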
@@ -73,15 +73,16 @@ class TokenTransformer:
      def search(
          self,
          query: str,
-         data: TokenData,
+         index: faiss.IndexFlatL2,
+         texts: list[str],
          context_amount: int = 1,
      ) -> str:
          query_embedding = self._query_to_embeddings(query)
-         _, I = data.index.search(query_embedding, k=context_amount)
+         _, I = index.search(query_embedding, k=context_amount)
          context = ""
 
          for i in I[0]:
-             context += data.texts[i] + "\n"
+             context += texts[i] + "\n"
 
          return context.strip()
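This is a breaking change to `TokenTransformer.search`: it no longer takes a `TokenData` object, but the FAISS index and the list of texts separately. A before/after sketch, assuming `transformer` is a `TokenTransformer` and `token_data` came from `create_index`:

```python
# vectoriz 0.0.3: the whole TokenData wrapper was passed in.
# response = transformer.search(query, token_data, context_amount=1)

# vectoriz 0.0.5: pass the index and the texts it was built from.
response = transformer.search(query, token_data.index, token_data.texts, context_amount=1)
```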
 
@@ -3,8 +3,8 @@ import faiss
  import numpy as np
  from typing import Optional
 
- from vectoriz.files import FileArgument
- from vectoriz.token_transformer import TokenTransformer
+ from files import FileArgument
+ from token_transformer import TokenTransformer
 
 
  class VectorDBClient:
@@ -54,15 +54,6 @@ class VectorDBClient:
 
  class VectorDB:
 
-     def __init__(self):
-         """
-         Constructor for the class.
-
-         Initializes the following attributes:
-         - transformer: A TokenTransformer instance for text transformation.
-         """
-         self.transformer = TokenTransformer()
-
      def load_saved_data(
          self, faiss_db_path: str, np_db_path: str
      ) -> Optional[VectorDBClient]:
@@ -158,13 +149,14 @@ class VectorDB:
          - 'chunk_names': The chunk names
          - 'texts': The text content
          """
+         transformer = TokenTransformer()
          np_db_path = np_db_path if np_db_path.endswith(".npz") else np_db_path + ".npz"
 
          embeddings_np: np.ndarray = None
          if argument.ndarray_data is not None:
              embeddings_np = argument.ndarray_data
          else:
-             embeddings_np = self.transformer.get_np_vectors(argument.embeddings)
+             embeddings_np = transformer.get_np_vectors(argument.embeddings)
 
          np.savez(
              np_db_path,
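Together with the removal of `VectorDB.__init__` above, this moves the `TokenTransformer` out of the constructor and into `save_data`, so a `VectorDB` can be created without building a transformer first. A small sketch of the resulting flow (module name assumed, as before):

```python
from vectoriz.vector_client import VectorDB  # assumed module name

db = VectorDB()  # 0.0.5: no TokenTransformer is instantiated here any more
client = db.load_saved_data("./data/faiss_db.index", "./data/np_db.npz")
# load_saved_data returns Optional[VectorDBClient], so check before using it.
if client is not None:
    index = client.faiss_index
    argument = client.file_argument
```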
vectoriz-0.0.3/PKG-INFO DELETED
@@ -1,60 +0,0 @@
- Metadata-Version: 2.4
- Name: vectoriz
- Version: 0.0.3
- Summary: Python library for creating vectorized data from text or files.
- Home-page: https://github.com/PedroHenriqueDevBR/vectoriz
- Author: PedroHenriqueDevBR
- Author-email: pedro.henrique.particular@gmail.com
- Classifier: Programming Language :: Python :: 3.12
- Classifier: Operating System :: OS Independent
- Requires-Python: >=3.12
- Description-Content-Type: text/markdown
- Requires-Dist: faiss-cpu==1.10.0
- Requires-Dist: numpy==2.2.4
- Requires-Dist: sentence-transformers==4.0.2
- Requires-Dist: python-docx==1.1.2
- Dynamic: author
- Dynamic: author-email
- Dynamic: classifier
- Dynamic: description
- Dynamic: description-content-type
- Dynamic: home-page
- Dynamic: requires-dist
- Dynamic: requires-python
- Dynamic: summary
-
- # RAG-vector-creator
-
- ## Overview
- This project implements a RAG (Retrieval-Augmented Generation) system for creating and managing vector embeddings from documents using FAISS and NumPy libraries. It efficiently transforms text data into high-dimensional vector representations that enable semantic search capabilities, similarity matching, and context-aware document retrieval for enhanced question answering applications.
-
- ## Features
-
- - Document ingestion and preprocessing
- - Vector embedding generation using state-of-the-art models
- - Efficient storage and retrieval of embeddings
- - Integration with LLM-based generation systems
-
- ## Installation
-
- ```bash
- pip install -r requirements.txt
- python app.py
- ```
-
- ## Build lib
-
- To build the lib run the commands:
-
- ```
- python setup.py sdist bdist_wheel
- ```
-
- To test the install run:
- ```
- pip install .
- ```
-
- ## License
-
- MIT
vectoriz-0.0.3/README.md DELETED
@@ -1,35 +0,0 @@
- # RAG-vector-creator
-
- ## Overview
- This project implements a RAG (Retrieval-Augmented Generation) system for creating and managing vector embeddings from documents using FAISS and NumPy libraries. It efficiently transforms text data into high-dimensional vector representations that enable semantic search capabilities, similarity matching, and context-aware document retrieval for enhanced question answering applications.
-
- ## Features
-
- - Document ingestion and preprocessing
- - Vector embedding generation using state-of-the-art models
- - Efficient storage and retrieval of embeddings
- - Integration with LLM-based generation systems
-
- ## Installation
-
- ```bash
- pip install -r requirements.txt
- python app.py
- ```
-
- ## Build lib
-
- To build the lib run the commands:
-
- ```
- python setup.py sdist bdist_wheel
- ```
-
- To test the install run:
- ```
- pip install .
- ```
-
- ## License
-
- MIT
File without changes
File without changes
File without changes
File without changes
File without changes