collections-cache 0.2.7.20250303__py3-none-any.whl → 0.2.8.20250303__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,155 @@
+ Metadata-Version: 2.1
+ Name: collections-cache
+ Version: 0.2.8.20250303
+ Summary: collections-cache is a Python package for managing data collections across multiple SQLite databases. It allows efficient storage, retrieval, and updating of key-value pairs, supporting various data types serialized with pickle. The package uses parallel processing for fast access and manipulation of large collections.
+ License: MIT
+ Author: Luiz-Trindade
+ Author-email: luiz.gabriel.m.trindade@gmail.com
+ Requires-Python: >=3.12,<4.0
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.12
+ Description-Content-Type: text/markdown
+
+ # collections-cache 🚀
+
+ `collections-cache` is a fast and scalable key–value caching solution built on top of SQLite. It allows you to store, update, and retrieve data using unique keys, and it supports complex Python data types (thanks to `pickle`). Designed to harness the power of multiple CPU cores, the library shards data across multiple SQLite databases, enabling impressive performance scaling.
+
+ ---
+
+ ## Features ✨
+
+ - **Multiple SQLite Databases**: Distributes your data across several databases to optimize I/O and take advantage of multi-core systems.
+ - **Key–Value Store**: Simple and intuitive interface for storing and retrieving data.
+ - **Supports Complex Data Types**: Serialize and store lists, dictionaries, objects, and more using `pickle`.
+ - **Parallel Processing**: Uses Python’s `multiprocessing` and `concurrent.futures` modules to perform operations in parallel.
+ - **Efficient Data Retrieval**: Caches all keys in memory for super-fast lookups.
+ - **Cross-Platform**: Runs on Linux, macOS, and Windows.
+ - **Performance Scaling**: Benchmarks show near-linear scaling with the number of real CPU cores.
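The sharding idea behind the first feature can be sketched in a few lines. This is an illustrative toy, not the package's actual implementation: the `ShardedStore` name and the hash-based key→database mapping are assumptions for demonstration, though a stable hash is a common way to pin each key to one shard.

```python
import hashlib
import sqlite3
from pathlib import Path

class ShardedStore:
    """Toy key-value store that spreads keys across several SQLite files."""

    def __init__(self, directory, shards=4):
        Path(directory).mkdir(parents=True, exist_ok=True)
        self.paths = []
        for i in range(shards):
            path = Path(directory) / f"shard_{i}.db"
            con = sqlite3.connect(str(path))
            con.execute("CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value BLOB)")
            con.commit()
            con.close()
            self.paths.append(path)

    def _shard_for(self, key):
        # Stable hash: the same key always maps to the same database file,
        # so reads and writes for one key never touch the other shards.
        digest = hashlib.sha256(key.encode()).digest()
        return self.paths[digest[0] % len(self.paths)]

    def set(self, key, value):
        con = sqlite3.connect(str(self._shard_for(key)))
        con.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))
        con.commit()
        con.close()

    def get(self, key):
        con = sqlite3.connect(str(self._shard_for(key)))
        row = con.execute("SELECT value FROM kv WHERE key = ?", (key,)).fetchone()
        con.close()
        return row[0] if row else None
```

Because each shard is an independent SQLite file, separate processes can write to different shards without contending for the same database lock.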
+
+ ---
+
+ ## Installation 📦
+
+ Use [Poetry](https://python-poetry.org/) to install and manage dependencies:
+
+ 1. Clone the repository:
+
+ ```bash
+ git clone https://github.com/Luiz-Trindade/collections_cache.git
+ cd collections-cache
+ ```
+
+ 2. Install the package with Poetry:
+
+ ```bash
+ poetry install
+ ```
+
+ ---
+
+ ## Usage ⚙️
+
+ Simply import and start using the main class, `Collection_Cache`, to interact with your collection:
+
+ ### Basic Example
+
+ ```python
+ from collections_cache import Collection_Cache
+
+ # Create a new collection named "STORE"
+ cache = Collection_Cache("STORE")
+
+ # Set a key-value pair
+ cache.set_key("products", ["apple", "orange", "onion"])
+
+ # Retrieve the value by key
+ products = cache.get_key("products")
+ print(products)  # Output: ['apple', 'orange', 'onion']
+ ```
+
+ ### Bulk Insertion Example
+
+ For faster insertions, accumulate your data and use `set_multi_keys`:
+
+ ```python
+ from collections_cache import Collection_Cache
+ from random import uniform, randint
+
+ cache = Collection_Cache("web_cache")
+ insertions = 100_000
+ data = {}
+
+ # Generate data
+ for i in range(insertions):
+     key = str(uniform(0.0, 100.0))
+     value = "some text :)" * randint(1, 100)
+     data[key] = value
+
+ # Bulk insert keys using multi-threaded execution
+ cache.set_multi_keys(data)
+
+ print(f"Inserted {len(cache.keys())} keys successfully!")
+ ```
+
+ ---
+
+ ## API Overview 📚
+
+ - **`set_key(key, value)`**: Stores a key–value pair. Updates the value if the key already exists.
+ - **`set_multi_keys(key_and_value)`**: (Experimental) Inserts multiple key–value pairs in parallel.
+ - **`get_key(key)`**: Retrieves the value associated with a given key.
+ - **`delete_key(key)`**: Removes a key and its corresponding value.
+ - **`keys()`**: Returns a list of all stored keys.
+ - **`export_to_json()`**: (Future feature) Exports your collection to a JSON file.
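As a reference for the semantics listed above, here is a minimal in-memory stand-in with the same method names. The `MiniCache` class is purely illustrative (no SQLite, no sharding, no parallelism); it only mirrors the documented contract, including `pickle` serialization of values. `export_to_json` is omitted since it is not yet available.

```python
import pickle

class MiniCache:
    """In-memory stand-in mirroring the Collection_Cache API listed above."""

    def __init__(self):
        self._store = {}

    def set_key(self, key, value):
        # Pickle the value so arbitrary Python objects can be stored;
        # an existing key is silently overwritten, as documented.
        self._store[key] = pickle.dumps(value)

    def set_multi_keys(self, key_and_value):
        # The real library parallelizes this loop; here it is sequential.
        for key, value in key_and_value.items():
            self.set_key(key, value)

    def get_key(self, key):
        blob = self._store.get(key)
        return pickle.loads(blob) if blob is not None else None

    def delete_key(self, key):
        self._store.pop(key, None)

    def keys(self):
        return list(self._store)
```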
+
+ ---
+
+ ## Performance Benchmark 📊
+
+ On a machine with 4 real CPU cores, benchmarks indicate around **781 insertions per second**. The library is designed to scale nearly linearly with the number of real cores. For example:
+
+ - **6 cores**: ~1,171 insertions per second.
+ - **16 cores**: ~3,125 insertions per second.
+ - **128 cores**: ~25,000 insertions per second (extrapolated).
+
+ *Note: Actual performance will depend on disk I/O, SQLite contention, and system architecture.*
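Numbers like these are easy to reproduce on your own hardware. The timing harness below is a sketch (the `measure_insertions` helper is not part of the library); it times a plain dict so the snippet stays self-contained, but you can pass `cache.set_key` to benchmark `collections-cache` itself.

```python
from time import perf_counter

def measure_insertions(store_set, n=10_000):
    """Return insertions per second for a given set(key, value) callable."""
    start = perf_counter()
    for i in range(n):
        store_set(f"key_{i}", "some text :)" * 10)
    elapsed = perf_counter() - start
    return n / elapsed

# Baseline: a plain dict. Swap in cache.set_key to measure the real library.
data = {}
rate = measure_insertions(lambda k, v: data.__setitem__(k, v))
print(f"{rate:,.0f} insertions per second")
```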
+
+ ---
+
+ ## Development & Contributing 👩‍💻👨‍💻
+
+ To contribute or run tests:
+
+ 1. Install development dependencies:
+
+ ```bash
+ poetry install --with dev
+ ```
+
+ 2. Run tests using:
+
+ ```bash
+ poetry run pytest
+ ```
+
+ Feel free to submit issues, pull requests, or feature suggestions. Your contributions help make `collections-cache` even better!
+
+ ---
+
+ ## License 📄
+
+ This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
+
+ ---
+
+ ## Acknowledgements 🙌
+
+ - Inspired by the need for efficient, multi-core caching with SQLite.
+ - Created by Luiz Trindade.
+ - Thanks to all contributors and users who provide feedback to keep improving the library!
+
+ ---
+
+ Give `collections-cache` a try and let it power your high-performance caching needs! 🚀
+
@@ -0,0 +1,6 @@
+ collections_cache/__init__.py,sha256=uUp8lhp-HnZRumnU_8MT6qVq95t0pOzn7oLW7ARbnvc,48
+ collections_cache/collections_cache.py,sha256=Yosw2599y3i1c9jDWYQuo25e9fTjU3rKuBTeE6DuV4E,5378
+ collections_cache-0.2.8.20250303.dist-info/LICENSE,sha256=RAIL-FmXSiNRgyiVlfhm2SvVI4XDVsN0jDt9207SJ8o,1168
+ collections_cache-0.2.8.20250303.dist-info/METADATA,sha256=zxCq5QF59ISG91zQZLs9tczwzs1HcCi8lW3L1EG6D_g,4993
+ collections_cache-0.2.8.20250303.dist-info/WHEEL,sha256=sP946D7jFCHeNz5Iq4fL4Lu-PrWrFsgfLXbbkciIZwg,88
+ collections_cache-0.2.8.20250303.dist-info/RECORD,,
@@ -1,96 +0,0 @@
- Metadata-Version: 2.1
- Name: collections-cache
- Version: 0.2.7.20250303
- Summary: collections-cache is a Python package for managing data collections across multiple SQLite databases. It allows efficient storage, retrieval, and updating of key-value pairs, supporting various data types serialized with pickle. The package uses parallel processing for fast access and manipulation of large collections.
- License: MIT
- Author: Luiz-Trindade
- Author-email: luiz.gabriel.m.trindade@gmail.com
- Requires-Python: >=3.12,<4.0
- Classifier: License :: OSI Approved :: MIT License
- Classifier: Programming Language :: Python :: 3
- Classifier: Programming Language :: Python :: 3.12
- Description-Content-Type: text/markdown
-
- # collections-cache
-
- `collections-cache` is a simple and efficient caching solution built with SQLite databases. It allows storing, updating, and retrieving data using unique keys while supporting complex data types through `pickle`. Designed to scale across multiple CPU cores, it distributes data across multiple SQLite databases for improved performance.
-
- ## Features
-
- - **Multiple SQLite databases**: Distributes data across multiple databases for better scalability.
- - **Key-value store**: Stores data as key-value pairs.
- - **Supports complex data types**: Data is serialized using `pickle`, allowing you to store lists, dictionaries, and other Python objects.
- - **Parallel processing**: Utilizes Python’s `multiprocessing` module to handle large collections in parallel across multiple CPU cores.
- - **Efficient data retrieval**: Retrieves stored data efficiently based on the key.
- - **Cross-platform**: Works on Linux, macOS, and Windows.
-
- ## Installation
-
- To install the `collections-cache` package, use [Poetry](https://python-poetry.org/) to manage dependencies.
-
- 1. Clone the repository:
-
- ```bash
- git clone https://github.com/Luiz-Trindade/collections_cache.git
- cd collections-cache
- ```
-
- 2. Install the package with Poetry:
-
- ```bash
- poetry install
- ```
-
- ## Usage
-
- To use the `collections-cache` package, import the main class `Collection_Cache` and interact with your collection.
-
- ### Example
-
- ```python
- from collections_cache import Collection_Cache
-
- # Create a new collection
- cache = Collection_Cache("STORE")
-
- # Set a key-value pair
- cache.set_key("products", ["apple", "orange", "onion"])
-
- # Get the value by key
- products = cache.get_key("products")
- print(products)  # Output: ['apple', 'orange', 'onion']
- ```
-
- ### Methods
-
- - **`set_key(key, value)`**: Stores a key-value pair in the cache. If the key already exists, its value is updated.
- - **`set_multi_keys(key_and_value)`**: Stores multiple key-value pairs in the cache. If a key already exists, its value is updated.
- - **`get_key(key)`**: Retrieves the value associated with a key.
- - **`delete_key(key)`**: Removes an existing key from the cache.
- - **`keys()`**: Returns all stored keys.
-
- ## Development
-
- To contribute or run tests:
-
- 1. Install development dependencies:
-
- ```bash
- poetry install --with dev
- ```
-
- 2. Run tests:
-
- ```bash
- poetry run pytest
- ```
-
- ## License
-
- This project is licensed under the MIT License – see the [LICENSE](LICENSE) file for details.
-
- ## Acknowledgements
-
- - This package was created to demonstrate how to work with SQLite, `pickle`, and Python's `multiprocessing` module.
- - Created by: Luiz Trindade.
-
@@ -1,6 +0,0 @@
- collections_cache/__init__.py,sha256=uUp8lhp-HnZRumnU_8MT6qVq95t0pOzn7oLW7ARbnvc,48
- collections_cache/collections_cache.py,sha256=Yosw2599y3i1c9jDWYQuo25e9fTjU3rKuBTeE6DuV4E,5378
- collections_cache-0.2.7.20250303.dist-info/LICENSE,sha256=RAIL-FmXSiNRgyiVlfhm2SvVI4XDVsN0jDt9207SJ8o,1168
- collections_cache-0.2.7.20250303.dist-info/METADATA,sha256=tR7_JtHQSBsV8LWagsi8L2ZlMEx6xDzjULPazZPnw0Q,3417
- collections_cache-0.2.7.20250303.dist-info/WHEEL,sha256=sP946D7jFCHeNz5Iq4fL4Lu-PrWrFsgfLXbbkciIZwg,88
- collections_cache-0.2.7.20250303.dist-info/RECORD,,