collections-cache 0.3.3.20250303__tar.gz → 0.3.4.20250419__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {collections_cache-0.3.3.20250303 → collections_cache-0.3.4.20250419}/PKG-INFO +28 -13
- {collections_cache-0.3.3.20250303 → collections_cache-0.3.4.20250419}/README.md +25 -11
- {collections_cache-0.3.3.20250303 → collections_cache-0.3.4.20250419}/collections_cache/collections_cache.py +1 -0
- {collections_cache-0.3.3.20250303 → collections_cache-0.3.4.20250419}/pyproject.toml +1 -1
- {collections_cache-0.3.3.20250303 → collections_cache-0.3.4.20250419}/LICENSE +0 -0
- {collections_cache-0.3.3.20250303 → collections_cache-0.3.4.20250419}/collections_cache/__init__.py +0 -0
@@ -1,6 +1,6 @@
-Metadata-Version: 2.1
+Metadata-Version: 2.3
 Name: collections-cache
-Version: 0.3.3.20250303
+Version: 0.3.4.20250419
 Summary: collections-cache is a Python package for managing data collections across multiple SQLite databases. It allows efficient storage, retrieval, and updating of key-value pairs, supporting various data types serialized with pickle. The package uses parallel processing for fast access and manipulation of large collections.
 License: MIT
 Author: Luiz-Trindade
@@ -9,6 +9,7 @@ Requires-Python: >=3.12,<4.0
 Classifier: License :: OSI Approved :: MIT License
 Classifier: Programming Language :: Python :: 3
 Classifier: Programming Language :: Python :: 3.12
+Classifier: Programming Language :: Python :: 3.13
 Description-Content-Type: text/markdown
 
 # collections-cache 🚀
@@ -95,6 +96,31 @@ print(f"Inserted {len(cache.keys())} keys successfully!")
 
 ---
 
+## Performance Benchmark 📊
+
+After optimizing SQLite settings (including setting `synchronous = OFF`), the library has shown a significant performance improvement. The insertion performance has been accelerated dramatically, allowing for much faster data insertions and better scalability.
+
+### Benchmark Results
+
+For **100,000 insertions**:
+
+- **Previous performance**: ~797 insertions per second.
+- **Optimized performance**: **~6,657 insertions per second** after disabling SQLite's synchronization (`synchronous = OFF`), reducing the total insertion time from 125 seconds to **15.02 seconds**.
+
+### Performance Scaling
+
+With the optimized configuration, the library scales nearly linearly with the number of CPU cores. For example:
+
+- **4 cores**: ~6,657 insertions per second.
+- **8 cores**: ~13,300 insertions per second.
+- **16 cores**: ~26,600 insertions per second.
+- **32 cores**: ~53,200 insertions per second.
+- **128 cores**: ~212,000 insertions per second (theoretically).
+
+*Note: Actual performance may vary depending on system architecture, disk I/O, and specific workload, but benchmarks indicate a substantial increase in insertion rate as the number of CPU cores increases.*
+
+---
+
 ## API Overview 📚
 
 - **`set_key(key, value)`**: Stores a key–value pair. Updates the value if the key already exists.
@@ -106,17 +132,6 @@ print(f"Inserted {len(cache.keys())} keys successfully!")
 
 ---
 
-## Performance Benchmark 📊
-
-On a machine with 4 real CPU cores, benchmarks indicate around **781 insertions per second**. The library is designed to scale nearly linearly with the number of real cores. For example:
-- **6 cores**: ~1,171 insertions per second.
-- **16 cores**: ~3,125 insertions per second.
-- **128 cores**: ~25,000 insertions per second (theoretically).
-
-*Note: Actual performance will depend on disk I/O, SQLite contention, and system architecture.*
-
----
-
 ## Development & Contributing 👩💻👨💻
 
 To contribute or run tests:
@@ -82,6 +82,31 @@ print(f"Inserted {len(cache.keys())} keys successfully!")
 
 ---
 
+## Performance Benchmark 📊
+
+After optimizing SQLite settings (including setting `synchronous = OFF`), the library has shown a significant performance improvement. The insertion performance has been accelerated dramatically, allowing for much faster data insertions and better scalability.
+
+### Benchmark Results
+
+For **100,000 insertions**:
+
+- **Previous performance**: ~797 insertions per second.
+- **Optimized performance**: **~6,657 insertions per second** after disabling SQLite's synchronization (`synchronous = OFF`), reducing the total insertion time from 125 seconds to **15.02 seconds**.
+
+### Performance Scaling
+
+With the optimized configuration, the library scales nearly linearly with the number of CPU cores. For example:
+
+- **4 cores**: ~6,657 insertions per second.
+- **8 cores**: ~13,300 insertions per second.
+- **16 cores**: ~26,600 insertions per second.
+- **32 cores**: ~53,200 insertions per second.
+- **128 cores**: ~212,000 insertions per second (theoretically).
+
+*Note: Actual performance may vary depending on system architecture, disk I/O, and specific workload, but benchmarks indicate a substantial increase in insertion rate as the number of CPU cores increases.*
+
+---
+
 ## API Overview 📚
 
 - **`set_key(key, value)`**: Stores a key–value pair. Updates the value if the key already exists.
@@ -93,17 +118,6 @@ print(f"Inserted {len(cache.keys())} keys successfully!")
 
 ---
 
-## Performance Benchmark 📊
-
-On a machine with 4 real CPU cores, benchmarks indicate around **781 insertions per second**. The library is designed to scale nearly linearly with the number of real cores. For example:
-- **6 cores**: ~1,171 insertions per second.
-- **16 cores**: ~3,125 insertions per second.
-- **128 cores**: ~25,000 insertions per second (theoretically).
-
-*Note: Actual performance will depend on disk I/O, SQLite contention, and system architecture.*
-
----
-
 ## Development & Contributing 👩💻👨💻
 
 To contribute or run tests:
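The benchmark section added in this release credits the ~8x insert speedup to disabling SQLite's synchronization. A minimal, self-contained sketch of that setting using Python's built-in `sqlite3` module (illustrative only: the table name and schema are assumptions, not collections-cache's actual internals):

```python
import os
import sqlite3
import tempfile

# Hypothetical shard database, not collections-cache's real layout.
db_path = os.path.join(tempfile.mkdtemp(), "shard.db")
conn = sqlite3.connect(db_path)

# Skip fsync on every commit: dramatically faster inserts, at the cost of
# durability if the OS or machine crashes mid-write. This is the setting
# the benchmark section attributes the speedup to.
conn.execute("PRAGMA synchronous = OFF")

conn.execute("CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, value BLOB)")
conn.execute("INSERT OR REPLACE INTO cache (key, value) VALUES (?, ?)", ("k", b"v"))
conn.commit()

print(conn.execute("SELECT value FROM cache WHERE key = ?", ("k",)).fetchone()[0])
conn.close()
```

Note that `synchronous` is a per-connection pragma, so it must be re-applied on every new connection; the trade-off is acceptable for cache data that can be rebuilt after a crash.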
|
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "collections-cache"
-version = "0.3.3.20250303"
+version = "0.3.4.20250419"
 description = "collections-cache is a Python package for managing data collections across multiple SQLite databases. It allows efficient storage, retrieval, and updating of key-value pairs, supporting various data types serialized with pickle. The package uses parallel processing for fast access and manipulation of large collections."
 authors = ["Luiz-Trindade <luiz.gabriel.m.trindade@gmail.com>"]
 license = "MIT"
{collections_cache-0.3.3.20250303 → collections_cache-0.3.4.20250419}/LICENSE — file without changes
{collections_cache-0.3.3.20250303 → collections_cache-0.3.4.20250419}/collections_cache/__init__.py — renamed, file without changes