dynamicml 1.0.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 Sunkara Sai Ganesh
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
@@ -0,0 +1,165 @@
+ Metadata-Version: 2.4
+ Name: dynamicml
+ Version: 1.0.0
+ Summary: A lightweight binary image classification system built with scikit‑learn, focusing on k‑Nearest Neighbors (kNN) and classical ML models. It features dynamic, model‑aware image preprocessing (HOG, scaling, PCA) that adapts automatically to image characteristics, enabling efficient training and reliable inference for custom datasets.
+ Author-email: SUNKARA SAI GANESH <ganiisunkara@gmail.com>, KUDIRELLA SANMUKA SAI <sanmukasaikudirella@gmail.com>, KARROTHU MOURYA <mouryakarrothu@gmail.com>
+ Maintainer-email: SUNKARA SAI GANESH <ganiisunkara@gmail.com>, KUDIRELLA SANMUKA SAI <sanmukasaikudirella@gmail.com>, KARROTHU MOURYA <mouryakarrothu@gmail.com>, NETI N V LAKSHMI SUNAYANA <sunayananeti12@gmail.com>
+ License: MIT
+ Project-URL: Homepage, https://github.com/GaniiKing/dynamicml
+ Keywords: machine learning,image classification,knn,computer vision,scikit-learn
+ Classifier: Development Status :: 5 - Production/Stable
+ Classifier: Intended Audience :: Developers
+ Classifier: Intended Audience :: Science/Research
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.8
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Topic :: Scientific/Engineering
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+ Classifier: Topic :: Scientific/Engineering :: Image Recognition
+ Classifier: Topic :: Software Development :: Libraries
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Requires-Python: >=3.8
+ Description-Content-Type: text/markdown
+ License-File: LICENCE.txt
+ Requires-Dist: numpy>=1.21
+ Requires-Dist: scipy>=1.7
+ Requires-Dist: scikit-learn>=1.2
+ Requires-Dist: scikit-image>=0.19
+ Requires-Dist: opencv-python>=4.6
+ Requires-Dist: joblib>=1.2
+ Dynamic: license-file
+
+ # DynamicML 🚀
+ **Dynamic, model‑aware classical machine learning for image classification**
+
+ **CREATED BY SUNKARA SAI GANESH, KARROTHU MOURYA, KUDIRELLA SANMUKA SAI**
+
+ DynamicML is a lightweight Python library built on top of **scikit‑learn** that enables **binary image classification using classical machine learning models**, with a strong focus on **dynamic, model‑aware preprocessing**. The library automatically adapts preprocessing pipelines to image characteristics and model requirements, making it easy to experiment, benchmark, and deploy classical ML solutions for image data, typically in just 2–4 lines of code.
+
+ ---
+
+ ## ✨ Key Features
+
+ - ✅ Binary image classification using classical ML models
+ - ✅ Dynamic preprocessing tailored to each model
+ - ✅ Strong support for k‑Nearest Neighbors (kNN)
+ - ✅ Multiple feature extraction strategies (pixels, HOG, PCA)
+ - ✅ Modular, extensible, and deployment‑ready
+ - ✅ No deep‑learning dependencies (lightweight & fast)
+
+ ---
+
+ ## 🎯 Project Goals
+
+ DynamicML is designed to:
+
+ - Provide **clear, principled preprocessing pipelines** for image‑based ML
+ - Avoid one‑size‑fits‑all preprocessing
+ - Make classical ML viable for image classification tasks
+ - Enable rapid experimentation and reproducibility
+ - Serve as a learning and benchmarking framework for image ML
+
+ ---
+
+ ## 📦 Supported Models
+
+ DynamicML supports most classical classifiers available in **scikit‑learn**, including:
+
+ ### Linear Models
+ - Logistic Regression
+ - SGDClassifier
+ - Perceptron
+ - RidgeClassifier
+
+ ### Distance‑Based Models
+ - k‑Nearest Neighbors (kNN)
+ - Radius Neighbors
+ - Nearest Centroid
+
+ ### Support Vector Machines
+ - Linear SVM
+ - Kernel SVM (RBF, Poly, Sigmoid)
+
+ ### Probabilistic Models
+ - Gaussian Naive Bayes
+ - Multinomial Naive Bayes
+ - Complement Naive Bayes
+ - Bernoulli Naive Bayes
+
+ ### Tree‑Based Models
+ - Decision Tree
+ - Random Forest
+ - Extra Trees
+
+ ### Boosting & Ensembles
+ - AdaBoost
+ - Gradient Boosting
+ - HistGradientBoosting
+ - Bagging
+ - Voting
+ - Stacking
+
+ ### Neural & Bayesian
+ - MLPClassifier
+ - GaussianProcessClassifier
+
+ ---
111
+
112
+ ## 🖼️ Image Preprocessing Philosophy
113
+
114
+ DynamicML follows one core principle:
115
+
116
+ > **Preprocessing depends on both the image type and the model type.**
117
+
118
+ Instead of applying the same pipeline to every model, DynamicML dynamically selects preprocessing steps such as:
119
+
120
+ - Image resizing
121
+ - Color conversion (RGB → Grayscale)
122
+ - Feature extraction
123
+ - Feature scaling
124
+ - Dimensionality reduction
125
+
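The first two steps above (resizing and RGB → grayscale conversion) can be sketched with scikit-image, one of the package's declared dependencies. This is a minimal illustration, not DynamicML's actual API; the function name and target size are assumptions:

```python
# Minimal sketch of resize + grayscale + flatten using scikit-image.
# `prepare_image` and the 64x64 target are illustrative assumptions.
import numpy as np
from skimage.color import rgb2gray
from skimage.transform import resize

def prepare_image(img: np.ndarray, size=(64, 64)) -> np.ndarray:
    """Convert a color image to grayscale, resize it, and flatten it."""
    if img.ndim == 3:                      # color image → grayscale
        img = rgb2gray(img)
    img = resize(img, size, anti_aliasing=True)
    return img.ravel()                     # flattened pixels for scaling/PCA

features = prepare_image(np.random.rand(120, 160, 3))
```

The flattened vector is then what the later steps (scaling, feature extraction, dimensionality reduction) operate on.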
+ ---
+
+ ## 🧠 Feature Extraction Strategies
+
+ DynamicML supports multiple feature representations:
+
+ ### 1. Flattened Pixels
+ - Best for very small images
+ - Used mainly for baselines
+
+ ### 2. Histogram of Oriented Gradients (HOG)
+ - Captures shape and edge information
+ - Illumination‑invariant
+ - Highly effective for kNN and SVM
+
+ ### 3. PCA‑Compressed Features
+ - Reduces dimensionality
+ - Improves speed and generalization
+ - Strongly recommended for distance‑based models
+
+ ### 4. Binary / Thresholded Features
+ - Useful for masks, silhouettes, and edge maps
+
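Strategies 2 and 3 compose naturally: HOG turns each image into an edge-oriented descriptor, and PCA compresses those descriptors for distance-based models. A hedged sketch using scikit-image and scikit-learn directly (the HOG parameters and component count are illustrative, not DynamicML defaults):

```python
# HOG features (strategy 2) compressed with PCA (strategy 3).
# All parameter choices below are assumptions for illustration.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
images = rng.random((20, 64, 64))            # toy batch of grayscale images

# Strategy 2: HOG captures shape/edge structure per image.
hog_feats = np.array([
    hog(im, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for im in images
])

# Strategy 3: PCA compresses the HOG vectors for distance-based models.
pca = PCA(n_components=10)
compressed = pca.fit_transform(hog_feats)
```

For 64×64 images with 8×8 cells and 2×2 blocks, each HOG descriptor has 7 × 7 × 2 × 2 × 9 = 1764 dimensions, which PCA here reduces to 10.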
+ ---
+
+ ## 🔄 Dynamic Preprocessing Logic
+
+ DynamicML automatically adapts preprocessing based on:
+
+ - Image size
+ - Image type (grayscale, binary, color)
+ - Feature dimensionality
+ - Model requirements (scaling, PCA, distance metrics)
+
+ Example logic:
+ ```text
+ If model is distance‑based → scaling required
+ If feature dimension is high → apply PCA
+ If image is large → use HOG instead of raw pixels
+ If model is tree‑based → skip scaling
+ ```
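The rule table above can be written out as plain Python. The thresholds and the function name below are assumptions for illustration, not DynamicML internals:

```python
# Illustrative translation of the README's rule table; the numeric
# thresholds and model-family labels are assumptions, not library code.
def plan_preprocessing(model_family: str, n_features: int, image_size: int) -> list:
    steps = []
    if image_size > 64 * 64:
        steps.append("hog")      # image is large → HOG instead of raw pixels
    if n_features > 1000:
        steps.append("pca")      # feature dimension is high → apply PCA
    if model_family != "tree":
        steps.append("scale")    # distance-based models need scaling;
                                 # tree-based models skip it
    return steps

plan = plan_preprocessing("distance", n_features=12288, image_size=128 * 128)
```

A distance-based model on a large, high-dimensional image thus gets the full HOG → PCA → scale plan, while a tree-based model on a small image gets no preprocessing at all.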
@@ -0,0 +1,12 @@
+ LICENCE.txt
+ README.md
+ pyproject.toml
+ dynamicml.egg-info/PKG-INFO
+ dynamicml.egg-info/SOURCES.txt
+ dynamicml.egg-info/dependency_links.txt
+ dynamicml.egg-info/requires.txt
+ dynamicml.egg-info/top_level.txt
+ dynamictml/__init__.py
+ dynamictml/dataset_loader_g.py
+ dynamictml/main_g.py
+ dynamictml/testmodels.py
@@ -0,0 +1,6 @@
+ numpy>=1.21
+ scipy>=1.7
+ scikit-learn>=1.2
+ scikit-image>=0.19
+ opencv-python>=4.6
+ joblib>=1.2
@@ -0,0 +1 @@
+ dynamictml
File without changes