mct-nightly 2.2.0.20241126.528__py3-none-any.whl → 2.2.0.20241128.546__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: mct-nightly
- Version: 2.2.0.20241126.528
+ Version: 2.2.0.20241128.546
  Summary: A Model Compression Toolkit for neural networks
  Home-page: UNKNOWN
  License: UNKNOWN
@@ -77,9 +77,9 @@ MCT supports various quantization methods as appears below.
 
  Quantization Method | Complexity | Computational Cost | API | Tutorial
  -------------------- | -----------|--------------------|---------|--------
- PTQ (Post Training Quantization) | Low | Low (~1-10 CPU minutes) | [PyTorch API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/pytorch_post_training_quantization.html) / [Keras API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/keras_post_training_quantization.html) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_post_training_quantization.ipynb"><img src="https://img.shields.io/badge/Pytorch-green"/></a> <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_post-training_quantization.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
- GPTQ (parameters fine-tuning using gradients) | Moderate | Moderate (~1-3 GPU hours) | [PyTorch API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/pytorch_gradient_post_training_quantization.html) / [Keras API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/keras_gradient_post_training_quantization.html) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_mobilenet_gptq.ipynb"><img src="https://img.shields.io/badge/PyTorch-green"/></a> <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_mobilenet_gptq.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
- QAT (Quantization Aware Training) | High | High (~12-36 GPU hours) | [QAT API](https://sony.github.io/model_optimization/docs/api/api_docs/index.html#qat) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_qat.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
+ PTQ (Post Training Quantization) | Low | Low (~1-10 CPU minutes) | [PyTorch API](https://sony.github.io/model_optimization/api/api_docs/methods/pytorch_post_training_quantization.html) / [Keras API](https://sony.github.io/model_optimization/api/api_docs/methods/keras_post_training_quantization.html) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_post_training_quantization.ipynb"><img src="https://img.shields.io/badge/Pytorch-green"/></a> <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_post-training_quantization.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
+ GPTQ (parameters fine-tuning using gradients) | Moderate | Moderate (~1-3 GPU hours) | [PyTorch API](https://sony.github.io/model_optimization/api/api_docs/methods/pytorch_gradient_post_training_quantization.html) / [Keras API](https://sony.github.io/model_optimization/api/api_docs/methods/keras_gradient_post_training_quantization.html) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_mobilenet_gptq.ipynb"><img src="https://img.shields.io/badge/PyTorch-green"/></a> <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_mobilenet_gptq.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
+ QAT (Quantization Aware Training) | High | High (~12-36 GPU hours) | [QAT API](https://sony.github.io/model_optimization/api/api_docs/index.html#qat) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_qat.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
 
  </p>
  </div>
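
For orientation, here is a minimal PTQ sketch using the PyTorch entry point listed in the table above; torchvision's MobileNetV2 and random calibration batches are stand-ins for a real model and representative data:

```python
import torch
from torchvision.models import mobilenet_v2
import model_compression_toolkit as mct

float_model = mobilenet_v2(weights="DEFAULT")  # any 32-bit float model

def representative_data_gen():
    # Yields lists of input batches; real calibration data should use the
    # training preprocessing. Random tensors here are placeholders only.
    for _ in range(20):
        yield [torch.randn(8, 3, 224, 224)]

quantized_model, quantization_info = mct.ptq.pytorch_post_training_quantization(
    float_model, representative_data_gen)
```
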
@@ -87,9 +87,9 @@ QAT (Quantization Aware Training) | High | High (~12-36 GPU hours) | [QAT API](
  For each flow, **Quantization core** utilizes various algorithms and hyper-parameters for optimal [hardware-aware](https://github.com/sony/model_optimization/blob/main/model_compression_toolkit/target_platform_capabilities/README.md) quantization results.
  For further details, please see [Supported features and algorithms](#high-level-features-and-techniques).
 
- Required input:
- - Floating point model - 32bit model in either .pt or .keras format
- - Representative dataset - can be either provided by the user, or generated utilizing the [Data Generation](#data-generation-) capability
+ **Required input**: Floating point model - 32bit model in either .pt or .keras format
+
+ **Optional input**: Representative dataset - can be either provided by the user, or generated utilizing the [Data Generation](#data-generation-) capability
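
As a hedged illustration of these two inputs, a representative dataset is simply a zero-argument callable that yields lists of calibration batches; the folder path and preprocessing below are hypothetical placeholders:

```python
import torch
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
# Hypothetical folder of calibration images, arranged as ImageFolder expects.
calib_set = datasets.ImageFolder("/path/to/calibration_images", transform=preprocess)
calib_loader = torch.utils.data.DataLoader(calib_set, batch_size=16, shuffle=True)

def representative_data_gen():
    for images, _ in calib_loader:  # labels are not needed for calibration
        yield [images]
```
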
 
  <div align="center">
  <p align="center">
@@ -119,15 +119,16 @@ ________________________________________________________________________________
  __________________________________________________________________________________________________________
  ### Data-free quantization (Data Generation) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_data_generation.ipynb)
  Generates synthetic images based on the statistics stored in the model's batch normalization layers, according to your specific needs, for when image data isn’t available. See [Data Generation Library](https://github.com/sony/model_optimization/blob/main/model_compression_toolkit/data_generation/README.md) for more.
+ The specifications of the method are detailed in the paper: _"**Data Generation for Hardware-Friendly Post-Training Quantization**"_ [5].
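
A hedged sketch of the data-free flow, assuming the experimental entry points shown in the Data Generation tutorial (names and signatures may differ between releases):

```python
import torch
from torchvision.models import resnet18
import model_compression_toolkit as mct

float_model = resnet18(weights="DEFAULT")

# Settings for the synthetic-image optimization loop (names per the tutorial;
# verify against the installed release).
data_gen_config = mct.data_generation.get_pytorch_data_generation_config(
    n_iter=500, data_gen_batch_size=32)

# Synthesize calibration images from the BatchNorm statistics of `float_model`.
generated_images = mct.data_generation.pytorch_data_generation_experimental(
    model=float_model,
    n_images=256,
    output_image_size=224,
    data_generation_config=data_gen_config)
```
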
  __________________________________________________________________________________________________________
  ### Structured Pruning [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_pruning_mnist.ipynb)
- Reduces model size/complexity and ensures better channels utilization by removing redundant input channels from layers and reconstruction of layer weights. Read more ([Pytorch API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/pytorch_pruning_experimental.html) / [Keras API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/keras_pruning_experimental.html)).
+ Reduces model size/complexity and ensures better channels utilization by removing redundant input channels from layers and reconstruction of layer weights. Read more ([Pytorch API](https://sony.github.io/model_optimization/api/api_docs/methods/pytorch_pruning_experimental.html) / [Keras API](https://sony.github.io/model_optimization/api/api_docs/methods/keras_pruning_experimental.html)).
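
A hedged sketch of the experimental pruning entry point, assuming the names used in the pruning tutorial; the 50% target below is illustrative:

```python
import torch
from torchvision.models import resnet18
import model_compression_toolkit as mct

model = resnet18(weights="DEFAULT")
dense_nparams = sum(p.numel() for p in model.parameters())

def representative_data_gen():
    yield [torch.randn(1, 3, 224, 224)]

# Target ~50% of the dense weights memory (float32 weights -> 4 bytes each).
target_ru = mct.core.ResourceUtilization(weights_memory=dense_nparams * 4 * 0.5)

pruned_model, pruning_info = mct.pruning.pytorch_pruning_experimental(
    model=model,
    target_resource_utilization=target_ru,
    representative_data_gen=representative_data_gen)
```
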
  __________________________________________________________________________________________________________
  ### **Debugging and Visualization**
  **🎛️ Network Editor (Modify Quantization Configurations)** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_network_editor.ipynb).
- Modify your model's quantization configuration for specific layers or apply a custom edit rule (e.g adjust layer's bit-width) using MCT’s network editor
+ Modify your model's quantization configuration for specific layers or apply a custom edit rule (e.g adjust layer's bit-width) using MCT’s network editor.
 
- **🖥️ Visualization**. Observe useful information for troubleshooting the quantized model's performance using TensorBoard. [Read more](https://sony.github.io/model_optimization/docs/guidelines/visualization.html).
+ **🖥️ Visualization**. Observe useful information for troubleshooting the quantized model's performance using TensorBoard. [Read more](https://sony.github.io/model_optimization/guidelines/visualization.html).
 
  **🔑 XQuant (Explainable Quantization)** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_xquant.ipynb). Get valuable insights regarding the quality and success of the quantization process of your model. The report includes histograms and similarity metrics between the original float model and the quantized model in key points of the model. The report can be visualized using TensorBoard.
  __________________________________________________________________________________________________________
@@ -137,15 +138,15 @@ The specifications of the algorithm are detailed in the paper: _"**EPTQ: Enhance
  More details on how to use EPTQ via MCT can be found in the [GPTQ guidelines](https://github.com/sony/model_optimization/blob/main/model_compression_toolkit/gptq/README.md).
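
A hedged sketch of a GPTQ call with default settings, assuming the PyTorch entry points linked in the methods table above; see the GPTQ guidelines for the full configuration surface:

```python
import torch
from torchvision.models import mobilenet_v2
import model_compression_toolkit as mct

float_model = mobilenet_v2(weights="DEFAULT")

def representative_data_gen():
    for _ in range(20):
        yield [torch.randn(8, 3, 224, 224)]  # placeholder calibration batches

# GPTQ fine-tunes the quantized parameters with gradients for a few epochs.
gptq_config = mct.gptq.get_pytorch_gptq_config(n_epochs=5)

quantized_model, quantization_info = mct.gptq.pytorch_gradient_post_training_quantization(
    float_model,
    representative_data_gen,
    gptq_config=gptq_config)
```
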
 
  ## <div align="center">Resources</div>
- * [User Guide](https://sony.github.io/model_optimization/docs/index.html) contains detailed information about MCT and guides you from installation through optimizing models for your edge AI applications.
+ * [User Guide](https://sony.github.io/model_optimization/index.html) contains detailed information about MCT and guides you from installation through optimizing models for your edge AI applications.
 
- * MCT's [API Docs](https://sony.github.io/model_optimization/docs/api/api_docs/) is seperated per quantization methods:
+ * MCT's [API Docs](https://sony.github.io/model_optimization/api/api_docs/) is separated per quantization methods:
 
- * [Post-training quantization](https://sony.github.io/model_optimization/docs/api/api_docs/index.html#ptq) | PTQ API docs
- * [Gradient-based post-training quantization](https://sony.github.io/model_optimization/docs/api/api_docs/index.html#gptq) | GPTQ API docs
- * [Quantization-aware training](https://sony.github.io/model_optimization/docs/api/api_docs/index.html#qat) | QAT API docs
+ * [Post-training quantization](https://sony.github.io/model_optimization/api/api_docs/index.html#ptq) | PTQ API docs
+ * [Gradient-based post-training quantization](https://sony.github.io/model_optimization/api/api_docs/index.html#gptq) | GPTQ API docs
+ * [Quantization-aware training](https://sony.github.io/model_optimization/api/api_docs/index.html#qat) | QAT API docs
 
- * [Debug](https://sony.github.io/model_optimization/docs/guidelines/visualization.html) – modify optimization process or generate explainable report
+ * [Debug](https://sony.github.io/model_optimization/guidelines/visualization.html) – modify optimization process or generate an explainable report
 
  * [Release notes](https://github.com/sony/model_optimization/releases)
 
@@ -179,25 +180,15 @@ Currently, MCT is being tested on various Python, Pytorch and TensorFlow version
  <img src="/docsrc/images/PoseEst.png" width="200">
  <img src="/docsrc/images/ObjDet.png" width="200">
 
- ### Pytorch
- We quantized classification networks from the torchvision library.
- In the following table we present the ImageNet validation results for these models:
-
- | Network Name | Float Accuracy | 8Bit Accuracy | Data-Free 8Bit Accuracy |
- |---------------------------|-----------------|-----------------|-------------------------|
- | MobileNet V2 [3] | 71.886 | 71.444 |71.29|
- | ResNet-18 [3] | 69.86 | 69.63 |69.53|
- | SqueezeNet 1.1 [3] | 58.128 | 57.678 ||
-
- ### Keras
  MCT can quantize an existing 32-bit floating-point model to an 8-bit fixed-point (or less) model without compromising accuracy.
- Below is a graph of [MobileNetV2](https://keras.io/api/applications/mobilenet/) accuracy on ImageNet vs average bit-width of weights (X-axis), using
- single-precision quantization, mixed-precision quantization, and mixed-precision quantization with GPTQ.
+ Below is a graph of [MobileNetV2](https://pytorch.org/vision/main/models/generated/torchvision.models.mobilenet_v2.html) accuracy on ImageNet vs average bit-width of weights (X-axis), using **single-precision** quantization, **mixed-precision** quantization, and mixed-precision quantization with GPTQ.
 
- <img src="https://github.com/sony/model_optimization/raw/main/docsrc/images/mbv2_accuracy_graph.png">
+ <p align="center">
+ <img src="/docsrc/images/torch_mobilenetv2.png" width="800">
 
  For more results, please see [1]
 
+
  ### Pruning Results
 
  Results for applying pruning to reduce the parameters of the following models by 50%:
@@ -209,19 +200,20 @@ Results for applying pruning to reduce the parameters of the following models by
 
  ## <div align="center">Troubleshooting and Community</div>
 
- If you encountered large accuracy degradation with MCT, check out the [Quantization Troubleshooting](https://github.com/sony/model_optimization/tree/main/quantization_troubleshooting.md)
- for common pitfalls and some tools to improve quantized model's accuracy.
+ If you encountered a large accuracy degradation with MCT, check out the [Quantization Troubleshooting](https://github.com/sony/model_optimization/tree/main/quantization_troubleshooting.md)
+ for common pitfalls and some tools to improve the quantized model's accuracy.
 
 
  Check out the [FAQ](https://github.com/sony/model_optimization/tree/main/FAQ.md) for common issues.
 
- You are welcome to ask questions and get support on our [issues section](https://github.com/sony/model_optimization/issues) and manage community discussions under [discussions section](https://github.com/sony/model_optimization/discussions).
+ You are welcome to ask questions and get support on our [issues section](https://github.com/sony/model_optimization/issues) and manage community discussions under the [discussions section](https://github.com/sony/model_optimization/discussions).
 
 
  ## <div align="center">Contributions</div>
- MCT aims at keeping a more up-to-date fork and welcomes contributions from anyone.
+ We'd love your input! MCT would not be possible without help from our community, and welcomes contributions from anyone!
 
  *Checkout our [Contribution guide](https://github.com/sony/model_optimization/blob/main/CONTRIBUTING.md) for more details.
 
+ Thank you 🙏 to all our contributors!
 
  ## <div align="center">License</div>
  MCT is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
@@ -236,6 +228,8 @@ MCT is licensed under Apache License Version 2.0. By contributing to the project
 
  [3] [TORCHVISION.MODELS](https://pytorch.org/vision/stable/models.html)
 
- [4] Gordon, O., Cohen, E., Habi, H. V., & Netzer, A., 2024. [EPTQ: Enhanced Post-Training Quantization via Hessian-guided Network-wise Optimization. arXiv preprint](https://arxiv.org/abs/2309.11531)
+ [4] Gordon, O., Cohen, E., Habi, H. V., & Netzer, A., 2024. [EPTQ: Enhanced Post-Training Quantization via Hessian-guided Network-wise Optimization, European Conference on Computer Vision Workshop 2024, Computational Aspects of Deep Learning (CADL)](https://arxiv.org/abs/2309.11531)
+
+ [5] Dikstein, L., Lapid, A., Netzer, A., & Habi, H. V., 2024. [Data Generation for Hardware-Friendly Post-Training Quantization, Accepted to IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2025](https://arxiv.org/abs/2410.22110)
 
 
@@ -1,4 +1,4 @@
- model_compression_toolkit/__init__.py,sha256=7A1BkpyKbKHFbnY13gd9nhz5dN6RwtjcyNcmz2phauQ,1573
+ model_compression_toolkit/__init__.py,sha256=zVRBw5AaiemU9kcIUsX-NlE27jnB2iW9beBf0n-WvFA,1573
  model_compression_toolkit/constants.py,sha256=i4wYheBkIdQmsQA-axIpcT3YiSO1USNc-jaNiNE8w6E,3920
  model_compression_toolkit/defaultdict.py,sha256=LSc-sbZYXENMCw3U9F4GiXuv67IKpdn0Qm7Fr11jy-4,2277
  model_compression_toolkit/logger.py,sha256=3DByV41XHRR3kLTJNbpaMmikL8icd9e1N-nkQAY9oDk,4567
@@ -559,8 +559,8 @@ model_compression_toolkit/xquant/pytorch/model_analyzer.py,sha256=b93o800yVB3Z-i
  model_compression_toolkit/xquant/pytorch/pytorch_report_utils.py,sha256=bOc-hFL3gdoSM1Th_S2N_-9JJSlPGpZCTx_QLJHS6lg,3388
  model_compression_toolkit/xquant/pytorch/similarity_functions.py,sha256=CERxq5K8rqaiE-DlwhZBTUd9x69dtYJlkHOPLB54vm8,2354
  model_compression_toolkit/xquant/pytorch/tensorboard_utils.py,sha256=mkoEktLFFHtEKzzFRn_jCnxjhJolK12TZ5AQeDHzUO8,9767
- mct_nightly-2.2.0.20241126.528.dist-info/LICENSE.md,sha256=aYSSIb-5AFPeITTvXm1UAoe0uYBiMmSS8flvXaaFUks,10174
- mct_nightly-2.2.0.20241126.528.dist-info/METADATA,sha256=CDV5X_5C52edbaXPQVlQKTjBsNkB7cU0Rli1MaNxQnA,26473
- mct_nightly-2.2.0.20241126.528.dist-info/WHEEL,sha256=tZoeGjtWxWRfdplE7E3d45VPlLNQnvbKiYnx7gwAy8A,92
- mct_nightly-2.2.0.20241126.528.dist-info/top_level.txt,sha256=gsYA8juk0Z-ZmQRKULkb3JLGdOdz8jW_cMRjisn9ga4,26
- mct_nightly-2.2.0.20241126.528.dist-info/RECORD,,
+ mct_nightly-2.2.0.20241128.546.dist-info/LICENSE.md,sha256=aYSSIb-5AFPeITTvXm1UAoe0uYBiMmSS8flvXaaFUks,10174
+ mct_nightly-2.2.0.20241128.546.dist-info/METADATA,sha256=0CvdGOzW-TiaTXZdjW8IyWZflelwtWDxF7mH95b3H-0,26446
+ mct_nightly-2.2.0.20241128.546.dist-info/WHEEL,sha256=tZoeGjtWxWRfdplE7E3d45VPlLNQnvbKiYnx7gwAy8A,92
+ mct_nightly-2.2.0.20241128.546.dist-info/top_level.txt,sha256=gsYA8juk0Z-ZmQRKULkb3JLGdOdz8jW_cMRjisn9ga4,26
+ mct_nightly-2.2.0.20241128.546.dist-info/RECORD,,
@@ -27,4 +27,4 @@ from model_compression_toolkit import data_generation
  from model_compression_toolkit import pruning
  from model_compression_toolkit.trainable_infrastructure.keras.load_model import keras_load_quantized_model
 
- __version__ = "2.2.0.20241126.000528"
+ __version__ = "2.2.0.20241128.000546"