mct-nightly 2.3.0.20250212.515__py3-none-any.whl → 2.3.0.20250214.519__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
--- mct_nightly-2.3.0.20250212.515.dist-info/METADATA
+++ mct_nightly-2.3.0.20250214.519.dist-info/METADATA
@@ -1,6 +1,6 @@
 Metadata-Version: 2.2
 Name: mct-nightly
-Version: 2.3.0.20250212.515
+Version: 2.3.0.20250214.519
 Summary: A Model Compression Toolkit for neural networks
 Classifier: Programming Language :: Python :: 3
 Classifier: License :: OSI Approved :: Apache Software License
@@ -32,7 +32,7 @@ Dynamic: summary
 <div align="center" markdown="1">
 <p>
 <a href="https://sony.github.io/model_optimization/" target="_blank">
-<img src="/docsrc/images/mctHeader1-cropped.svg" width="1000"></a>
+<img src="https://github.com/sony/model_optimization/blob/main/docsrc/images/mctHeader1-cropped.svg" width="1000"></a>
 </p>
 
 ______________________________________________________________________
@@ -98,7 +98,7 @@ For further details, please see [Supported features and algorithms](#high-level-
 <div align="center">
 <p align="center">
 
-<img src="/docsrc/images/mctDiagram_clean.svg" width="800">
+<img src="https://github.com/sony/model_optimization/blob/main/docsrc/images/mctDiagram_clean.svg" width="800">
 </p>
 </div>
 
@@ -179,16 +179,16 @@ Currently, MCT is being tested on various Python, Pytorch and TensorFlow version
 ## <div align="center">Results</div>
 
 <p align="center">
-<img src="/docsrc/images/Classification.png" width="200">
-<img src="/docsrc/images/SemSeg.png" width="200">
-<img src="/docsrc/images/PoseEst.png" width="200">
-<img src="/docsrc/images/ObjDet.png" width="200">
+<img src="https://github.com/sony/model_optimization/blob/main/docsrc/images/Classification.png" width="200">
+<img src="https://github.com/sony/model_optimization/blob/main/docsrc/images/SemSeg.png" width="200">
+<img src="https://github.com/sony/model_optimization/blob/main/docsrc/images/PoseEst.png" width="200">
+<img src="https://github.com/sony/model_optimization/blob/main/docsrc/images/ObjDet.png" width="200">
 
 MCT can quantize an existing 32-bit floating-point model to an 8-bit fixed-point (or less) model without compromising accuracy.
 Below is a graph of [MobileNetV2](https://pytorch.org/vision/main/models/generated/torchvision.models.mobilenet_v2.html) accuracy on ImageNet vs average bit-width of weights (X-axis), using **single-precision** quantization, **mixed-precision** quantization, and mixed-precision quantization with GPTQ.
 
 <p align="center">
-<img src="/docsrc/images/torch_mobilenetv2.png" width="800">
+<img src="https://github.com/sony/model_optimization/blob/main/docsrc/images/torch_mobilenetv2.png" width="800">
 
 For more results, please see [1]
 
--- mct_nightly-2.3.0.20250212.515.dist-info/RECORD
+++ mct_nightly-2.3.0.20250214.519.dist-info/RECORD
@@ -1,4 +1,4 @@
-model_compression_toolkit/__init__.py,sha256=fdQULWA3Jfq4DXBTLjZeTDAj4emfHV9r-YOcqsP6s4s,1557
+model_compression_toolkit/__init__.py,sha256=G7Jw23vCRFmgc4p4ecBtStYMdDk79Lf_gpYPEL7AxZE,1557
 model_compression_toolkit/constants.py,sha256=i_R6uXBfO1ph_X6DNJych2x59SUojfJbn7dNjs_mZnc,3846
 model_compression_toolkit/defaultdict.py,sha256=LSc-sbZYXENMCw3U9F4GiXuv67IKpdn0Qm7Fr11jy-4,2277
 model_compression_toolkit/logger.py,sha256=L3q7tn3Uht0i_7phnlOWMR2Te2zvzrt2HOz9vYEInts,4529
@@ -523,8 +523,8 @@ model_compression_toolkit/xquant/pytorch/model_analyzer.py,sha256=b93o800yVB3Z-i
 model_compression_toolkit/xquant/pytorch/pytorch_report_utils.py,sha256=UVN_S9ULHBEldBpShCOt8-soT8YTQ5oE362y96qF_FA,3950
 model_compression_toolkit/xquant/pytorch/similarity_functions.py,sha256=CERxq5K8rqaiE-DlwhZBTUd9x69dtYJlkHOPLB54vm8,2354
 model_compression_toolkit/xquant/pytorch/tensorboard_utils.py,sha256=mkoEktLFFHtEKzzFRn_jCnxjhJolK12TZ5AQeDHzUO8,9767
-mct_nightly-2.3.0.20250212.515.dist-info/LICENSE.md,sha256=aYSSIb-5AFPeITTvXm1UAoe0uYBiMmSS8flvXaaFUks,10174
-mct_nightly-2.3.0.20250212.515.dist-info/METADATA,sha256=UH1-RbailayAjuK7F01Wtojtlinjt03YaMrEO2O--pY,26572
-mct_nightly-2.3.0.20250212.515.dist-info/WHEEL,sha256=In9FTNxeP60KnTkGw7wk6mJPYd_dQSjEZmXdBdMCI-8,91
-mct_nightly-2.3.0.20250212.515.dist-info/top_level.txt,sha256=gsYA8juk0Z-ZmQRKULkb3JLGdOdz8jW_cMRjisn9ga4,26
-mct_nightly-2.3.0.20250212.515.dist-info/RECORD,,
+mct_nightly-2.3.0.20250214.519.dist-info/LICENSE.md,sha256=aYSSIb-5AFPeITTvXm1UAoe0uYBiMmSS8flvXaaFUks,10174
+mct_nightly-2.3.0.20250214.519.dist-info/METADATA,sha256=EgRG9bv9An6_dmDl0LVHb7GwXCtLhF-NDHsPCODmeLU,26936
+mct_nightly-2.3.0.20250214.519.dist-info/WHEEL,sha256=In9FTNxeP60KnTkGw7wk6mJPYd_dQSjEZmXdBdMCI-8,91
+mct_nightly-2.3.0.20250214.519.dist-info/top_level.txt,sha256=gsYA8juk0Z-ZmQRKULkb3JLGdOdz8jW_cMRjisn9ga4,26
+mct_nightly-2.3.0.20250214.519.dist-info/RECORD,,
--- model_compression_toolkit/__init__.py
+++ model_compression_toolkit/__init__.py
@@ -27,4 +27,4 @@ from model_compression_toolkit import data_generation
 from model_compression_toolkit import pruning
 from model_compression_toolkit.trainable_infrastructure.keras.load_model import keras_load_quantized_model
 
-__version__ = "2.3.0.20250212.000515"
+__version__ = "2.3.0.20250214.000519"