mct-nightly 2.0.0.20240521.145957__py3-none-any.whl → 2.0.0.20240521.151450__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: mct-nightly
-Version: 2.0.0.20240521.145957
+Version: 2.0.0.20240521.151450
 Summary: A Model Compression Toolkit for neural networks
 Home-page: UNKNOWN
 License: UNKNOWN
@@ -137,7 +137,7 @@ The specifications of the algorithm are detailed in the paper: _"**EPTQ: Enhance
 More details on the how to use EPTQ via MCT can be found in the [EPTQ guidelines](https://github.com/sony/model_optimization/blob/main/model_compression_toolkit/gptq/README.md).
 
 
-### Structured Pruning [*]((https://github.com/sony/model_optimization?tab=readme-ov-file#experimental-features))
+### Structured Pruning [*](https://github.com/sony/model_optimization?tab=readme-ov-file#experimental-features)
 MCT introduces a structured and hardware-aware model pruning.
 This pruning technique is designed to compress models for specific hardware architectures,
 taking into account the target platform's Single Instruction, Multiple Data (SIMD) capabilities.
@@ -1,4 +1,4 @@
-model_compression_toolkit/__init__.py,sha256=NcbUULZQvxx5LO_lLdFH8ZUOUCSKVi6uQIEwaBa0A6Q,1573
+model_compression_toolkit/__init__.py,sha256=Bn_mg-MuzyxO3LZrpsbrxNcVtLR939pJG0pwrTKxDd8,1573
 model_compression_toolkit/constants.py,sha256=b63Jk_bC7VXEX3Qn9TZ3wUvrNKD8Mkz8zIuayoyF5eU,3828
 model_compression_toolkit/defaultdict.py,sha256=LSc-sbZYXENMCw3U9F4GiXuv67IKpdn0Qm7Fr11jy-4,2277
 model_compression_toolkit/logger.py,sha256=3DByV41XHRR3kLTJNbpaMmikL8icd9e1N-nkQAY9oDk,4567
@@ -483,8 +483,8 @@ model_compression_toolkit/trainable_infrastructure/keras/quantize_wrapper.py,sha
 model_compression_toolkit/trainable_infrastructure/keras/quantizer_utils.py,sha256=MVwXNymmFRB2NXIBx4e2mdJ1RfoHxRPYRgjb1MQP5kY,1797
 model_compression_toolkit/trainable_infrastructure/pytorch/__init__.py,sha256=huHoBUcKNB6BnY6YaUCcFvdyBtBI172ZoUD8ZYeNc6o,696
 model_compression_toolkit/trainable_infrastructure/pytorch/base_pytorch_quantizer.py,sha256=MxylaVFPgN7zBiRBy6WV610EA4scLgRJFbMucKvvNDU,2896
-mct_nightly-2.0.0.20240521.145957.dist-info/LICENSE.md,sha256=aYSSIb-5AFPeITTvXm1UAoe0uYBiMmSS8flvXaaFUks,10174
-mct_nightly-2.0.0.20240521.145957.dist-info/METADATA,sha256=XUTU6GmKo0wKAzupIrdoygT5xZyQEUeP_X5E2jimx3w,19726
-mct_nightly-2.0.0.20240521.145957.dist-info/WHEEL,sha256=GJ7t_kWBFywbagK5eo9IoUwLW6oyOeTKmQ-9iHFVNxQ,92
-mct_nightly-2.0.0.20240521.145957.dist-info/top_level.txt,sha256=gsYA8juk0Z-ZmQRKULkb3JLGdOdz8jW_cMRjisn9ga4,26
-mct_nightly-2.0.0.20240521.145957.dist-info/RECORD,,
+mct_nightly-2.0.0.20240521.151450.dist-info/LICENSE.md,sha256=aYSSIb-5AFPeITTvXm1UAoe0uYBiMmSS8flvXaaFUks,10174
+mct_nightly-2.0.0.20240521.151450.dist-info/METADATA,sha256=qw3NzBwEoxOOrWzEoLp9sk3IV_SrRgD3eLVsTULtMdo,19724
+mct_nightly-2.0.0.20240521.151450.dist-info/WHEEL,sha256=GJ7t_kWBFywbagK5eo9IoUwLW6oyOeTKmQ-9iHFVNxQ,92
+mct_nightly-2.0.0.20240521.151450.dist-info/top_level.txt,sha256=gsYA8juk0Z-ZmQRKULkb3JLGdOdz8jW_cMRjisn9ga4,26
+mct_nightly-2.0.0.20240521.151450.dist-info/RECORD,,
@@ -27,4 +27,4 @@ from model_compression_toolkit import data_generation
 from model_compression_toolkit import pruning
 from model_compression_toolkit.trainable_infrastructure.keras.load_model import keras_load_quantized_model
 
-__version__ = "2.0.0.20240521.145957"
+__version__ = "2.0.0.20240521.151450"