mct-nightly 2.3.0.20250507.555__py3-none-any.whl → 2.3.0.20250508.558__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: mct-nightly
- Version: 2.3.0.20250507.555
+ Version: 2.3.0.20250508.558
  Summary: A Model Compression Toolkit for neural networks
  Author-email: ssi-dnn-dev@sony.com
  Classifier: Programming Language :: Python :: 3
@@ -52,7 +52,7 @@ ______________________________________________________________________
  <a href="#license">License</a>
  </p>
  <p align="center">
- <a href="https://sony.github.io/model_optimization#prerequisites"><img src="https://img.shields.io/badge/pytorch-2.2%20%7C%202.3%20%7C%202.4%20%7C%202.5-blue" /></a>
+ <a href="https://sony.github.io/model_optimization#prerequisites"><img src="https://img.shields.io/badge/pytorch-2.3%20%7C%202.4%20%7C%202.5%20%7C%202.6-blue" /></a>
  <a href="https://sony.github.io/model_optimization#prerequisites"><img src="https://img.shields.io/badge/tensorflow-2.14%20%7C%202.15-blue" /></a>
  <a href="https://sony.github.io/model_optimization#prerequisites"><img src="https://img.shields.io/badge/python-3.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue" /></a>
  <a href="https://github.com/sony/model_optimization/releases"><img src="https://img.shields.io/github/v/release/sony/model_optimization" /></a>
@@ -65,7 +65,7 @@ ________________________________________________________________________________
 
  ## <div align="center">Getting Started</div>
  ### Quick Installation
- Pip install the model compression toolkit package in a Python>=3.9 environment with PyTorch>=2.1 or Tensorflow>=2.14.
+ Pip install the model compression toolkit package in a Python>=3.9 environment with PyTorch>=2.3 or Tensorflow>=2.14.
  ```
  pip install model-compression-toolkit
  ```
@@ -165,12 +165,12 @@ Currently, MCT is being tested on various Python, Pytorch and TensorFlow version
  <details id="supported-versions">
  <summary>Supported Versions Table</summary>
 
- | | PyTorch 2.2 | PyTorch 2.3 | PyTorch 2.4 | PyTorch 2.5 |
+ | | PyTorch 2.3 | PyTorch 2.4 | PyTorch 2.5 | PyTorch 2.6 |
  |-------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
- | Python 3.9 | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python39_pytorch22.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python39_pytorch22.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python39_pytorch23.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python39_pytorch23.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python39_pytorch24.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python39_pytorch24.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python39_pytorch25.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python39_pytorch25.yml) |
- | Python 3.10 | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python310_pytorch22.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python310_pytorch22.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python310_pytorch23.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python310_pytorch23.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python310_pytorch24.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python310_pytorch24.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python310_pytorch25.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python310_pytorch25.yml) |
- | Python 3.11 | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python311_pytorch22.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python311_pytorch22.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python311_pytorch23.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python311_pytorch23.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python311_pytorch24.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python311_pytorch24.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python311_pytorch25.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python311_pytorch25.yml) |
- | Python 3.12 | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python312_pytorch22.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python312_pytorch22.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python312_pytorch23.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python312_pytorch23.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python312_pytorch24.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python312_pytorch24.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python312_pytorch25.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python312_pytorch25.yml) |
+ | Python 3.9 | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python39_pytorch23.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python39_pytorch23.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python39_pytorch24.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python39_pytorch24.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python39_pytorch25.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python39_pytorch25.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python39_pytorch26.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python39_pytorch26.yml) |
+ | Python 3.10 | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python310_pytorch23.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python310_pytorch23.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python310_pytorch24.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python310_pytorch24.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python310_pytorch25.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python310_pytorch25.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python310_pytorch26.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python310_pytorch26.yml) |
+ | Python 3.11 | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python311_pytorch23.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python311_pytorch23.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python311_pytorch24.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python311_pytorch24.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python311_pytorch25.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python311_pytorch25.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python311_pytorch26.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python311_pytorch26.yml) |
+ | Python 3.12 | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python312_pytorch23.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python312_pytorch23.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python312_pytorch24.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python312_pytorch24.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python312_pytorch25.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python312_pytorch25.yml) | [![Run Tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_python312_pytorch26.yml/badge.svg)](https://github.com/sony/model_optimization/actions/workflows/run_tests_python312_pytorch26.yml) |
 
  | | TensorFlow 2.14 | TensorFlow 2.15 |
  |-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -1,5 +1,5 @@
- mct_nightly-2.3.0.20250507.555.dist-info/licenses/LICENSE.md,sha256=aYSSIb-5AFPeITTvXm1UAoe0uYBiMmSS8flvXaaFUks,10174
- model_compression_toolkit/__init__.py,sha256=QzNbJcOvpHUdWIDaA4UVEDr7PLGs8z-feuEZ7nopltg,1557
+ mct_nightly-2.3.0.20250508.558.dist-info/licenses/LICENSE.md,sha256=aYSSIb-5AFPeITTvXm1UAoe0uYBiMmSS8flvXaaFUks,10174
+ model_compression_toolkit/__init__.py,sha256=BZrRYpDrYtribQk-M_cdzsfLrt0CRD2n5ESgo8l2Vz0,1557
  model_compression_toolkit/constants.py,sha256=iJ6vfTjC2oFIZWt8wvHoxEw5YJi3yl0Hd4q30_8q0Zc,3958
  model_compression_toolkit/defaultdict.py,sha256=LSc-sbZYXENMCw3U9F4GiXuv67IKpdn0Qm7Fr11jy-4,2277
  model_compression_toolkit/logger.py,sha256=L3q7tn3Uht0i_7phnlOWMR2Te2zvzrt2HOz9vYEInts,4529
@@ -335,9 +335,9 @@ model_compression_toolkit/exporter/model_exporter/keras/mctq_keras_exporter.py,s
  model_compression_toolkit/exporter/model_exporter/pytorch/__init__.py,sha256=uZ2RigbY9O2PJ0Il8wPpS_s7frgg9WUGd_SHeKGyl1A,699
  model_compression_toolkit/exporter/model_exporter/pytorch/base_pytorch_exporter.py,sha256=UPVkEUQCMZ4Lld6CRnEOPEmlfe5vcQZG0Q3FwRBodD4,4021
  model_compression_toolkit/exporter/model_exporter/pytorch/export_serialization_format.py,sha256=bPevy6OBqng41PqytBR55e6cBEuyrUS0H8dWX4zgjQ4,967
- model_compression_toolkit/exporter/model_exporter/pytorch/fakely_quant_onnx_pytorch_exporter.py,sha256=5DHg8ch8_DtSQ7M5Aw3LcZLSVFqFH556cKcosp3Ik-I,8720
+ model_compression_toolkit/exporter/model_exporter/pytorch/fakely_quant_onnx_pytorch_exporter.py,sha256=nx1PQ8HC6GljiJQF10681CYAAi7rarD1PNdaattR7FM,8946
  model_compression_toolkit/exporter/model_exporter/pytorch/fakely_quant_torchscript_pytorch_exporter.py,sha256=ksWV2A-Njo-wAxQ_Ye2sLIZXBWJ_WNyjT7-qFFwvV2o,2897
- model_compression_toolkit/exporter/model_exporter/pytorch/pytorch_export_facade.py,sha256=8vYGKa58BkasvoHejYaPwubOJPcW0s-RY79_Kkw0Hy8,6236
+ model_compression_toolkit/exporter/model_exporter/pytorch/pytorch_export_facade.py,sha256=I2YZOwzTMLcclGJoruFrHJlOt5mepodX7mGw_GbnqxA,6372
  model_compression_toolkit/exporter/model_wrapper/__init__.py,sha256=7CF2zvpTrIEm8qnbuHnLZyTZkwBBxV24V8QA0oxGbh0,1187
  model_compression_toolkit/exporter/model_wrapper/fw_agnostic/__init__.py,sha256=pKAdbTCFM_2BrZXUtTIw0ouKotrWwUDF_hP3rPwCM2k,696
  model_compression_toolkit/exporter/model_wrapper/fw_agnostic/get_inferable_quantizers.py,sha256=Bd3QhAR__YC9Xmobd5qHv9ofh_rPn_eTFV0sXizcBnY,2297
@@ -528,7 +528,7 @@ model_compression_toolkit/xquant/pytorch/model_analyzer.py,sha256=b93o800yVB3Z-i
  model_compression_toolkit/xquant/pytorch/pytorch_report_utils.py,sha256=UVN_S9ULHBEldBpShCOt8-soT8YTQ5oE362y96qF_FA,3950
  model_compression_toolkit/xquant/pytorch/similarity_functions.py,sha256=CERxq5K8rqaiE-DlwhZBTUd9x69dtYJlkHOPLB54vm8,2354
  model_compression_toolkit/xquant/pytorch/tensorboard_utils.py,sha256=mkoEktLFFHtEKzzFRn_jCnxjhJolK12TZ5AQeDHzUO8,9767
- mct_nightly-2.3.0.20250507.555.dist-info/METADATA,sha256=hIfm1mpPLcDseqKnO60EFxdc5f3T66WfAIb0gwT7TEk,25157
- mct_nightly-2.3.0.20250507.555.dist-info/WHEEL,sha256=0CuiUZ_p9E4cD6NyLD6UG80LBXYyiSYZOKDm5lp32xk,91
- mct_nightly-2.3.0.20250507.555.dist-info/top_level.txt,sha256=gsYA8juk0Z-ZmQRKULkb3JLGdOdz8jW_cMRjisn9ga4,26
- mct_nightly-2.3.0.20250507.555.dist-info/RECORD,,
+ mct_nightly-2.3.0.20250508.558.dist-info/METADATA,sha256=UrQFua3cjFBx3XleuRW5EOLzgggCBmLPpjrANOAzwPk,25155
+ mct_nightly-2.3.0.20250508.558.dist-info/WHEEL,sha256=0CuiUZ_p9E4cD6NyLD6UG80LBXYyiSYZOKDm5lp32xk,91
+ mct_nightly-2.3.0.20250508.558.dist-info/top_level.txt,sha256=gsYA8juk0Z-ZmQRKULkb3JLGdOdz8jW_cMRjisn9ga4,26
+ mct_nightly-2.3.0.20250508.558.dist-info/RECORD,,
@@ -27,4 +27,4 @@ from model_compression_toolkit import data_generation
  from model_compression_toolkit import pruning
  from model_compression_toolkit.trainable_infrastructure.keras.load_model import keras_load_quantized_model

- __version__ = "2.3.0.20250507.000555"
+ __version__ = "2.3.0.20250508.000558"
@@ -18,6 +18,8 @@ from io import BytesIO
  import torch.nn

  from mct_quantizers import PytorchActivationQuantizationHolder, PytorchQuantizationWrapper
+
+ from model_compression_toolkit.core.pytorch.reader.node_holders import DummyPlaceHolder
  from model_compression_toolkit.verify_packages import FOUND_ONNX
  from model_compression_toolkit.logger import Logger
  from model_compression_toolkit.core.pytorch.utils import to_torch_tensor
@@ -98,15 +100,17 @@ if FOUND_ONNX:
  model_output = self.model(*model_input) if isinstance(model_input, (list, tuple)) else self.model(
  model_input)

+ input_nodes = [n for n in self.model.node_sort if n.type == DummyPlaceHolder]
+ input_names = [f"input_{i}" for i in range(len(input_nodes))] if len(input_nodes) > 1 else ["input"]
+ dynamic_axes = {name: {0: 'batch_size'} for name in input_names}
  if output_names is None:
  # Determine number of outputs and prepare output_names and dynamic_axes
  if isinstance(model_output, (list, tuple)):
  output_names = [f"output_{i}" for i in range(len(model_output))]
- dynamic_axes = {'input': {0: 'batch_size'}}
  dynamic_axes.update({name: {0: 'batch_size'} for name in output_names})
  else:
  output_names = ['output']
- dynamic_axes = {'input': {0: 'batch_size'}, 'output': {0: 'batch_size'}}
+ dynamic_axes.update({'output': {0: 'batch_size'}})
  else:
  if isinstance(model_output, (list, tuple)):
  num_of_outputs = len(model_output)
@@ -115,9 +119,7 @@ if FOUND_ONNX:
  assert len(output_names) == num_of_outputs, (f"Mismatch between number of requested output names "
  f"({output_names}) and model output count "
  f"({num_of_outputs}):\n")
- dynamic_axes = {'input': {0: 'batch_size'}}
  dynamic_axes.update({name: {0: 'batch_size'} for name in output_names})
-
  if hasattr(self.model, 'metadata'):
  onnx_bytes = BytesIO()
  torch.onnx.export(self.model,
@@ -125,7 +127,7 @@ if FOUND_ONNX:
  onnx_bytes,
  opset_version=self._onnx_opset_version,
  verbose=False,
- input_names=['input'],
+ input_names=input_names,
  output_names=output_names,
  dynamic_axes=dynamic_axes)
  onnx_model = onnx.load_from_string(onnx_bytes.getvalue())
@@ -137,7 +139,7 @@ if FOUND_ONNX:
  self.save_model_path,
  opset_version=self._onnx_opset_version,
  verbose=False,
- input_names=['input'],
+ input_names=input_names,
  output_names=output_names,
  dynamic_axes=dynamic_axes)
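The exporter hunks above (apparently fakely_quant_onnx_pytorch_exporter.py, per the RECORD change) stop hard-coding a single 'input' name: they build one ONNX input name per DummyPlaceHolder node (falling back to "input" for single-input models) and mark axis 0 of every input and output as a dynamic batch dimension. A minimal, self-contained sketch of that torch.onnx.export pattern; the TwoInputNet module and output file name below are illustrative only, not part of the package:

```python
import torch
import torch.nn as nn


class TwoInputNet(nn.Module):
    """Toy stand-in for a quantized multi-input model."""
    def forward(self, x, y):
        return x + y, x * y


model = TwoInputNet().eval()
example_inputs = (torch.randn(1, 8), torch.randn(1, 8))

# One name per model input, mirroring the "input_{i}" scheme in the hunk above.
input_names = [f"input_{i}" for i in range(len(example_inputs))]
output_names = ["output_0", "output_1"]
# Axis 0 of every input and output is exported as a dynamic batch dimension.
dynamic_axes = {name: {0: "batch_size"} for name in input_names + output_names}

torch.onnx.export(model,
                  example_inputs,
                  "two_input_model.onnx",
                  input_names=input_names,
                  output_names=output_names,
                  dynamic_axes=dynamic_axes)
```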
 
@@ -13,6 +13,7 @@
  # limitations under the License.
  # ==============================================================================
  from typing import Callable
+ from packaging import version

  from model_compression_toolkit.verify_packages import FOUND_TORCH
  from model_compression_toolkit.exporter.model_exporter.fw_agonstic.quantization_format import QuantizationFormat
@@ -30,6 +31,9 @@ if FOUND_TORCH:
  from model_compression_toolkit.exporter.model_exporter.pytorch.fakely_quant_torchscript_pytorch_exporter import FakelyQuantTorchScriptPyTorchExporter
  from model_compression_toolkit.exporter.model_wrapper.pytorch.validate_layer import is_pytorch_layer_exportable
 
+ if version.parse(torch.__version__) >= version.parse("2.4"):
+ DEFAULT_ONNX_OPSET_VERSION = 20
+
  supported_serialization_quantization_export_dict = {
  PytorchExportSerializationFormat.TORCHSCRIPT: [QuantizationFormat.FAKELY_QUANT],
  PytorchExportSerializationFormat.ONNX: [QuantizationFormat.FAKELY_QUANT, QuantizationFormat.MCTQ]
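The last hunk (pytorch_export_facade.py, per the RECORD change above) gates the default ONNX opset on the installed torch version via packaging.version. A minimal sketch of that check; the fallback value of 15 below is an assumption for illustration only, since the diff shows just the value used for torch >= 2.4:

```python
import torch
from packaging import version

# Assumed fallback for older torch builds; the hunk above only defines the >= 2.4 case.
DEFAULT_ONNX_OPSET_VERSION = 15
if version.parse(torch.__version__) >= version.parse("2.4"):
    # Matches the default the hunk above sets for newer torch versions.
    DEFAULT_ONNX_OPSET_VERSION = 20

print(f"Exporting with ONNX opset {DEFAULT_ONNX_OPSET_VERSION}")
```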