mct-nightly 2.2.0.20241126.528.tar.gz → 2.2.0.20241128.546.tar.gz

This diff compares the contents of two publicly available package versions as released to a supported public registry. It is provided for informational purposes only and reflects the changes between the versions exactly as they appear in that registry.
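A minimal sketch (an addition, not part of the registry output) of how one might reproduce this comparison locally with pip and the Python standard library, assuming both sdists are still downloadable from PyPI under these exact version strings; the "sdists" directory name and the content-hash approach are illustrative choices:

    import hashlib
    import subprocess
    import tarfile
    from pathlib import Path

    OLD = "2.2.0.20241126.528"
    NEW = "2.2.0.20241128.546"

    # Fetch both source tarballs without resolving dependencies.
    for ver in (OLD, NEW):
        subprocess.run(
            ["pip", "download", f"mct-nightly=={ver}",
             "--no-deps", "--no-binary", ":all:", "-d", "sdists"],
            check=True,
        )

    def digests(tar_path):
        # Map each file path (version directory stripped) to a SHA-256 digest.
        out = {}
        with tarfile.open(tar_path) as tar:
            for member in tar.getmembers():
                if member.isfile():
                    rel = member.name.split("/", 1)[1]  # drop mct-nightly-<ver>/
                    out[rel] = hashlib.sha256(tar.extractfile(member).read()).hexdigest()
        return out

    # The sdist filename on disk may be normalized (e.g. mct_nightly-...), so glob.
    old = digests(next(Path("sdists").glob(f"*{OLD}.tar.gz")))
    new = digests(next(Path("sdists").glob(f"*{NEW}.tar.gz")))

    # Files whose bytes actually changed; entries listed "+0 -0" below are pure
    # renames of the enclosing version directory.
    for rel in sorted(old.keys() & new.keys()):
        if old[rel] != new[rel]:
            print("changed:", rel)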
Files changed (571)
  1. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/PKG-INFO +29 -35
  2. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/README.md +28 -34
  3. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/mct_nightly.egg-info/PKG-INFO +29 -35
  4. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/__init__.py +1 -1
  5. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/LICENSE.md +0 -0
  6. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/mct_nightly.egg-info/SOURCES.txt +0 -0
  7. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/mct_nightly.egg-info/dependency_links.txt +0 -0
  8. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/mct_nightly.egg-info/requires.txt +0 -0
  9. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/mct_nightly.egg-info/top_level.txt +0 -0
  10. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/constants.py +0 -0
  11. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/__init__.py +0 -0
  12. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/analyzer.py +0 -0
  13. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/__init__.py +0 -0
  14. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/back2framework/__init__.py +0 -0
  15. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/back2framework/base_model_builder.py +0 -0
  16. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/base_substitutions.py +0 -0
  17. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/collectors/__init__.py +0 -0
  18. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/collectors/base_collector.py +0 -0
  19. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/collectors/histogram_collector.py +0 -0
  20. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/collectors/mean_collector.py +0 -0
  21. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/collectors/min_max_per_channel_collector.py +0 -0
  22. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/collectors/statistics_collector.py +0 -0
  23. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/framework_implementation.py +0 -0
  24. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/framework_info.py +0 -0
  25. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/fusion/__init__.py +0 -0
  26. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/fusion/graph_fuser.py +0 -0
  27. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/fusion/layer_fusing.py +0 -0
  28. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/graph/__init__.py +0 -0
  29. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/graph/base_graph.py +0 -0
  30. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/graph/base_node.py +0 -0
  31. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/graph/edge.py +0 -0
  32. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/graph/functional_node.py +0 -0
  33. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/graph/graph_matchers.py +0 -0
  34. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/graph/graph_searches.py +0 -0
  35. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/graph/memory_graph/__init__.py +0 -0
  36. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/graph/memory_graph/bipartite_graph.py +0 -0
  37. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/graph/memory_graph/compute_graph_max_cut.py +0 -0
  38. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/graph/memory_graph/cut.py +0 -0
  39. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/graph/memory_graph/max_cut_astar.py +0 -0
  40. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/graph/memory_graph/memory_element.py +0 -0
  41. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/graph/memory_graph/memory_graph.py +0 -0
  42. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/graph/virtual_activation_weights_node.py +0 -0
  43. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/hessian/__init__.py +0 -0
  44. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/hessian/hessian_info_service.py +0 -0
  45. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/hessian/hessian_info_utils.py +0 -0
  46. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/hessian/hessian_scores_calculator.py +0 -0
  47. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/hessian/hessian_scores_request.py +0 -0
  48. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/matchers/__init__.py +0 -0
  49. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/matchers/base_graph_filter.py +0 -0
  50. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/matchers/base_matcher.py +0 -0
  51. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/matchers/edge_matcher.py +0 -0
  52. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/matchers/function.py +0 -0
  53. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/matchers/node_matcher.py +0 -0
  54. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/matchers/walk_matcher.py +0 -0
  55. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/memory_computation.py +0 -0
  56. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/__init__.py +0 -0
  57. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/bit_width_setter.py +0 -0
  58. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/configurable_quant_id.py +0 -0
  59. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/configurable_quantizer_utils.py +0 -0
  60. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/distance_weighting.py +0 -0
  61. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/mixed_precision_candidates_filter.py +0 -0
  62. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/mixed_precision_quantization_config.py +0 -0
  63. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/mixed_precision_search_facade.py +0 -0
  64. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/mixed_precision_search_manager.py +0 -0
  65. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/resource_utilization_tools/__init__.py +0 -0
  66. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/resource_utilization_tools/resource_utilization.py +0 -0
  67. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/resource_utilization_tools/resource_utilization_data.py +0 -0
  68. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/resource_utilization_tools/ru_aggregation_methods.py +0 -0
  69. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/resource_utilization_tools/ru_functions_mapping.py +0 -0
  70. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/resource_utilization_tools/ru_methods.py +0 -0
  71. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/search_methods/__init__.py +0 -0
  72. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/search_methods/linear_programming.py +0 -0
  73. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/sensitivity_evaluation.py +0 -0
  74. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/set_layer_to_bitwidth.py +0 -0
  75. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/mixed_precision/solution_refinement_procedure.py +0 -0
  76. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/model_builder_mode.py +0 -0
  77. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/model_collector.py +0 -0
  78. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/model_validation.py +0 -0
  79. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/network_editors/__init__.py +0 -0
  80. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/network_editors/actions.py +0 -0
  81. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/network_editors/edit_network.py +0 -0
  82. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/network_editors/node_filters.py +0 -0
  83. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/node_prior_info.py +0 -0
  84. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/__init__.py +0 -0
  85. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/channels_grouping.py +0 -0
  86. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/greedy_mask_calculator.py +0 -0
  87. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/importance_metrics/__init__.py +0 -0
  88. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/importance_metrics/base_importance_metric.py +0 -0
  89. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/importance_metrics/importance_metric_factory.py +0 -0
  90. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/importance_metrics/lfh_importance_metric.py +0 -0
  91. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/mask/__init__.py +0 -0
  92. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/mask/per_channel_mask.py +0 -0
  93. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/mask/per_simd_group_mask.py +0 -0
  94. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/memory_calculator.py +0 -0
  95. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/prune_graph.py +0 -0
  96. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/pruner.py +0 -0
  97. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/pruning_config.py +0 -0
  98. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/pruning_framework_implementation.py +0 -0
  99. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/pruning_info.py +0 -0
  100. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/pruning/pruning_section.py +0 -0
  101. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/__init__.py +0 -0
  102. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/bit_width_config.py +0 -0
  103. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/candidate_node_quantization_config.py +0 -0
  104. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/core_config.py +0 -0
  105. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/debug_config.py +0 -0
  106. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/filter_nodes_candidates.py +0 -0
  107. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/node_quantization_config.py +0 -0
  108. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantization_config.py +0 -0
  109. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantization_fn_selection.py +0 -0
  110. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantization_params_fn_selection.py +0 -0
  111. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantization_params_generation/__init__.py +0 -0
  112. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantization_params_generation/error_functions.py +0 -0
  113. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantization_params_generation/lut_kmeans_params.py +0 -0
  114. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantization_params_generation/outlier_filter.py +0 -0
  115. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantization_params_generation/power_of_two_selection.py +0 -0
  116. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantization_params_generation/qparams_activations_computation.py +0 -0
  117. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantization_params_generation/qparams_computation.py +0 -0
  118. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantization_params_generation/qparams_search.py +0 -0
  119. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantization_params_generation/qparams_weights_computation.py +0 -0
  120. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantization_params_generation/symmetric_selection.py +0 -0
  121. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantization_params_generation/uniform_selection.py +0 -0
  122. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantize_graph_weights.py +0 -0
  123. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantize_node.py +0 -0
  124. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantizers/__init__.py +0 -0
  125. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantizers/lut_kmeans_quantizer.py +0 -0
  126. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantizers/quantizers_helpers.py +0 -0
  127. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/quantizers/uniform_quantizers.py +0 -0
  128. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/quantization/set_node_quantization_config.py +0 -0
  129. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/similarity_analyzer.py +0 -0
  130. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/statistics_correction/__init__.py +0 -0
  131. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/statistics_correction/apply_activation_bias_correction_to_graph.py +0 -0
  132. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/statistics_correction/apply_bias_correction_to_graph.py +0 -0
  133. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/statistics_correction/apply_second_moment_correction_to_graph.py +0 -0
  134. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/statistics_correction/compute_activation_bias_correction_of_graph.py +0 -0
  135. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/statistics_correction/compute_bias_correction_of_graph.py +0 -0
  136. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/statistics_correction/statistics_correction.py +0 -0
  137. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/substitutions/__init__.py +0 -0
  138. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/substitutions/apply_substitutions.py +0 -0
  139. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/substitutions/batchnorm_folding.py +0 -0
  140. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/substitutions/batchnorm_reconstruction.py +0 -0
  141. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/substitutions/batchnorm_refusing.py +0 -0
  142. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/substitutions/linear_collapsing.py +0 -0
  143. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/substitutions/linear_collapsing_substitution.py +0 -0
  144. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/substitutions/remove_identity.py +0 -0
  145. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/substitutions/residual_collapsing.py +0 -0
  146. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/substitutions/scale_equalization.py +0 -0
  147. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/substitutions/shift_negative_activation.py +0 -0
  148. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/substitutions/softmax_shift.py +0 -0
  149. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/substitutions/virtual_activation_weights_composition.py +0 -0
  150. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/substitutions/weights_activation_split.py +0 -0
  151. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/user_info.py +0 -0
  152. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/visualization/__init__.py +0 -0
  153. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/visualization/final_config_visualizer.py +0 -0
  154. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/visualization/nn_visualizer.py +0 -0
  155. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/common/visualization/tensorboard_writer.py +0 -0
  156. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/graph_prep_runner.py +0 -0
  157. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/__init__.py +0 -0
  158. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/back2framework/__init__.py +0 -0
  159. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/back2framework/factory_model_builder.py +0 -0
  160. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/back2framework/float_model_builder.py +0 -0
  161. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/back2framework/instance_builder.py +0 -0
  162. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/back2framework/keras_model_builder.py +0 -0
  163. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/back2framework/mixed_precision_model_builder.py +0 -0
  164. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/back2framework/quantized_model_builder.py +0 -0
  165. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/constants.py +0 -0
  166. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/custom_layer_validation.py +0 -0
  167. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/data_util.py +0 -0
  168. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/default_framework_info.py +0 -0
  169. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/__init__.py +0 -0
  170. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/__init__.py +0 -0
  171. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/activation_decomposition.py +0 -0
  172. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/batchnorm_folding.py +0 -0
  173. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/batchnorm_reconstruction.py +0 -0
  174. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/batchnorm_refusing.py +0 -0
  175. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/concat_threshold_update.py +0 -0
  176. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/conv_funcs_to_layer.py +0 -0
  177. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/dwconv_to_conv.py +0 -0
  178. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/input_scaling.py +0 -0
  179. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/linear_collapsing.py +0 -0
  180. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/matmul_substitution.py +0 -0
  181. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/multi_head_attention_decomposition.py +0 -0
  182. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/relu_bound_to_power_of_2.py +0 -0
  183. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/remove_identity.py +0 -0
  184. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/residual_collapsing.py +0 -0
  185. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/scale_equalization.py +0 -0
  186. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/separableconv_decomposition.py +0 -0
  187. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/shift_negative_activation.py +0 -0
  188. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/sigmoid_mul_to_swish.py +0 -0
  189. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/softmax_shift.py +0 -0
  190. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/virtual_activation_weights_composition.py +0 -0
  191. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/graph_substitutions/substitutions/weights_activation_split.py +0 -0
  192. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/hessian/__init__.py +0 -0
  193. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/hessian/activation_hessian_scores_calculator_keras.py +0 -0
  194. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/hessian/hessian_scores_calculator_keras.py +0 -0
  195. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/hessian/weights_hessian_scores_calculator_keras.py +0 -0
  196. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/keras_implementation.py +0 -0
  197. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/keras_model_validation.py +0 -0
  198. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/keras_node_prior_info.py +0 -0
  199. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/mixed_precision/__init__.py +0 -0
  200. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/mixed_precision/configurable_activation_quantizer.py +0 -0
  201. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/mixed_precision/configurable_weights_quantizer.py +0 -0
  202. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/pruning/__init__.py +0 -0
  203. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/pruning/pruning_keras_implementation.py +0 -0
  204. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/quantizer/__init__.py +0 -0
  205. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/quantizer/fake_quant_builder.py +0 -0
  206. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/quantizer/lut_fake_quant.py +0 -0
  207. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/reader/__init__.py +0 -0
  208. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/reader/common.py +0 -0
  209. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/reader/connectivity_handler.py +0 -0
  210. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/reader/nested_model/__init__.py +0 -0
  211. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/reader/nested_model/edges_merger.py +0 -0
  212. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/reader/nested_model/nested_model_handler.py +0 -0
  213. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/reader/nested_model/nodes_merger.py +0 -0
  214. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/reader/nested_model/outputs_merger.py +0 -0
  215. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/reader/node_builder.py +0 -0
  216. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/reader/reader.py +0 -0
  217. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/resource_utilization_data_facade.py +0 -0
  218. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/statistics_correction/__init__.py +0 -0
  219. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/statistics_correction/apply_second_moment_correction.py +0 -0
  220. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/statistics_correction/keras_compute_activation_bias_correction_of_graph.py +0 -0
  221. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/tf_tensor_numpy.py +0 -0
  222. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/keras/visualization/__init__.py +0 -0
  223. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/__init__.py +0 -0
  224. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/back2framework/__init__.py +0 -0
  225. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/back2framework/factory_model_builder.py +0 -0
  226. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/back2framework/float_model_builder.py +0 -0
  227. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/back2framework/instance_builder.py +0 -0
  228. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/back2framework/mixed_precision_model_builder.py +0 -0
  229. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/back2framework/pytorch_model_builder.py +0 -0
  230. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/back2framework/quantization_wrapper/__init__.py +0 -0
  231. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/back2framework/quantization_wrapper/quantized_layer_wrapper.py +0 -0
  232. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/back2framework/quantization_wrapper/wrapper_quantize_config.py +0 -0
  233. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/back2framework/quantized_model_builder.py +0 -0
  234. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/constants.py +0 -0
  235. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/data_util.py +0 -0
  236. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/default_framework_info.py +0 -0
  237. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/__init__.py +0 -0
  238. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/__init__.py +0 -0
  239. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/batchnorm_folding.py +0 -0
  240. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/batchnorm_reconstruction.py +0 -0
  241. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/batchnorm_refusing.py +0 -0
  242. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/concat_threshold_update.py +0 -0
  243. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/const_holder_conv.py +0 -0
  244. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/functional_batch_norm.py +0 -0
  245. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/functional_layer_norm.py +0 -0
  246. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/functional_linear.py +0 -0
  247. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/linear_collapsing.py +0 -0
  248. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/multi_head_attention_decomposition.py +0 -0
  249. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/relu_bound_to_power_of_2.py +0 -0
  250. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/remove_identity.py +0 -0
  251. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/reshape_with_static_shapes.py +0 -0
  252. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/residual_collapsing.py +0 -0
  253. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/scale_equalization.py +0 -0
  254. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/scaled_dot_product_attention.py +0 -0
  255. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/shift_negative_activation.py +0 -0
  256. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/softmax_shift.py +0 -0
  257. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/transform_function_call_method.py +0 -0
  258. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/virtual_activation_weights_composition.py +0 -0
  259. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/weights_activation_split.py +0 -0
  260. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/hessian/__init__.py +0 -0
  261. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/hessian/activation_hessian_scores_calculator_pytorch.py +0 -0
  262. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/hessian/hessian_scores_calculator_pytorch.py +0 -0
  263. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/hessian/weights_hessian_scores_calculator_pytorch.py +0 -0
  264. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/mixed_precision/__init__.py +0 -0
  265. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/mixed_precision/configurable_activation_quantizer.py +0 -0
  266. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/mixed_precision/configurable_weights_quantizer.py +0 -0
  267. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/pruning/__init__.py +0 -0
  268. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/pruning/pruning_pytorch_implementation.py +0 -0
  269. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/pytorch_device_config.py +0 -0
  270. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/pytorch_implementation.py +0 -0
  271. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/pytorch_node_prior_info.py +0 -0
  272. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/quantizer/__init__.py +0 -0
  273. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/quantizer/fake_quant_builder.py +0 -0
  274. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/quantizer/lut_fake_quant.py +0 -0
  275. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/reader/__init__.py +0 -0
  276. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/reader/graph_builders.py +0 -0
  277. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/reader/node_holders.py +0 -0
  278. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/reader/reader.py +0 -0
  279. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/resource_utilization_data_facade.py +0 -0
  280. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/statistics_correction/__init__.py +0 -0
  281. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/statistics_correction/apply_second_moment_correction.py +0 -0
  282. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/statistics_correction/pytorch_compute_activation_bias_correction_of_graph.py +0 -0
  283. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/pytorch/utils.py +0 -0
  284. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/quantization_prep_runner.py +0 -0
  285. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/core/runner.py +0 -0
  286. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/__init__.py +0 -0
  287. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/common/__init__.py +0 -0
  288. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/common/constants.py +0 -0
  289. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/common/data_generation.py +0 -0
  290. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/common/data_generation_config.py +0 -0
  291. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/common/enums.py +0 -0
  292. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/common/image_pipeline.py +0 -0
  293. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/common/model_info_exctractors.py +0 -0
  294. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/common/optimization_utils.py +0 -0
  295. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/keras/__init__.py +0 -0
  296. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/keras/constants.py +0 -0
  297. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/keras/image_operations.py +0 -0
  298. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/keras/image_pipeline.py +0 -0
  299. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/keras/keras_data_generation.py +0 -0
  300. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/keras/model_info_exctractors.py +0 -0
  301. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/keras/optimization_functions/__init__.py +0 -0
  302. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/keras/optimization_functions/batchnorm_alignment_functions.py +0 -0
  303. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/keras/optimization_functions/bn_layer_weighting_functions.py +0 -0
  304. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/keras/optimization_functions/image_initilization.py +0 -0
  305. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/keras/optimization_functions/lr_scheduler.py +0 -0
  306. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/keras/optimization_functions/output_loss_functions.py +0 -0
  307. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/keras/optimization_functions/scheduler_step_functions.py +0 -0
  308. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/keras/optimization_utils.py +0 -0
  309. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/pytorch/__init__.py +0 -0
  310. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/pytorch/constants.py +0 -0
  311. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/pytorch/image_operations.py +0 -0
  312. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/pytorch/image_pipeline.py +0 -0
  313. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/pytorch/model_info_exctractors.py +0 -0
  314. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/pytorch/optimization_functions/__init__.py +0 -0
  315. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/pytorch/optimization_functions/batchnorm_alignment_functions.py +0 -0
  316. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/pytorch/optimization_functions/bn_layer_weighting_functions.py +0 -0
  317. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/pytorch/optimization_functions/image_initilization.py +0 -0
  318. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/pytorch/optimization_functions/lr_scheduler.py +0 -0
  319. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/pytorch/optimization_functions/output_loss_functions.py +0 -0
  320. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/pytorch/optimization_functions/scheduler_step_functions.py +0 -0
  321. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/pytorch/optimization_utils.py +0 -0
  322. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/data_generation/pytorch/pytorch_data_generation.py +0 -0
  323. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/defaultdict.py +0 -0
  324. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/__init__.py +0 -0
  325. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/__init__.py +0 -0
  326. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/fw_agonstic/__init__.py +0 -0
  327. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/fw_agonstic/exporter.py +0 -0
  328. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/fw_agonstic/quantization_format.py +0 -0
  329. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/keras/__init__.py +0 -0
  330. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/keras/base_keras_exporter.py +0 -0
  331. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/keras/export_serialization_format.py +0 -0
  332. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/keras/fakely_quant_keras_exporter.py +0 -0
  333. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/keras/fakely_quant_tflite_exporter.py +0 -0
  334. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/keras/int8_tflite_exporter.py +0 -0
  335. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/keras/keras_export_facade.py +0 -0
  336. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/keras/mctq_keras_exporter.py +0 -0
  337. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/pytorch/__init__.py +0 -0
  338. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/pytorch/base_pytorch_exporter.py +0 -0
  339. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/pytorch/export_serialization_format.py +0 -0
  340. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/pytorch/fakely_quant_onnx_pytorch_exporter.py +0 -0
  341. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/pytorch/fakely_quant_torchscript_pytorch_exporter.py +0 -0
  342. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_exporter/pytorch/pytorch_export_facade.py +0 -0
  343. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_wrapper/__init__.py +0 -0
  344. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_wrapper/fw_agnostic/__init__.py +0 -0
  345. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_wrapper/fw_agnostic/get_inferable_quantizers.py +0 -0
  346. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_wrapper/keras/__init__.py +0 -0
  347. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_wrapper/keras/builder/__init__.py +0 -0
  348. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_wrapper/keras/builder/fully_quantized_model_builder.py +0 -0
  349. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_wrapper/keras/builder/node_to_quantizer.py +0 -0
  350. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_wrapper/keras/validate_layer.py +0 -0
  351. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_wrapper/pytorch/__init__.py +0 -0
  352. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_wrapper/pytorch/builder/__init__.py +0 -0
  353. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_wrapper/pytorch/builder/fully_quantized_model_builder.py +0 -0
  354. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_wrapper/pytorch/builder/node_to_quantizer.py +0 -0
  355. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/exporter/model_wrapper/pytorch/validate_layer.py +0 -0
  356. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/__init__.py +0 -0
  357. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/common/__init__.py +0 -0
  358. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/common/gptq_config.py +0 -0
  359. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/common/gptq_constants.py +0 -0
  360. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/common/gptq_framework_implementation.py +0 -0
  361. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/common/gptq_graph.py +0 -0
  362. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/common/gptq_training.py +0 -0
  363. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/common/gradual_activation_quantization.py +0 -0
  364. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/common/regularization_factory.py +0 -0
  365. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/keras/__init__.py +0 -0
  366. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/keras/gptq_keras_implementation.py +0 -0
  367. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/keras/gptq_loss.py +0 -0
  368. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/keras/gptq_training.py +0 -0
  369. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/keras/graph_info.py +0 -0
  370. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/keras/quantization_facade.py +0 -0
  371. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/keras/quantizer/__init__.py +0 -0
  372. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/keras/quantizer/base_keras_gptq_quantizer.py +0 -0
  373. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/keras/quantizer/quant_utils.py +0 -0
  374. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/keras/quantizer/quantization_builder.py +0 -0
  375. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/keras/quantizer/soft_rounding/__init__.py +0 -0
  376. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/keras/quantizer/soft_rounding/soft_quantizer_reg.py +0 -0
  377. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/keras/quantizer/soft_rounding/symmetric_soft_quantizer.py +0 -0
  378. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/keras/quantizer/soft_rounding/uniform_soft_quantizer.py +0 -0
  379. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/keras/quantizer/ste_rounding/__init__.py +0 -0
  380. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/keras/quantizer/ste_rounding/symmetric_ste.py +0 -0
  381. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/pytorch/__init__.py +0 -0
  382. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/pytorch/gptq_loss.py +0 -0
  383. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/pytorch/gptq_pytorch_implementation.py +0 -0
  384. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/pytorch/gptq_training.py +0 -0
  385. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/pytorch/graph_info.py +0 -0
  386. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/pytorch/quantization_facade.py +0 -0
  387. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/pytorch/quantizer/__init__.py +0 -0
  388. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/pytorch/quantizer/base_pytorch_gptq_quantizer.py +0 -0
  389. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/pytorch/quantizer/quant_utils.py +0 -0
  390. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/pytorch/quantizer/quantization_builder.py +0 -0
  391. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/pytorch/quantizer/soft_rounding/__init__.py +0 -0
  392. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/pytorch/quantizer/soft_rounding/soft_quantizer_reg.py +0 -0
  393. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/pytorch/quantizer/soft_rounding/symmetric_soft_quantizer.py +0 -0
  394. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/pytorch/quantizer/soft_rounding/uniform_soft_quantizer.py +0 -0
  395. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/pytorch/quantizer/ste_rounding/__init__.py +0 -0
  396. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/pytorch/quantizer/ste_rounding/symmetric_ste.py +0 -0
  397. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/gptq/runner.py +0 -0
  398. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/logger.py +0 -0
  399. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/metadata.py +0 -0
  400. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/pruning/__init__.py +0 -0
  401. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/pruning/keras/__init__.py +0 -0
  402. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/pruning/keras/pruning_facade.py +0 -0
  403. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/pruning/pytorch/__init__.py +0 -0
  404. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/pruning/pytorch/pruning_facade.py +0 -0
  405. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/ptq/__init__.py +0 -0
  406. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/ptq/keras/__init__.py +0 -0
  407. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/ptq/keras/quantization_facade.py +0 -0
  408. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/ptq/pytorch/__init__.py +0 -0
  409. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/ptq/pytorch/quantization_facade.py +0 -0
  410. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/ptq/runner.py +0 -0
  411. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/__init__.py +0 -0
  412. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/common/__init__.py +0 -0
  413. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/common/qat_config.py +0 -0
  414. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/keras/__init__.py +0 -0
  415. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/keras/quantization_facade.py +0 -0
  416. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/keras/quantizer/__init__.py +0 -0
  417. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/keras/quantizer/base_keras_qat_weight_quantizer.py +0 -0
  418. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/keras/quantizer/lsq/__init__.py +0 -0
  419. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/keras/quantizer/lsq/symmetric_lsq.py +0 -0
  420. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/keras/quantizer/lsq/uniform_lsq.py +0 -0
  421. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/keras/quantizer/quant_utils.py +0 -0
  422. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/keras/quantizer/quantization_builder.py +0 -0
  423. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/keras/quantizer/ste_rounding/__init__.py +0 -0
  424. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/keras/quantizer/ste_rounding/symmetric_ste.py +0 -0
  425. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/keras/quantizer/ste_rounding/uniform_ste.py +0 -0
  426. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/pytorch/__init__.py +0 -0
  427. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/pytorch/quantization_facade.py +0 -0
  428. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/pytorch/quantizer/__init__.py +0 -0
  429. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/pytorch/quantizer/base_pytorch_qat_weight_quantizer.py +0 -0
  430. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/pytorch/quantizer/lsq/__init__.py +0 -0
  431. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/pytorch/quantizer/lsq/symmetric_lsq.py +0 -0
  432. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/pytorch/quantizer/lsq/uniform_lsq.py +0 -0
  433. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/pytorch/quantizer/quantization_builder.py +0 -0
  434. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/pytorch/quantizer/ste_rounding/__init__.py +0 -0
  435. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/pytorch/quantizer/ste_rounding/symmetric_ste.py +0 -0
  436. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/qat/pytorch/quantizer/ste_rounding/uniform_ste.py +0 -0
  437. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/__init__.py +0 -0
  438. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/constants.py +0 -0
  439. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/immutable.py +0 -0
  440. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/target_platform/__init__.py +0 -0
  441. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/target_platform/current_tp_model.py +0 -0
  442. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/target_platform/fusing.py +0 -0
  443. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/target_platform/op_quantization_config.py +0 -0
  444. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/target_platform/operators.py +0 -0
  445. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/target_platform/target_platform_model.py +0 -0
  446. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/target_platform/target_platform_model_component.py +0 -0
  447. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/target_platform/targetplatform2framework/__init__.py +0 -0
  448. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/target_platform/targetplatform2framework/attribute_filter.py +0 -0
  449. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/target_platform/targetplatform2framework/current_tpc.py +0 -0
  450. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/target_platform/targetplatform2framework/layer_filter_params.py +0 -0
  451. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/target_platform/targetplatform2framework/operations_to_layers.py +0 -0
  452. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/target_platform/targetplatform2framework/target_platform_capabilities.py +0 -0
  453. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/target_platform/targetplatform2framework/target_platform_capabilities_component.py +0 -0
  454. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/__init__.py +0 -0
  455. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/get_target_platform_capabilities.py +0 -0
  456. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/__init__.py +0 -0
  457. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/latest/__init__.py +0 -0
  458. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/target_platform_capabilities.py +0 -0
  459. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v1/__init__.py +0 -0
  460. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v1/tp_model.py +0 -0
  461. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v1/tpc_keras.py +0 -0
  462. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v1/tpc_pytorch.py +0 -0
  463. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v1_lut/__init__.py +0 -0
  464. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v1_lut/tp_model.py +0 -0
  465. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v1_lut/tpc_keras.py +0 -0
  466. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v1_lut/tpc_pytorch.py +0 -0
  467. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v1_pot/__init__.py +0 -0
  468. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v1_pot/tp_model.py +0 -0
  469. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v1_pot/tpc_keras.py +0 -0
  470. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v1_pot/tpc_pytorch.py +0 -0
  471. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v2/__init__.py +0 -0
  472. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v2/tp_model.py +0 -0
  473. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v2/tpc_keras.py +0 -0
  474. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v2/tpc_pytorch.py +0 -0
  475. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v2_lut/__init__.py +0 -0
  476. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v2_lut/tp_model.py +0 -0
  477. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v2_lut/tpc_keras.py +0 -0
  478. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v2_lut/tpc_pytorch.py +0 -0
  479. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v3/__init__.py +0 -0
  480. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v3/tp_model.py +0 -0
  481. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v3/tpc_keras.py +0 -0
  482. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v3/tpc_pytorch.py +0 -0
  483. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v3_lut/__init__.py +0 -0
  484. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v3_lut/tp_model.py +0 -0
  485. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v3_lut/tpc_keras.py +0 -0
  486. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v3_lut/tpc_pytorch.py +0 -0
  487. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v4/__init__.py +0 -0
  488. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v4/tp_model.py +0 -0
  489. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v4/tpc_keras.py +0 -0
  490. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v4/tpc_pytorch.py +0 -0
  491. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/qnnpack_tpc/__init__.py +0 -0
  492. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/qnnpack_tpc/latest/__init__.py +0 -0
  493. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/qnnpack_tpc/target_platform_capabilities.py +0 -0
  494. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/qnnpack_tpc/v1/__init__.py +0 -0
  495. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/qnnpack_tpc/v1/tp_model.py +0 -0
  496. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/qnnpack_tpc/v1/tpc_keras.py +0 -0
  497. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/qnnpack_tpc/v1/tpc_pytorch.py +0 -0
  498. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/tflite_tpc/__init__.py +0 -0
  499. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/tflite_tpc/latest/__init__.py +0 -0
  500. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/tflite_tpc/target_platform_capabilities.py +0 -0
  501. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/tflite_tpc/v1/__init__.py +0 -0
  502. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/tflite_tpc/v1/tp_model.py +0 -0
  503. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/tflite_tpc/v1/tpc_keras.py +0 -0
  504. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/target_platform_capabilities/tpc_models/tflite_tpc/v1/tpc_pytorch.py +0 -0
  505. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/__init__.py +0 -0
  506. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/common/__init__.py +0 -0
  507. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/common/annealing_schedulers.py +0 -0
  508. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/common/base_trainable_quantizer.py +0 -0
  509. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/common/constants.py +0 -0
  510. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/common/get_quantizer_config.py +0 -0
  511. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/common/get_quantizers.py +0 -0
  512. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/common/quant_utils.py +0 -0
  513. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/common/trainable_quantizer_config.py +0 -0
  514. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/common/training_method.py +0 -0
  515. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/common/util.py +0 -0
  516. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/keras/__init__.py +0 -0
  517. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/keras/activation_quantizers/__init__.py +0 -0
  518. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/keras/activation_quantizers/base_activation_quantizer.py +0 -0
  519. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/keras/activation_quantizers/lsq/__init__.py +0 -0
  520. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/keras/activation_quantizers/lsq/symmetric_lsq.py +0 -0
  521. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/keras/activation_quantizers/lsq/uniform_lsq.py +0 -0
  522. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/keras/activation_quantizers/ste/__init__.py +0 -0
  523. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/keras/activation_quantizers/ste/symmetric_ste.py +0 -0
  524. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/keras/activation_quantizers/ste/uniform_ste.py +0 -0
  525. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/keras/annealing_schedulers.py +0 -0
  526. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/keras/base_keras_quantizer.py +0 -0
  527. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/keras/config_serialization.py +0 -0
  528. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/keras/load_model.py +0 -0
  529. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/keras/quantize_wrapper.py +0 -0
  530. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/keras/quantizer_utils.py +0 -0
  531. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/pytorch/__init__.py +0 -0
  532. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/pytorch/activation_quantizers/__init__.py +0 -0
  533. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/pytorch/activation_quantizers/base_activation_quantizer.py +0 -0
  534. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/pytorch/activation_quantizers/lsq/__init__.py +0 -0
  535. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/pytorch/activation_quantizers/lsq/symmetric_lsq.py +0 -0
  536. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/pytorch/activation_quantizers/lsq/uniform_lsq.py +0 -0
  537. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/pytorch/activation_quantizers/ste/__init__.py +0 -0
  538. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/pytorch/activation_quantizers/ste/symmetric_ste.py +0 -0
  539. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/pytorch/activation_quantizers/ste/uniform_ste.py +0 -0
  540. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/pytorch/annealing_schedulers.py +0 -0
  541. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/pytorch/base_pytorch_quantizer.py +0 -0
  542. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/trainable_infrastructure/pytorch/quantizer_utils.py +0 -0
  543. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/verify_packages.py +0 -0
  544. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/__init__.py +0 -0
  545. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/common/__init__.py +0 -0
  546. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/common/constants.py +0 -0
  547. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/common/core_report_generator.py +0 -0
  548. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/common/dataset_utils.py +0 -0
  549. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/common/framework_report_utils.py +0 -0
  550. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/common/model_analyzer.py +0 -0
  551. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/common/model_folding_utils.py +0 -0
  552. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/common/similarity_calculator.py +0 -0
  553. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/common/similarity_functions.py +0 -0
  554. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/common/tensorboard_utils.py +0 -0
  555. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/common/xquant_config.py +0 -0
  556. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/keras/__init__.py +0 -0
  557. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/keras/dataset_utils.py +0 -0
  558. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/keras/facade_xquant_report.py +0 -0
  559. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/keras/keras_report_utils.py +0 -0
  560. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/keras/model_analyzer.py +0 -0
  561. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/keras/similarity_functions.py +0 -0
  562. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/keras/tensorboard_utils.py +0 -0
  563. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/pytorch/__init__.py +0 -0
  564. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/pytorch/dataset_utils.py +0 -0
  565. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/pytorch/facade_xquant_report.py +0 -0
  566. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/pytorch/model_analyzer.py +0 -0
  567. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/pytorch/pytorch_report_utils.py +0 -0
  568. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/pytorch/similarity_functions.py +0 -0
  569. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/model_compression_toolkit/xquant/pytorch/tensorboard_utils.py +0 -0
  570. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/setup.cfg +0 -0
  571. {mct-nightly-2.2.0.20241126.528 → mct-nightly-2.2.0.20241128.546}/setup.py +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: mct-nightly
- Version: 2.2.0.20241126.528
+ Version: 2.2.0.20241128.546
  Summary: A Model Compression Toolkit for neural networks
  Home-page: UNKNOWN
  License: UNKNOWN
@@ -56,9 +56,9 @@ Description: <div align="center" markdown="1">
 
  Quantization Method | Complexity | Computational Cost | API | Tutorial
  -------------------- | -----------|--------------------|---------|--------
- PTQ (Post Training Quantization) | Low | Low (~1-10 CPU minutes) | [PyTorch API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/pytorch_post_training_quantization.html) / [Keras API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/keras_post_training_quantization.html) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_post_training_quantization.ipynb"><img src="https://img.shields.io/badge/Pytorch-green"/></a> <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_post-training_quantization.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
- GPTQ (parameters fine-tuning using gradients) | Moderate | Moderate (~1-3 GPU hours) | [PyTorch API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/pytorch_gradient_post_training_quantization.html) / [Keras API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/keras_gradient_post_training_quantization.html) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_mobilenet_gptq.ipynb"><img src="https://img.shields.io/badge/PyTorch-green"/></a> <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_mobilenet_gptq.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
- QAT (Quantization Aware Training) | High | High (~12-36 GPU hours) | [QAT API](https://sony.github.io/model_optimization/docs/api/api_docs/index.html#qat) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_qat.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
+ PTQ (Post Training Quantization) | Low | Low (~1-10 CPU minutes) | [PyTorch API](https://sony.github.io/model_optimization/api/api_docs/methods/pytorch_post_training_quantization.html) / [Keras API](https://sony.github.io/model_optimization/api/api_docs/methods/keras_post_training_quantization.html) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_post_training_quantization.ipynb"><img src="https://img.shields.io/badge/Pytorch-green"/></a> <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_post-training_quantization.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
+ GPTQ (parameters fine-tuning using gradients) | Moderate | Moderate (~1-3 GPU hours) | [PyTorch API](https://sony.github.io/model_optimization/api/api_docs/methods/pytorch_gradient_post_training_quantization.html) / [Keras API](https://sony.github.io/model_optimization/api/api_docs/methods/keras_gradient_post_training_quantization.html) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_mobilenet_gptq.ipynb"><img src="https://img.shields.io/badge/PyTorch-green"/></a> <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_mobilenet_gptq.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
+ QAT (Quantization Aware Training) | High | High (~12-36 GPU hours) | [QAT API](https://sony.github.io/model_optimization/api/api_docs/index.html#qat) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_qat.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
 
  </p>
  </div>
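For orientation while reviewing this hunk: the PyTorch PTQ facade named in the table can be exercised roughly as follows. This is a minimal sketch, not code from the diff — the torchvision MobileNetV2 and the random calibration tensors are illustrative assumptions, and a real run should feed representative images from the deployment domain.

```python
# Minimal PTQ sketch (assumptions: torchvision MobileNetV2, random tensors
# standing in for a real representative dataset).
import torch
from torchvision.models import mobilenet_v2
import model_compression_toolkit as mct

model = mobilenet_v2(weights="IMAGENET1K_V1").eval()

def representative_data_gen():
    # Yield a few batches shaped like the deployment inputs; real image
    # batches should be used here for meaningful calibration statistics.
    for _ in range(10):
        yield [torch.randn(1, 3, 224, 224)]

quantized_model, quantization_info = mct.ptq.pytorch_post_training_quantization(
    model, representative_data_gen)
```

The Keras facade in the same table row follows the same shape, taking a Keras model and the same style of data generator.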
@@ -66,9 +66,9 @@ Description: <div align="center" markdown="1">
  For each flow, **Quantization core** utilizes various algorithms and hyper-parameters for optimal [hardware-aware](https://github.com/sony/model_optimization/blob/main/model_compression_toolkit/target_platform_capabilities/README.md) quantization results.
  For further details, please see [Supported features and algorithms](#high-level-features-and-techniques).
 
- Required input:
- - Floating point model - 32bit model in either .pt or .keras format
- - Representative dataset - can be either provided by the user, or generated utilizing the [Data Generation](#data-generation-) capability
+ **Required input**: Floating point model - 32bit model in either .pt or .keras format
+
+ **Optional input**: Representative dataset - can be either provided by the user, or generated utilizing the [Data Generation](#data-generation-) capability
 
  <div align="center">
  <p align="center">
@@ -98,15 +98,16 @@ Description: <div align="center" markdown="1">
  __________________________________________________________________________________________________________
  ### Data-free quantization (Data Generation) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_data_generation.ipynb)
  Generates synthetic images based on the statistics stored in the model's batch normalization layers, according to your specific needs, for when image data isn’t available. See [Data Generation Library](https://github.com/sony/model_optimization/blob/main/model_compression_toolkit/data_generation/README.md) for more.
+ The specifications of the method are detailed in the paper: _"**Data Generation for Hardware-Friendly Post-Training Quantization**"_ [5].
  __________________________________________________________________________________________________________
  ### Structured Pruning [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_pruning_mnist.ipynb)
- Reduces model size/complexity and ensures better channels utilization by removing redundant input channels from layers and reconstruction of layer weights. Read more ([Pytorch API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/pytorch_pruning_experimental.html) / [Keras API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/keras_pruning_experimental.html)).
+ Reduces model size/complexity and ensures better channels utilization by removing redundant input channels from layers and reconstruction of layer weights. Read more ([Pytorch API](https://sony.github.io/model_optimization/api/api_docs/methods/pytorch_pruning_experimental.html) / [Keras API](https://sony.github.io/model_optimization/api/api_docs/methods/keras_pruning_experimental.html)).
  __________________________________________________________________________________________________________
  ### **Debugging and Visualization**
  **🎛️ Network Editor (Modify Quantization Configurations)** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_network_editor.ipynb).
- Modify your model's quantization configuration for specific layers or apply a custom edit rule (e.g adjust layer's bit-width) using MCT’s network editor
+ Modify your model's quantization configuration for specific layers or apply a custom edit rule (e.g adjust layer's bit-width) using MCT’s network editor.
 
- **🖥️ Visualization**. Observe useful information for troubleshooting the quantized model's performance using TensorBoard. [Read more](https://sony.github.io/model_optimization/docs/guidelines/visualization.html).
+ **🖥️ Visualization**. Observe useful information for troubleshooting the quantized model's performance using TensorBoard. [Read more](https://sony.github.io/model_optimization/guidelines/visualization.html).
 
  **🔑 XQuant (Explainable Quantization)** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_xquant.ipynb). Get valuable insights regarding the quality and success of the quantization process of your model. The report includes histograms and similarity metrics between the original float model and the quantized model in key points of the model. The report can be visualized using TensorBoard.
  __________________________________________________________________________________________________________
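To make the structured-pruning entry in the hunk above concrete, here is a hedged sketch of the experimental PyTorch pruning facade it links to. The ResNet-50 model and the 50% weights-memory target (float32, hence 4 bytes per parameter) are illustrative assumptions mirroring the pruning results quoted later in this README, not configuration taken from the diff.

```python
# Structured-pruning sketch (assumptions: torchvision ResNet-50, a 50%
# weights-memory budget, random data standing in for representative images).
import torch
from torchvision.models import resnet50
import model_compression_toolkit as mct

model = resnet50(weights="IMAGENET1K_V1").eval()

def representative_data_gen():
    for _ in range(5):
        yield [torch.randn(1, 3, 224, 224)]

# Budget: half of the dense float32 weights memory, in bytes.
dense_params = sum(p.numel() for p in model.parameters())
target_ru = mct.core.ResourceUtilization(weights_memory=dense_params * 4 * 0.5)

pruned_model, pruning_info = mct.pruning.pytorch_pruning_experimental(
    model=model,
    target_resource_utilization=target_ru,
    representative_data_gen=representative_data_gen)
```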
@@ -116,15 +117,15 @@ Description: <div align="center" markdown="1">
  More details on how to use EPTQ via MCT can be found in the [GPTQ guidelines](https://github.com/sony/model_optimization/blob/main/model_compression_toolkit/gptq/README.md).
 
  ## <div align="center">Resources</div>
- * [User Guide](https://sony.github.io/model_optimization/docs/index.html) contains detailed information about MCT and guides you from installation through optimizing models for your edge AI applications.
+ * [User Guide](https://sony.github.io/model_optimization/index.html) contains detailed information about MCT and guides you from installation through optimizing models for your edge AI applications.
 
- * MCT's [API Docs](https://sony.github.io/model_optimization/docs/api/api_docs/) is seperated per quantization methods:
+ * MCT's [API Docs](https://sony.github.io/model_optimization/api/api_docs/) is separated per quantization methods:
 
- * [Post-training quantization](https://sony.github.io/model_optimization/docs/api/api_docs/index.html#ptq) | PTQ API docs
- * [Gradient-based post-training quantization](https://sony.github.io/model_optimization/docs/api/api_docs/index.html#gptq) | GPTQ API docs
- * [Quantization-aware training](https://sony.github.io/model_optimization/docs/api/api_docs/index.html#qat) | QAT API docs
+ * [Post-training quantization](https://sony.github.io/model_optimization/api/api_docs/index.html#ptq) | PTQ API docs
+ * [Gradient-based post-training quantization](https://sony.github.io/model_optimization/api/api_docs/index.html#gptq) | GPTQ API docs
+ * [Quantization-aware training](https://sony.github.io/model_optimization/api/api_docs/index.html#qat) | QAT API docs
 
- * [Debug](https://sony.github.io/model_optimization/docs/guidelines/visualization.html) – modify optimization process or generate explainable report
+ * [Debug](https://sony.github.io/model_optimization/guidelines/visualization.html) – modify optimization process or generate an explainable report
 
  * [Release notes](https://github.com/sony/model_optimization/releases)
 
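Since the hunk above touches the EPTQ/GPTQ guidelines, a short sketch of the gradient-based flow may help reviewers. The model and the 5-epoch fine-tuning budget are assumptions for illustration; the full set of knobs lives in the GPTQ guidelines linked above.

```python
# GPTQ sketch (assumptions: torchvision MobileNetV2, 5 fine-tuning epochs,
# random tensors standing in for a representative dataset).
import torch
from torchvision.models import mobilenet_v2
import model_compression_toolkit as mct

model = mobilenet_v2(weights="IMAGENET1K_V1").eval()

def representative_data_gen():
    for _ in range(10):
        yield [torch.randn(1, 3, 224, 224)]

# Build a default gradient-based PTQ configuration, then fine-tune the
# quantized parameters against the float model's outputs.
gptq_config = mct.gptq.get_pytorch_gptq_config(n_epochs=5)

quantized_model, quantization_info = mct.gptq.pytorch_gradient_post_training_quantization(
    model, representative_data_gen, gptq_config=gptq_config)
```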
@@ -158,25 +159,15 @@ Description: <div align="center" markdown="1">
  <img src="/docsrc/images/PoseEst.png" width="200">
  <img src="/docsrc/images/ObjDet.png" width="200">
 
- ### Pytorch
- We quantized classification networks from the torchvision library.
- In the following table we present the ImageNet validation results for these models:
-
- | Network Name | Float Accuracy | 8Bit Accuracy | Data-Free 8Bit Accuracy |
- |---------------------------|-----------------|-----------------|-------------------------|
- | MobileNet V2 [3] | 71.886 | 71.444 |71.29|
- | ResNet-18 [3] | 69.86 | 69.63 |69.53|
- | SqueezeNet 1.1 [3] | 58.128 | 57.678 ||
-
- ### Keras
  MCT can quantize an existing 32-bit floating-point model to an 8-bit fixed-point (or less) model without compromising accuracy.
- Below is a graph of [MobileNetV2](https://keras.io/api/applications/mobilenet/) accuracy on ImageNet vs average bit-width of weights (X-axis), using
- single-precision quantization, mixed-precision quantization, and mixed-precision quantization with GPTQ.
+ Below is a graph of [MobileNetV2](https://pytorch.org/vision/main/models/generated/torchvision.models.mobilenet_v2.html) accuracy on ImageNet vs average bit-width of weights (X-axis), using **single-precision** quantization, **mixed-precision** quantization, and mixed-precision quantization with GPTQ.
 
- <img src="https://github.com/sony/model_optimization/raw/main/docsrc/images/mbv2_accuracy_graph.png">
+ <p align="center">
+ <img src="/docsrc/images/torch_mobilenetv2.png" width="800">
 
  For more results, please see [1]
 
+
  ### Pruning Results
 
  Results for applying pruning to reduce the parameters of the following models by 50%:
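The single- versus mixed-precision distinction drawn in the hunk above can be reproduced along the following lines. This is a sketch under stated assumptions (MobileNetV2 and a weights-memory budget of 75% of a pure 8-bit model, which pushes the search to assign lower bit-widths to some layers), not the exact benchmark configuration behind the accuracy graph.

```python
# Mixed-precision PTQ sketch (assumptions: torchvision MobileNetV2 and an
# illustrative weights-memory budget below the all-8-bit baseline).
import torch
from torchvision.models import mobilenet_v2
import model_compression_toolkit as mct

model = mobilenet_v2(weights="IMAGENET1K_V1").eval()

def representative_data_gen():
    for _ in range(10):
        yield [torch.randn(1, 3, 224, 224)]

# Enable the mixed-precision bit-width search via the core configuration.
core_config = mct.core.CoreConfig(
    mixed_precision_config=mct.core.MixedPrecisionQuantizationConfig())

# 8-bit weights take ~1 byte per parameter; request 75% of that in bytes so
# the search must drop some layers below 8 bits.
n_params = sum(p.numel() for p in model.parameters())
target_ru = mct.core.ResourceUtilization(weights_memory=n_params * 0.75)

quantized_model, quantization_info = mct.ptq.pytorch_post_training_quantization(
    model, representative_data_gen,
    target_resource_utilization=target_ru,
    core_config=core_config)
```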
@@ -188,19 +179,20 @@ Description: <div align="center" markdown="1">
 
  ## <div align="center">Troubleshooting and Community</div>
 
- If you encountered large accuracy degradation with MCT, check out the [Quantization Troubleshooting](https://github.com/sony/model_optimization/tree/main/quantization_troubleshooting.md)
- for common pitfalls and some tools to improve quantized model's accuracy.
+ If you encountered a large accuracy degradation with MCT, check out the [Quantization Troubleshooting](https://github.com/sony/model_optimization/tree/main/quantization_troubleshooting.md)
+ for common pitfalls and some tools to improve the quantized model's accuracy.
 
  Check out the [FAQ](https://github.com/sony/model_optimization/tree/main/FAQ.md) for common issues.
 
- You are welcome to ask questions and get support on our [issues section](https://github.com/sony/model_optimization/issues) and manage community discussions under [discussions section](https://github.com/sony/model_optimization/discussions).
+ You are welcome to ask questions and get support on our [issues section](https://github.com/sony/model_optimization/issues) and manage community discussions under the [discussions section](https://github.com/sony/model_optimization/discussions).
 
 
  ## <div align="center">Contributions</div>
- MCT aims at keeping a more up-to-date fork and welcomes contributions from anyone.
+ We'd love your input! MCT would not be possible without help from our community, and welcomes contributions from anyone!
 
  *Checkout our [Contribution guide](https://github.com/sony/model_optimization/blob/main/CONTRIBUTING.md) for more details.
 
+ Thank you 🙏 to all our contributors!
 
  ## <div align="center">License</div>
  MCT is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
@@ -215,7 +207,9 @@ Description: <div align="center" markdown="1">
 
  [3] [TORCHVISION.MODELS](https://pytorch.org/vision/stable/models.html)
 
- [4] Gordon, O., Cohen, E., Habi, H. V., & Netzer, A., 2024. [EPTQ: Enhanced Post-Training Quantization via Hessian-guided Network-wise Optimization. arXiv preprint](https://arxiv.org/abs/2309.11531)
+ [4] Gordon, O., Cohen, E., Habi, H. V., & Netzer, A., 2024. [EPTQ: Enhanced Post-Training Quantization via Hessian-guided Network-wise Optimization, European Conference on Computer Vision Workshop 2024, Computational Aspects of Deep Learning (CADL)](https://arxiv.org/abs/2309.11531)
+
+ [5] Dikstein, L., Lapid, A., Netzer, A., & Habi, H. V., 2024. [Data Generation for Hardware-Friendly Post-Training Quantization, Accepted to IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2025](https://arxiv.org/abs/2410.22110)
 
  Platform: UNKNOWN
  Classifier: Programming Language :: Python :: 3
@@ -50,9 +50,9 @@ MCT supports various quantization methods as appears below.
 
  Quantization Method | Complexity | Computational Cost | API | Tutorial
  -------------------- | -----------|--------------------|---------|--------
- PTQ (Post Training Quantization) | Low | Low (~1-10 CPU minutes) | [PyTorch API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/pytorch_post_training_quantization.html) / [Keras API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/keras_post_training_quantization.html) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_post_training_quantization.ipynb"><img src="https://img.shields.io/badge/Pytorch-green"/></a> <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_post-training_quantization.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
- GPTQ (parameters fine-tuning using gradients) | Moderate | Moderate (~1-3 GPU hours) | [PyTorch API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/pytorch_gradient_post_training_quantization.html) / [Keras API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/keras_gradient_post_training_quantization.html) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_mobilenet_gptq.ipynb"><img src="https://img.shields.io/badge/PyTorch-green"/></a> <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_mobilenet_gptq.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
- QAT (Quantization Aware Training) | High | High (~12-36 GPU hours) | [QAT API](https://sony.github.io/model_optimization/docs/api/api_docs/index.html#qat) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_qat.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
+ PTQ (Post Training Quantization) | Low | Low (~1-10 CPU minutes) | [PyTorch API](https://sony.github.io/model_optimization/api/api_docs/methods/pytorch_post_training_quantization.html) / [Keras API](https://sony.github.io/model_optimization/api/api_docs/methods/keras_post_training_quantization.html) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_post_training_quantization.ipynb"><img src="https://img.shields.io/badge/Pytorch-green"/></a> <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_post-training_quantization.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
+ GPTQ (parameters fine-tuning using gradients) | Moderate | Moderate (~1-3 GPU hours) | [PyTorch API](https://sony.github.io/model_optimization/api/api_docs/methods/pytorch_gradient_post_training_quantization.html) / [Keras API](https://sony.github.io/model_optimization/api/api_docs/methods/keras_gradient_post_training_quantization.html) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_mobilenet_gptq.ipynb"><img src="https://img.shields.io/badge/PyTorch-green"/></a> <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_mobilenet_gptq.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
+ QAT (Quantization Aware Training) | High | High (~12-36 GPU hours) | [QAT API](https://sony.github.io/model_optimization/api/api_docs/index.html#qat) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_qat.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
 
  </p>
  </div>
@@ -60,9 +60,9 @@ QAT (Quantization Aware Training) | High | High (~12-36 GPU hours) | [QAT API](
  For each flow, **Quantization core** utilizes various algorithms and hyper-parameters for optimal [hardware-aware](https://github.com/sony/model_optimization/blob/main/model_compression_toolkit/target_platform_capabilities/README.md) quantization results.
  For further details, please see [Supported features and algorithms](#high-level-features-and-techniques).
 
- Required input:
- - Floating point model - 32bit model in either .pt or .keras format
- - Representative dataset - can be either provided by the user, or generated utilizing the [Data Generation](#data-generation-) capability
+ **Required input**: Floating point model - 32bit model in either .pt or .keras format
+
+ **Optional input**: Representative dataset - can be either provided by the user, or generated utilizing the [Data Generation](#data-generation-) capability
 
  <div align="center">
  <p align="center">
@@ -92,15 +92,16 @@ ________________________________________________________________________________
92
92
  __________________________________________________________________________________________________________
93
93
  ### Data-free quantization (Data Generation) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_data_generation.ipynb)
94
94
  Generates synthetic images based on the statistics stored in the model's batch normalization layers, according to your specific needs, for when image data isn’t available. See [Data Generation Library](https://github.com/sony/model_optimization/blob/main/model_compression_toolkit/data_generation/README.md) for more.
95
+ The specifications of the method are detailed in the paper: _"**Data Generation for Hardware-Friendly Post-Training Quantization**"_ [5].
95
96
  __________________________________________________________________________________________________________
96
97
  ### Structured Pruning [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_pruning_mnist.ipynb)
97
- Reduces model size/complexity and ensures better channels utilization by removing redundant input channels from layers and reconstruction of layer weights. Read more ([Pytorch API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/pytorch_pruning_experimental.html) / [Keras API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/keras_pruning_experimental.html)).
98
+ Reduces model size/complexity and ensures better channels utilization by removing redundant input channels from layers and reconstruction of layer weights. Read more ([Pytorch API](https://sony.github.io/model_optimization/api/api_docs/methods/pytorch_pruning_experimental.html) / [Keras API](https://sony.github.io/model_optimization/api/api_docs/methods/keras_pruning_experimental.html)).
98
99
  __________________________________________________________________________________________________________
99
100
  ### **Debugging and Visualization**
100
101
  **🎛️ Network Editor (Modify Quantization Configurations)** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_network_editor.ipynb).
101
- Modify your model's quantization configuration for specific layers or apply a custom edit rule (e.g adjust layer's bit-width) using MCT’s network editor
102
+ Modify your model's quantization configuration for specific layers or apply a custom edit rule (e.g adjust layer's bit-width) using MCT’s network editor.
 
- **🖥️ Visualization**. Observe useful information for troubleshooting the quantized model's performance using TensorBoard. [Read more](https://sony.github.io/model_optimization/docs/guidelines/visualization.html).
+ **🖥️ Visualization**. Observe useful information for troubleshooting the quantized model's performance using TensorBoard. [Read more](https://sony.github.io/model_optimization/guidelines/visualization.html).
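Enabling this only requires pointing MCT at a log directory before running quantization; a minimal sketch:

```python
import model_compression_toolkit as mct

# Write intermediate graphs and collected statistics as TensorBoard events.
mct.set_log_folder('./mct_logs')

# ... run PTQ/GPTQ/QAT as usual, then inspect the results with:
#   tensorboard --logdir ./mct_logs
```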
 
  **🔑 XQuant (Explainable Quantization)** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_xquant.ipynb). Get valuable insights into the quality and success of your model's quantization process. The report includes histograms and similarity metrics between the original float model and the quantized model at key points of the network. The report can be visualized using TensorBoard.
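A sketch of generating such a report, reusing the models and dataset from the sketches above (the function and config names follow the experimental XQuant API and are assumptions to be checked against the notebook above):

```python
import model_compression_toolkit as mct

# Assumed experimental API, for illustration only.
xquant_config = mct.xquant.XQuantConfig(report_dir='./xquant_report')
result = mct.xquant.xquant_report_pytorch_experimental(
    float_model,               # original 32-bit model
    quantized_model,           # output of PTQ/GPTQ
    representative_data_gen,   # dataset used for similarity metrics
    representative_data_gen,   # validation dataset (reused here for brevity)
    xquant_config)
# The resulting report (histograms, similarity metrics) opens in TensorBoard.
```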
  __________________________________________________________________________________________________________
@@ -110,15 +111,15 @@ The specifications of the algorithm are detailed in the paper: _"**EPTQ: Enhance
  More details on how to use EPTQ via MCT can be found in the [GPTQ guidelines](https://github.com/sony/model_optimization/blob/main/model_compression_toolkit/gptq/README.md).
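In practice, EPTQ is reached through the GPTQ API, reusing `float_model` and `representative_data_gen` from the PTQ sketch above; a minimal PyTorch sketch (the epoch count is an arbitrary placeholder):

```python
import model_compression_toolkit as mct

# Gradient-based PTQ; EPTQ is the optimization scheme applied under the hood.
gptq_config = mct.gptq.get_pytorch_gptq_config(n_epochs=50)

quantized_model, quantization_info = mct.gptq.pytorch_gradient_post_training_quantization(
    float_model, representative_data_gen, gptq_config=gptq_config)
```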
 
  ## <div align="center">Resources</div>
- * [User Guide](https://sony.github.io/model_optimization/docs/index.html) contains detailed information about MCT and guides you from installation through optimizing models for your edge AI applications.
+ * [User Guide](https://sony.github.io/model_optimization/index.html) contains detailed information about MCT and guides you from installation through optimizing models for your edge AI applications.
 
- * MCT's [API Docs](https://sony.github.io/model_optimization/docs/api/api_docs/) is seperated per quantization methods:
+ * MCT's [API Docs](https://sony.github.io/model_optimization/api/api_docs/) are separated per quantization method:
 
- * [Post-training quantization](https://sony.github.io/model_optimization/docs/api/api_docs/index.html#ptq) | PTQ API docs
- * [Gradient-based post-training quantization](https://sony.github.io/model_optimization/docs/api/api_docs/index.html#gptq) | GPTQ API docs
- * [Quantization-aware training](https://sony.github.io/model_optimization/docs/api/api_docs/index.html#qat) | QAT API docs
+ * [Post-training quantization](https://sony.github.io/model_optimization/api/api_docs/index.html#ptq) | PTQ API docs
+ * [Gradient-based post-training quantization](https://sony.github.io/model_optimization/api/api_docs/index.html#gptq) | GPTQ API docs
+ * [Quantization-aware training](https://sony.github.io/model_optimization/api/api_docs/index.html#qat) | QAT API docs
 
- * [Debug](https://sony.github.io/model_optimization/docs/guidelines/visualization.html) – modify optimization process or generate explainable report
+ * [Debug](https://sony.github.io/model_optimization/guidelines/visualization.html) – modify the optimization process or generate an explainable report
 
  * [Release notes](https://github.com/sony/model_optimization/releases)
 
@@ -152,25 +153,15 @@ Currently, MCT is being tested on various Python, Pytorch and TensorFlow version
  <img src="/docsrc/images/PoseEst.png" width="200">
  <img src="/docsrc/images/ObjDet.png" width="200">
 
- ### Pytorch
- We quantized classification networks from the torchvision library.
- In the following table we present the ImageNet validation results for these models:
-
- | Network Name | Float Accuracy | 8Bit Accuracy | Data-Free 8Bit Accuracy |
- |---------------------------|-----------------|-----------------|-------------------------|
- | MobileNet V2 [3] | 71.886 | 71.444 |71.29|
- | ResNet-18 [3] | 69.86 | 69.63 |69.53|
- | SqueezeNet 1.1 [3] | 58.128 | 57.678 ||
-
- ### Keras
  MCT can quantize an existing 32-bit floating-point model to an 8-bit fixed-point (or less) model without compromising accuracy.
- Below is a graph of [MobileNetV2](https://keras.io/api/applications/mobilenet/) accuracy on ImageNet vs average bit-width of weights (X-axis), using
- single-precision quantization, mixed-precision quantization, and mixed-precision quantization with GPTQ.
+ Below is a graph of [MobileNetV2](https://pytorch.org/vision/main/models/generated/torchvision.models.mobilenet_v2.html) accuracy on ImageNet vs. the average bit-width of weights (x-axis), using **single-precision** quantization, **mixed-precision** quantization, and mixed-precision quantization with GPTQ.
 
- <img src="https://github.com/sony/model_optimization/raw/main/docsrc/images/mbv2_accuracy_graph.png">
+ <p align="center">
+ <img src="/docsrc/images/torch_mobilenetv2.png" width="800">
 
  For more results, please see [1].
 
+
  ### Pruning Results
 
  Results for applying pruning to reduce the parameters of the following models by 50%:
@@ -182,19 +173,20 @@ Results for applying pruning to reduce the parameters of the following models by
 
  ## <div align="center">Troubleshooting and Community</div>
 
- If you encountered large accuracy degradation with MCT, check out the [Quantization Troubleshooting](https://github.com/sony/model_optimization/tree/main/quantization_troubleshooting.md)
- for common pitfalls and some tools to improve quantized model's accuracy.
+ If you encounter a large accuracy degradation with MCT, check out the [Quantization Troubleshooting](https://github.com/sony/model_optimization/tree/main/quantization_troubleshooting.md)
+ for common pitfalls and some tools to improve the quantized model's accuracy.
 
  Check out the [FAQ](https://github.com/sony/model_optimization/tree/main/FAQ.md) for common issues.
 
- You are welcome to ask questions and get support on our [issues section](https://github.com/sony/model_optimization/issues) and manage community discussions under [discussions section](https://github.com/sony/model_optimization/discussions).
+ You are welcome to ask questions and get support in our [issues section](https://github.com/sony/model_optimization/issues) and to join community discussions in the [discussions section](https://github.com/sony/model_optimization/discussions).
 
 
  ## <div align="center">Contributions</div>
- MCT aims at keeping a more up-to-date fork and welcomes contributions from anyone.
+ We'd love your input! MCT would not be possible without help from our community, and we welcome contributions from anyone!
 
  Check out our [Contribution guide](https://github.com/sony/model_optimization/blob/main/CONTRIBUTING.md) for more details.
 
+ Thank you 🙏 to all our contributors!
 
  ## <div align="center">License</div>
  MCT is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
@@ -209,4 +201,6 @@ MCT is licensed under Apache License Version 2.0. By contributing to the project
 
  [3] [TORCHVISION.MODELS](https://pytorch.org/vision/stable/models.html)
 
- [4] Gordon, O., Cohen, E., Habi, H. V., & Netzer, A., 2024. [EPTQ: Enhanced Post-Training Quantization via Hessian-guided Network-wise Optimization. arXiv preprint](https://arxiv.org/abs/2309.11531)
+ [4] Gordon, O., Cohen, E., Habi, H. V., & Netzer, A., 2024. [EPTQ: Enhanced Post-Training Quantization via Hessian-guided Network-wise Optimization, European Conference on Computer Vision Workshop 2024, Computational Aspects of Deep Learning (CADL)](https://arxiv.org/abs/2309.11531)
+
+ [5] Dikstein, L., Lapid, A., Netzer, A., & Habi, H. V., 2024. [Data Generation for Hardware-Friendly Post-Training Quantization, Accepted to IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2025](https://arxiv.org/abs/2410.22110)
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: mct-nightly
- Version: 2.2.0.20241126.528
+ Version: 2.2.0.20241128.546
  Summary: A Model Compression Toolkit for neural networks
  Home-page: UNKNOWN
  License: UNKNOWN
 
  Platform: UNKNOWN
  Classifier: Programming Language :: Python :: 3
@@ -27,4 +27,4 @@ from model_compression_toolkit import data_generation
  from model_compression_toolkit import pruning
  from model_compression_toolkit.trainable_infrastructure.keras.load_model import keras_load_quantized_model
 
- __version__ = "2.2.0.20241126.000528"
+ __version__ = "2.2.0.20241128.000546"