compressed-tensors 0.10.3a20250811__tar.gz → 0.10.3a20250814__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {compressed_tensors-0.10.3a20250811/src/compressed_tensors.egg-info → compressed_tensors-0.10.3a20250814}/PKG-INFO +1 -1
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/model_compressors/model_compressor.py +97 -28
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/config/base.py +1 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/quantization/quant_config.py +6 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/quantization/quant_scheme.py +3 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/utils/offload.py +15 -1
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/version.py +1 -1
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814/src/compressed_tensors.egg-info}/PKG-INFO +1 -1
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_compressors/model_compressors/test_model_compressor.py +46 -5
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/test_quant_scheme.py +5 -1
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/.github/.gitkeep +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/.github/actions/test/action.yml +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/.github/scripts/step-status +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/.github/workflows/build-test.yml +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/.github/workflows/build.yml +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/.github/workflows/report.yml +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/.github/workflows/test-check.yaml +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/.github/workflows/test.yml +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/.github/workflows/trigger-all.yml +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/.github/workflows/upload.yml +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/.gitignore +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/LICENSE +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/Makefile +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/README.md +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/examples/bit_packing/ex_quantize_and_pack.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/examples/bit_packing/int4_config.json +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/examples/bitmask_compression.ipynb +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/examples/llama_1.1b/ex_config_quantization.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/examples/llama_1.1b/ex_llmcompressor_quantization.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/examples/llama_1.1b/example_quant_config.json +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/examples/llama_1.1b/example_quant_recipe.yaml +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/examples/quantize_and_pack_int4.ipynb +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/pyproject.toml +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/setup.cfg +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/setup.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/README.md +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/base.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/base.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/helpers.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/model_compressors/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/quantized_compressors/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/quantized_compressors/base.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/quantized_compressors/naive_quantized.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/quantized_compressors/nvfp4_quantized.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/quantized_compressors/pack_quantized.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/sparse_compressors/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/sparse_compressors/base.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/sparse_compressors/dense.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/sparse_compressors/sparse_24_bitmask.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/sparse_compressors/sparse_bitmask.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/sparse_quantized_compressors/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/compressors/sparse_quantized_compressors/marlin_24.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/config/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/config/dense.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/config/sparse_24_bitmask.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/config/sparse_bitmask.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/linear/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/linear/compressed_linear.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/quantization/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/quantization/lifecycle/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/quantization/lifecycle/apply.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/quantization/lifecycle/compressed.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/quantization/lifecycle/forward.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/quantization/lifecycle/helpers.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/quantization/lifecycle/initialize.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/quantization/quant_args.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/quantization/utils/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/quantization/utils/helpers.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/registry/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/registry/registry.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/transform/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/transform/apply.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/transform/factory/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/transform/factory/base.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/transform/factory/hadamard.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/transform/factory/matrix_multiply.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/transform/factory/random_hadamard.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/transform/transform_args.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/transform/transform_config.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/transform/transform_scheme.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/transform/utils/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/transform/utils/hadamard.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/transform/utils/hadamards.safetensors +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/transform/utils/matrix.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/utils/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/utils/helpers.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/utils/internal.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/utils/match.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/utils/permutations_24.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/utils/permute.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/utils/safetensors_load.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/utils/semi_structured_conversions.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors/utils/type.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors.egg-info/SOURCES.txt +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors.egg-info/dependency_links.txt +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors.egg-info/requires.txt +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/src/compressed_tensors.egg-info/top_level.txt +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/conftest.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_compressors/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_compressors/model_compressors/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_compressors/quantized_compressors/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_compressors/quantized_compressors/test_fp8_quant.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_compressors/quantized_compressors/test_int_quant.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_compressors/quantized_compressors/test_nvfp4_quant.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_compressors/quantized_compressors/test_pack_quant.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_compressors/sparse_compressors/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_compressors/sparse_compressors/test_bitmask.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_compressors/sparse_compressors/test_sparse_24_bitmask.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_compressors/sparse_quantized_compressors/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_compressors/sparse_quantized_compressors/test_marlin_24.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_configs/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_configs/test_base.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_examples/test_bitmask_compression_ipynb.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_linear/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_linear/test_compressed_linear.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/lifecycle/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/lifecycle/conftest.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/lifecycle/test_apply.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/lifecycle/test_dynamic_lifecycle.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/lifecycle/test_enabled.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/lifecycle/test_forward.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/lifecycle/test_helpers.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/lifecycle/test_initialize.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/lifecycle/test_lifecycle.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/test_configs/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/test_configs/test_bit_depths.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/test_configs/test_strategies.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/test_quant_args.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/test_quant_config.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_quantization/test_utils/test_helpers.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_registry.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_transform/conftest.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_transform/factory/test_correctness.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_transform/factory/test_memory.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_transform/factory/test_serialization.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_transform/test_transform_args.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_transform/test_transform_config.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_transform/test_transform_scheme.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_transform/utils/test_hadamard.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_utils/__init__.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_utils/test_helpers.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_utils/test_match.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_utils/test_offload.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_utils/test_safetensors_load.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_utils/test_type.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/testing_utils.py +0 -0
- {compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/utils/copyright.py +0 -0
--- compressed_tensors-0.10.3a20250811/src/compressed_tensors.egg-info/PKG-INFO
+++ compressed_tensors-0.10.3a20250814/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: compressed-tensors
-Version: 0.10.3a20250811
+Version: 0.10.3a20250814
 Summary: Library for utilization of compressed safetensors of neural network models
 Home-page: https://github.com/neuralmagic/compressed-tensors
 Author: Neuralmagic, Inc.
--- compressed_tensors-0.10.3a20250811/src/compressed_tensors/compressors/model_compressors/model_compressor.py
+++ compressed_tensors-0.10.3a20250814/src/compressed_tensors/compressors/model_compressors/model_compressor.py
@@ -169,7 +169,7 @@ class ModelCompressor:
         cls,
         model: Module,
         sparsity_config: Union[SparsityCompressionConfig, str, None] = None,
-        quantization_format: Optional[str] = None,
+        quantization_format: Optional[Union[str, List[str]]] = None,
     ) -> Optional["ModelCompressor"]:
         """
         Given a pytorch model and optional sparsity and/or quantization configs,
@@ -182,7 +182,6 @@ class ModelCompressor:
             algorithm
         :return: compressor for the configs, or None if model is not compressed
         """
-        # reconstruct config from schemes attached to modules
         quantization_config = QuantizationConfig.from_pretrained(
             model, format=quantization_format
         )
@@ -203,11 +202,14 @@ class ModelCompressor:
             sparsity_config=sparsity_config,
             quantization_config=quantization_config,
             transform_config=transform_config,
+            compression_formats=[quantization_format]
+            if isinstance(quantization_format, str)
+            else quantization_format,
         )

     @staticmethod
     def parse_sparsity_config(
-        compression_config: Union[Dict[str, Any], "CompressedTensorsConfig"]
+        compression_config: Union[Dict[str, Any], "CompressedTensorsConfig"],
     ) -> Union[Dict[str, Any], None]:
         """
         Parse sparsity config from quantization/compression config. Sparsity
@@ -227,7 +229,7 @@ class ModelCompressor:

     @staticmethod
     def parse_quantization_config(
-        compression_config: Union[Dict[str, Any], "CompressedTensorsConfig"]
+        compression_config: Union[Dict[str, Any], "CompressedTensorsConfig"],
     ) -> Union[Dict[str, Any], None]:
         """
         Parse quantization config from quantization/compression config. The
@@ -246,6 +248,7 @@ class ModelCompressor:

         quantization_config = deepcopy(compression_config)
         quantization_config.pop(SPARSITY_CONFIG_NAME, None)
+        quantization_config.pop(TRANSFORM_CONFIG_NAME, None)

         # some fields are required, even if a qconfig is not present
         # pop them off and if nothing remains, then there is no qconfig
@@ -262,19 +265,39 @@ class ModelCompressor:

         return quantization_config

+    def _fetch_unique_quantization_formats(self) -> List[str]:
+        """
+        Get all unique compression formats present in a model.
+        :return: list of quantization formats
+        """
+        quantization_formats = []
+        for _, scheme in self.quantization_config.config_groups.items():
+            if scheme.format is not None and scheme.format not in quantization_formats:
+                quantization_formats.append(scheme.format)
+
+        if (
+            len(quantization_formats) == 0
+            and self.quantization_config.format
+            != CompressionFormat.mixed_precision.value
+        ):
+            quantization_formats.append(self.quantization_config.format)
+        return quantization_formats
+
     def __init__(
         self,
         sparsity_config: Optional[SparsityCompressionConfig] = None,
         quantization_config: Optional[QuantizationConfig] = None,
         transform_config: Optional[TransformConfig] = None,
+        compression_formats: Optional[List[str]] = None,
     ):
         self.sparsity_config = sparsity_config
         self.quantization_config = quantization_config
         self.transform_config = transform_config
+        self.compression_formats = compression_formats

         self.sparsity_compressor = None
         self.quantization_compressor: Optional[
-            Union[BaseQuantizationCompressor, DenseCompressor]
+            Dict[str, Union[BaseQuantizationCompressor, DenseCompressor]]
         ] = None
         # no transform compressor is required

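For orientation, the `_fetch_unique_quantization_formats` resolution added above can be sketched standalone. The `Scheme` dataclass and the format strings below are illustrative stand-ins, not the library's `QuantizationScheme` or `CompressionFormat` types:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# stand-in for CompressionFormat.mixed_precision.value
MIXED_PRECISION = "mixed-precision"


@dataclass
class Scheme:
    # stand-in for QuantizationScheme; only the new `format` field matters here
    format: Optional[str] = None


def fetch_unique_formats(config_groups: Dict[str, Scheme], global_format: str) -> List[str]:
    """Collect unique per-scheme formats, else fall back to the global format."""
    formats: List[str] = []
    for scheme in config_groups.values():
        if scheme.format is not None and scheme.format not in formats:
            formats.append(scheme.format)
    # no per-scheme formats: use the global one, unless it is the mixed marker
    if not formats and global_format != MIXED_PRECISION:
        formats.append(global_format)
    return formats


groups = {
    "group_0": Scheme(format="pack-quantized"),
    "group_1": Scheme(format="float-quantized"),
    "group_2": Scheme(format="pack-quantized"),  # duplicate, deduplicated
}
print(fetch_unique_formats(groups, "dense"))          # ['pack-quantized', 'float-quantized']
print(fetch_unique_formats({"g": Scheme()}, "dense"))  # ['dense']
```

Per-scheme formats win; the global format is only used when no scheme declares one.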
@@ -282,10 +305,21 @@ class ModelCompressor:
             self.sparsity_compressor = BaseCompressor.load_from_registry(
                 sparsity_config.format, config=sparsity_config
             )
+
         if quantization_config is not None:
-            self.quantization_compressor = BaseCompressor.load_from_registry(
-                quantization_config.format, config=quantization_config
-            )
+            # If a list of compression_format is not provided, we resolve the
+            # relevant quantization formats using the config groups from the config
+            # and if those are not defined, we fall-back to the global quantization format
+            if not self.compression_formats:
+                self.compression_formats = self._fetch_unique_quantization_formats()
+
+            self.quantization_compressor = {}
+            for format in self.compression_formats:
+                self.quantization_compressor[
+                    format
+                ] = BaseCompressor.load_from_registry(
+                    format, config=quantization_config
+                )

         # ----- used by hf quantizer ----- #

@@ -380,12 +414,13 @@ class ModelCompressor:
                 targets=scheme.targets,
                 ignore=self.quantization_config.ignore,
             )
-            unexpected_keys.update(
-                merge_names(target, param)
-                for target in quant_targets
-                for param in self.quantization_compressor.compression_param_names
-                if param != "weight"
-            )
+            for quant_compressor in self.quantization_compressor.values():
+                unexpected_keys.update(
+                    merge_names(target, param)
+                    for target in quant_targets
+                    for param in quant_compressor.compression_param_names
+                    if param != "weight"
+                )

         return list(unexpected_keys)

@@ -423,7 +458,21 @@ class ModelCompressor:

         # quantization first
         if prefix in module_to_scheme:
-            state_dict = self.quantization_compressor.compress(
+            if (
+                not hasattr(module.quantization_scheme, "format")
+                or module.quantization_scheme.format is None
+            ):
+                if len(self.compression_formats) > 1:
+                    raise ValueError(
+                        "Applying multiple compressors without defining "
+                        "per module formats is not supported "
+                    )
+                format = self.compression_formats[0]
+            else:
+                format = module.quantization_scheme.format
+
+            quant_compressor = self.quantization_compressor.get(format)
+            state_dict = quant_compressor.compress(
                 state_dict,
                 names_to_scheme=module_to_scheme,
                 show_progress=False,
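The per-module format selection shown in the hunk above follows a simple rule: use the module scheme's `format` when set, otherwise fall back to the single configured compression format, and refuse ambiguous multi-format setups. A standalone sketch (function name and format strings are illustrative):

```python
from typing import List, Optional


def resolve_module_format(
    scheme_format: Optional[str], compression_formats: List[str]
) -> str:
    """Pick the compression format for one module."""
    if scheme_format is None:
        # no per-module format: only unambiguous if exactly one global format exists
        if len(compression_formats) > 1:
            raise ValueError(
                "Applying multiple compressors without defining "
                "per module formats is not supported"
            )
        return compression_formats[0]
    # a per-module format always takes precedence
    return scheme_format


print(resolve_module_format(None, ["pack-quantized"]))  # pack-quantized
print(resolve_module_format("nvfp4-pack-quantized", ["pack-quantized", "nvfp4-pack-quantized"]))  # nvfp4-pack-quantized
```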
@@ -494,12 +543,24 @@ class ModelCompressor:

         # quantization second
         if prefix in module_to_scheme:
-            state_dict = (
-                self.quantization_compressor.decompress_module_from_state_dict(
-                    prefix,
-                    state_dict,
-                    scheme=module_to_scheme[prefix],
-                )
+
+            if (
+                not hasattr(module.quantization_scheme, "format")
+                or module.quantization_scheme.format is None
+            ):
+                if len(self.compression_formats) > 1:
+                    raise ValueError(
+                        "Applying multiple compressors without defining "
+                        "per module formats is not supported "
+                    )
+                format = self.compression_formats[0]
+            else:
+                format = module.quantization_scheme.format
+            quant_compressor = self.quantization_compressor.get(format)
+            state_dict = quant_compressor.decompress_module_from_state_dict(
+                prefix,
+                state_dict,
+                scheme=module_to_scheme[prefix],
             )

         # remove any existing parameters
@@ -538,7 +599,9 @@ class ModelCompressor:

         if self.quantization_compressor is not None:
             module_to_scheme = map_module_to_scheme(model)
-            state_dict = self.quantization_compressor.compress(
+            # Note - compress only supports one compression format atm
+            quant_compressor = next(iter(self.quantization_compressor.values()))
+            state_dict = quant_compressor.compress(
                 state_dict,
                 names_to_scheme=module_to_scheme,
                 show_progress=show_progress,
@@ -587,14 +650,20 @@ class ModelCompressor:
         """
         model_path = get_safetensors_folder(model_path)
         sparse_decompressed = False
+        quant_compressor = (
+            next(iter(self.quantization_compressor.values()))
+            if self.quantization_compressor is not None
+            else None
+        )

         if (
             self.sparsity_compressor is not None
             and self.sparsity_config.format != CompressionFormat.dense.value
         ):
+            # note - decompress only supports one compressor atm
             params_to_ignore = None
-            if self.quantization_compressor is not None:
-                params_to_ignore = self.quantization_compressor.compression_param_names
+            if quant_compressor is not None:
+                params_to_ignore = quant_compressor.compression_param_names
             # Sparse decompression is applied on the model_path
             # The compressor will try and load any quantization parameters as well
             # params_to_skip_load will skip over quantization params from being loaded
@@ -605,7 +674,7 @@ class ModelCompressor:
             setattr(model, SPARSITY_CONFIG_NAME, self.sparsity_compressor.config)
             sparse_decompressed = True

-        if self.quantization_compressor is not None:
+        if quant_compressor is not None:
             # Temporarily set quantization status to FROZEN to prevent
             # quantization during apply_quantization_config. This ensures
             # that the dtypes of the weights are not unintentionally updated.
@@ -628,7 +697,7 @@ class ModelCompressor:
                 # including initialization
                 load_weight_quantization=(
                     sparse_decompressed
-                    or isinstance(self.quantization_compressor, DenseCompressor)
+                    or isinstance(quant_compressor, DenseCompressor)
                 ),
             )

@@ -636,7 +705,7 @@ class ModelCompressor:
                 model.state_dict() if sparse_decompressed else model_path
             )

-            dense_gen = self.quantization_compressor.decompress(
+            dense_gen = quant_compressor.decompress(
                 model_path_or_state_dict, names_to_scheme=names_to_scheme
             )
             # TODO: all weight quantization params will be moved to the compressor
@@ -674,7 +743,7 @@ class ModelCompressor:

         # serialize configs into json
         qconfig_data = (
-            self.quantization_config.model_dump(exclude=["quant_method", "format"])
+            self.quantization_config.model_dump(exclude=["quant_method"])
             if self.quantization_config is not None
             else {}
         )
--- compressed_tensors-0.10.3a20250811/src/compressed_tensors/quantization/quant_config.py
+++ compressed_tensors-0.10.3a20250814/src/compressed_tensors/quantization/quant_config.py
@@ -234,6 +234,12 @@ class QuantizationConfig(BaseModel):
                 format = CompressionFormat.int_quantized.value
             else:
                 format = CompressionFormat.dense.value
+        elif isinstance(format, list):
+            format = (
+                CompressionFormat.mixed_precision.value
+                if len(format) > 1
+                else format[0]
+            )

         return QuantizationConfig(
             config_groups=config_groups,
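The list-handling branch added above collapses a list of formats to a single value: one distinct format passes through, while more than one becomes the mixed-precision marker. A minimal sketch of that rule (function name and strings are illustrative, not the library's API):

```python
from typing import List, Union

# stand-in for CompressionFormat.mixed_precision.value
MIXED_PRECISION = "mixed-precision"


def resolve_global_format(fmt: Union[str, List[str]]) -> str:
    """Collapse a format or list of formats into one global format string."""
    if isinstance(fmt, list):
        return MIXED_PRECISION if len(fmt) > 1 else fmt[0]
    return fmt


print(resolve_global_format("pack-quantized"))                       # pack-quantized
print(resolve_global_format(["pack-quantized"]))                     # pack-quantized
print(resolve_global_format(["pack-quantized", "float-quantized"]))  # mixed-precision
```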
--- compressed_tensors-0.10.3a20250811/src/compressed_tensors/quantization/quant_scheme.py
+++ compressed_tensors-0.10.3a20250814/src/compressed_tensors/quantization/quant_scheme.py
@@ -16,6 +16,7 @@ import warnings
 from copy import deepcopy
 from typing import List, Optional

+from compressed_tensors.config import CompressionFormat
 from compressed_tensors.quantization.quant_args import (
     DynamicType,
     QuantizationArgs,
@@ -42,12 +43,14 @@ class QuantizationScheme(BaseModel):
     :param weights: quantization config for layer weights
     :param input_activations: quantization config for layer inputs
     :param output_activations: quantization config for layer outputs
+    :param format: CompressionFormat for the layer
     """

     targets: List[str]
     weights: Optional[QuantizationArgs] = None
     input_activations: Optional[QuantizationArgs] = None
     output_activations: Optional[QuantizationArgs] = None
+    format: Optional[str] = None

     @model_validator(mode="after")
     def validate_model_after(model: "QuantizationScheme") -> "QuantizationScheme":
@@ -86,6 +86,7 @@ __all__ = [
     "offloaded_dispatch",
     "disable_offloading",
     "remove_dispatch",
+    "cast_to_device",
 ]
 
 
@@ -169,6 +170,19 @@ def update_parameter_data(
 """ Candidates for Upstreaming """
 
 
+def cast_to_device(device_spec: Union[int, torch.device]) -> torch.device:
+    """
+    Convert an integer device index or torch.device into a torch.device object.
+
+    :param device_spec: Device index (int) or torch.device object.
+        Negative integers map to CPU.
+    :return: torch.device corresponding to the given device specification.
+    """
+    if isinstance(device_spec, int):
+        return torch.device(f"cuda:{device_spec}" if device_spec >= 0 else "cpu")
+    return device_spec
+
+
 def get_execution_device(module: torch.nn.Module) -> torch.device:
     """
     Get the device which inputs should be moved to before module execution.
@@ -179,7 +193,7 @@ def get_execution_device(module: torch.nn.Module) -> torch.device:
     """
     for submodule in module.modules():
         if has_offloaded_params(submodule):
-            return submodule._hf_hook.execution_device
+            return cast_to_device(submodule._hf_hook.execution_device)
 
     param = next(submodule.parameters(recurse=False), None)
     if param is not None:
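The mapping rule of the new helper can be exercised without torch. This sketch returns device strings in place of `torch.device` objects so it stays dependency-free; the int-to-device rule is the one in the diff:

```python
# Torch-free sketch of cast_to_device: integers map to device strings the
# same way the helper above maps them to torch.device objects.
from typing import Union


def cast_to_device_str(device_spec: Union[int, str]) -> str:
    """Negative ints -> "cpu"; non-negative ints -> "cuda:<idx>"; else pass through."""
    if isinstance(device_spec, int):
        return f"cuda:{device_spec}" if device_spec >= 0 else "cpu"
    return device_spec


print(cast_to_device_str(-1))    # cpu
print(cast_to_device_str(1))     # cuda:1
print(cast_to_device_str("cpu")) # cpu
```

This normalization matters because accelerate hooks may store `execution_device` as a bare integer index, while callers of `get_execution_device` expect a `torch.device`.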
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: compressed-tensors
-Version: 0.10.3a20250811
+Version: 0.10.3a20250814
 Summary: Library for utilization of compressed safetensors of neural network models
 Home-page: https://github.com/neuralmagic/compressed-tensors
 Author: Neuralmagic, Inc.
@@ -20,8 +20,12 @@ import pytest
 import torch
 import torch.nn as nn
 from compressed_tensors.compressors import ModelCompressor
-from compressed_tensors.config import SparsityCompressionConfig
-from compressed_tensors.quantization import
+from compressed_tensors.config import CompressionFormat, SparsityCompressionConfig
+from compressed_tensors.quantization import (
+    QuantizationArgs,
+    QuantizationConfig,
+    QuantizationScheme,
+)
 from safetensors.torch import save_file
 from tests.testing_utils import induce_sparsity, requires_hf_quantizer
 from transformers import AutoModelForCausalLM
@@ -395,7 +399,7 @@ def _get_combined_config(s_config, q_config):
 )
 def test_compress_model(model_stub, q_format, s_config, tmpdir):
     model = AutoModelForCausalLM.from_pretrained(model_stub, torch_dtype=torch.float32)
-    compressor = ModelCompressor.from_pretrained_model(model, s_config, q_format)
+    compressor = ModelCompressor.from_pretrained_model(model, s_config, [q_format])
 
     # compress model by eagerly compressing state dict
     true_compressed = dict(compressor.compress(model))
@@ -443,7 +447,7 @@ def test_compress_model_meta(model_stub, q_format, s_config):
         model_stub, torch_dtype=torch.float32
     )
     reference_compressor = ModelCompressor.from_pretrained_model(
-        cpu_model, s_config, q_format
+        cpu_model, s_config, [q_format]
     )
     # Only stores dtype because meta model does not store values
     expected = {k: v.dtype for k, v in reference_compressor.compress(cpu_model).items()}
@@ -459,7 +463,7 @@ def test_compress_model_meta(model_stub, q_format, s_config):
         module.to_empty(device="meta")
 
     # Compress in-place on meta model
-    compressor = ModelCompressor.from_pretrained_model(meta_model, s_config, q_format)
+    compressor = ModelCompressor.from_pretrained_model(meta_model, s_config, [q_format])
     compressor.compress_model(meta_model)
 
     # Compare keys and dtypes
@@ -469,6 +473,43 @@ def test_compress_model_meta(model_stub, q_format, s_config):
         assert compressed[key].dtype == dtype, f"{key} has incorrect dtype"
 
 
+def test_multiple_quant_compressors():
+    model = torch.nn.Sequential(torch.nn.Linear(1, 2), torch.nn.Linear(2, 3))
+    input_activations = QuantizationArgs(num_bits=8, type="float")
+    weights = QuantizationArgs(num_bits=8, type="float")
+
+    scheme_fp8 = QuantizationScheme(
+        targets=["Linear"],
+        weights=weights,
+        input_activations=input_activations,
+        format=CompressionFormat.float_quantized.value,
+    )
+
+    input_activations = QuantizationArgs(num_bits=4, type="float")
+    weights = QuantizationArgs(num_bits=4, type="float")
+
+    scheme_nvfp4 = QuantizationScheme(
+        targets=["Linear"],
+        weights=weights,
+        input_activations=input_activations,
+        format=CompressionFormat.nvfp4_pack_quantized.value,
+    )
+
+    model[0].quantization_scheme = scheme_fp8
+    model[0].quantization_status = "frozen"
+    model[1].quantization_scheme = scheme_nvfp4
+    model[1].quantization_status = "frozen"
+
+    formats = [scheme_fp8.format, scheme_nvfp4.format]
+
+    compressor = ModelCompressor.from_pretrained_model(model, None, formats)
+    assert isinstance(compressor.quantization_compressor, dict)
+    assert (
+        compressor.quantization_config.format == CompressionFormat.mixed_precision.value
+    )
+    assert all(format in compressor.quantization_compressor for format in formats)
+
+
 @pytest.mark.parametrize(
     "model_stub,comp_stub",
     [
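The dict shape this test asserts (`quantization_compressor` keyed by format) can be sketched without the library. The names below are illustrative, not the library's internals:

```python
# Illustrative sketch of a dict-valued quantization_compressor: one
# compressor per format, looked up by a scheme's format string.
# The compressor placeholders are assumptions, not real library objects.
formats = ["float-quantized", "nvfp4-pack-quantized"]
compressors = {fmt: f"<{fmt} compressor>" for fmt in formats}


def compressor_for(scheme_format: str) -> str:
    """Route a layer to the compressor matching its scheme's format."""
    return compressors[scheme_format]


print(compressor_for("nvfp4-pack-quantized"))  # <nvfp4-pack-quantized compressor>
```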
@@ -26,12 +26,13 @@ def test_basic_scheme():
     assert scheme.weights == weights
     assert scheme.input_activations is None
     assert scheme.output_activations is None
+    assert scheme.format is None
 
 
 def test_full_scheme():
     targets = ["Linear"]
     weights = QuantizationArgs()
-    input_activations = QuantizationArgs(num_bits=
+    input_activations = QuantizationArgs(num_bits=8)
     output_activations = QuantizationArgs(num_bits=8, type="float", symmetric=False)
 
     scheme = QuantizationScheme(
@@ -39,11 +40,13 @@ def test_full_scheme():
         weights=weights,
         input_activations=input_activations,
         output_activations=output_activations,
+        format="float-quantized",
     )
     assert scheme.targets == targets
     assert scheme.weights == weights
     assert scheme.input_activations == input_activations
     assert scheme.output_activations == output_activations
+    assert scheme.format is "float-quantized"
 
 
 def test_needs_targets():
@@ -57,3 +60,4 @@ def test_defaults():
     assert output.weights is None
     assert output.input_activations is None
     assert output.output_activations is None
+    assert output.format is None
{compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/.github/workflows/test.yml
RENAMED
File without changes
{compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/test_registry.py
RENAMED
File without changes
{compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/tests/testing_utils.py
RENAMED
File without changes
{compressed_tensors-0.10.3a20250811 → compressed_tensors-0.10.3a20250814}/utils/copyright.py
RENAMED
File without changes