mmgp 3.0.0__py3-none-any.whl → 3.0.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.


@@ -1,2 +1,2 @@
- GNU GENERAL PUBLIC LICENSE
+ GNU GENERAL PUBLIC LICENSE
  Version 3, 29 June 2007
@@ -1,155 +1,157 @@
- Metadata-Version: 2.1
- Name: mmgp
- Version: 3.0.0
- Summary: Memory Management for the GPU Poor
- Author-email: deepbeepmeep <deepbeepmeep@yahoo.com>
- License: GNU GENERAL PUBLIC LICENSE
- Version 3, 29 June 2007
- Requires-Python: >=3.10
- Description-Content-Type: text/markdown
- License-File: LICENSE.md
- Requires-Dist: torch>=2.1.0
- Requires-Dist: optimum-quanto
- Requires-Dist: accelerate
- Requires-Dist: safetensors
- Requires-Dist: psutil
-
-
- <p align="center">
- <H2>Memory Management 3.0 for the GPU Poor by DeepBeepMeep</H2>
- </p>
-
-
- This module contains multiple optimisations so that models such as Flux (and derivatives), Mochi, CogView, HunyuanVideo, ... can run smoothly on a GPU card limited to 12 to 24 GB of VRAM.
- It is a replacement for the accelerate library, which should in theory manage offloading but doesn't work properly with models that are loaded / unloaded several
- times in a pipe (e.g. a VAE).
-
- Requirements:
- - VRAM: minimum 12 GB, recommended 24 GB (RTX 3090 / RTX 4090)
- - RAM: minimum 24 GB, recommended 48 GB
-
- This module features 5 profiles in order to be able to run the model at a decent speed on a low end consumer config (32 GB of RAM and 12 GB of VRAM) and to run it at a very good speed on a high end consumer config (48 GB of RAM and 24 GB of VRAM).\
- These RAM requirements are for Linux systems. Due to different memory management, Windows will require an extra 16 GB of RAM to run the corresponding profile.
-
- Each profile may use the following:
- - Low RAM consumption (thanks to a rewritten safetensors library) that allows low-RAM, on the fly quantization
- - Smart automated loading / unloading of models in the GPU to avoid unloading models that may be needed again soon
- - Smart slicing of models to reduce the memory occupied by models in VRAM
- - Ability to pin models to reserved RAM to accelerate transfers to VRAM
- - Async transfers to VRAM to avoid a pause when loading a new slice of a model
- - Automated on the fly quantization or ability to load pre-quantized models
-
- ## Installation
- First you need to install the module in your current project with:
- ```shell
- pip install mmgp
- ```
-
-
- ## Usage
-
- It is almost plug and play and just needs to be invoked from the main app right after the model pipeline has been created.
- 1) First make sure that the pipeline explicitly loads the models on the CPU device, for instance:
- ```
- pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16).to("cpu")
- ```
-
- 2) Once every potential LoRA has been loaded and merged, add the following lines for a quick setup:
- ```
- from mmgp import offload, profile_type
- offload.profile(pipe, profile_type.HighRAM_LowVRAM_Fast)
- ```
-
- You can choose between 5 profiles depending on your hardware:
- - HighRAM_HighVRAM_Fastest: at least 48 GB of RAM and 24 GB of VRAM: the fastest, well suited for an RTX 3090 / RTX 4090
- - HighRAM_LowVRAM_Fast (recommended): at least 48 GB of RAM and 12 GB of VRAM: a bit slower, better suited for an RTX 3070/3080/4070/4080
- or for an RTX 3090 / RTX 4090 with large picture batches or long videos
- - LowRAM_HighVRAM_Medium: at least 32 GB of RAM and 24 GB of VRAM: so-so speed but adapted for an RTX 3090 / RTX 4090 with limited RAM
- - LowRAM_LowVRAM_Slow: at least 32 GB of RAM and 12 GB of VRAM: if you have little VRAM or generate longer videos
- - VerylowRAM_LowVRAM_Slowest: at least 24 GB of RAM and 10 GB of VRAM: if you don't have much of either, it won't be fast but maybe it will work
-
- By default the 'transformer' will be quantized to 8 bits for all profiles. If you don't want that, you may specify the optional parameter *quantizeTransformer = False*.
-
- Every parameter set automatically by a profile can be overridden with one or multiple parameters accepted by *offload.all* (see below):
- ```
- from mmgp import offload, profile_type
- offload.profile(pipe, profile_type.HighRAM_LowVRAM_Fast, budgets = 1000)
- ```
- If you want to know which parameters are set by a specific profile, you can use the parameter *verboseLevel=2*
-
- ## Alternatively you may want to create your own profile with specific parameters
-
- For example:
- ```
- from mmgp import offload
- offload.all(pipe, pinnedMemory=True, ExtraModelsToQuantize = ["text_encoder_2"] )
- ```
- - pinnedMemory: boolean (for all models) or list of model ids to pin to RAM. Every model pinned to RAM will load much faster (up to 2 times) but this requires more RAM
- - quantizeTransformer: boolean, True by default. The 'transformer' model in the pipe usually contains the video or image generator and is quantized on the fly to 8 bits by default. If you want to save disk space and reduce the loading time, you may want to load a prequantized model directly. If you don't want to quantize the image generator, set *quantizeTransformer* to *False* to turn off on the fly quantization.
- - extraModelsToQuantize: list of additional model ids of models to quantize on the fly. If the corresponding model is already quantized, this option will be ignored.
- - budgets: either a number in megabytes (for all models; 0 means an unlimited budget) or a dictionary that maps model ids to megabytes: defines the VRAM budget allocated to each model (in practice the real number is 1.5 times this number, or 2.5 times if asyncTransfers are also enabled).
- The smaller this number, the more VRAM is left for image data / longer videos, but also the slower the processing because there will be lots of loading / unloading between RAM and VRAM. If a model is too big to fit in its budget, it will be broken down into multiple parts that will be loaded / unloaded in sequence. The speed with a low budget can be increased (up to 2 times) by turning on the pinnedMemory and asyncTransfers options.
- - asyncTransfers: boolean, loads the next model part to the GPU while the current part is being processed. This requires twice the budget if any is defined. This may increase speed by 20% (mostly visible on fast modern GPUs).
- - verboseLevel: number between 0 and 2 (1 by default), provides various levels of feedback on the different processes
- - perc_reserved_mem_max: a float below 0.5 (or 0 for auto), may be reduced to a lower number if an out-of-memory error is triggered while using pinnedMemory
- - compile (experimental): list of model ids to compile, may accelerate processing (or not) depending on the type of GPU. As of 01/01/2025 it will work only on Linux since compilation relies on Triton, which is not yet supported on Windows
-
- If you are short on RAM and plan to work with quantized models, it is recommended to load pre-quantized models directly rather than using on the fly quantization (especially on Windows), as due to the way safetensors work almost twice the amount of RAM may be needed to load the model.
-
- ## Going further
-
- The module includes several tools to package a light version of your favorite video / image generator:
- - *save_model(model, file_path, do_quantize = False, quantization_type = qint8 )*\
- Saves the tensors of a model already loaded in memory in safetensors format (much faster to reload). You can save it in a quantized format (the default qint8 quantization is recommended).
- The resulting safetensors file will contain extra fields in its metadata such as the quantization map and its configuration, so you will be able to move the file around without files such as *config.json* or *file_map.json*.
- You will need *load_model_data* or *fast_load_transformers_model* to read the file again. You may also load it using the default *safetensors* library, however you will need to provide in the same directory any complementary files that are usually required (for instance *config.json*).
-
- - *load_model_data(model, file_path: str, do_quantize = False, quantization_type = qint8, pinToRAM = False, partialPin = False)*\
- Loads into RAM the tensor data of a model that has already been initialized with no data. Detects and handles quantized models saved previously with *save_model*. A model can also be quantized on the fly while being loaded. The model can be pinned to RAM while it is loaded, which is more RAM efficient than pinning tensors later using *offload.all* or *offload.profile*.
-
- - *fast_load_transformers_model(model_path: str, do_quantize = False, quantization_type = qint8, pinToRAM = False, partialPin = False)*\
- Initializes (builds the model hierarchy in memory) and fast loads the corresponding tensors of a 'transformers' or 'diffusers' library model.
- The advantage over the original *from_pretrained* method is that a full model can fit into a single file with a filename of your choosing (therefore you can have multiple 'transformers' versions of the same model in the same directory) and prequantized models are processed in a transparent way.
- Last but not least, you can also pin to RAM on the fly the whole model, or the most important part of it (partialPin = True), in a more efficient way (faster and requiring less RAM) than if you did it through *offload.all* or *offload.profile*.
-
-
-
- The typical workflow will be:
- 1) temporarily insert the *save_model* function just after a model has been fully loaded, to save a copy of the model / quantized model.
- 2) replace the full initializing / loading logic with *fast_load_transformers_model* (if there is a *from_pretrained* call to a transformers object), or replace only the tensor loading functions (*torch.load_model_file* and *torch.load_state_dict*) with *load_model_data* after the initializing logic.
-
- ## Special cases
- Sometimes there isn't an explicit pipe object, as each submodel is loaded separately in the main app. If this is the case, you need to create a dictionary that manually maps all the models.\
- For instance:
-
-
- - for flux derived models:
- ```
- pipe = { "text_encoder": clip, "text_encoder_2": t5, "transformer": model, "vae":ae }
- ```
- - for mochi:
- ```
- pipe = { "text_encoder": self.text_encoder, "transformer": self.dit, "vae":self.decoder }
- ```
-
-
- Please note that there should always be one model whose id is 'transformer'. It corresponds to the main image / video model which usually needs to be quantized (this is done on the fly by default when loading the model).
-
- Be careful: lots of models use T5 XXL as a text encoder. However, quite often their corresponding pipeline configurations point at the official Google T5 XXL repository
- where there is a huge 40 GB model to download and load. It is cumbersome as it is a 32-bit model and contains the decoder part of T5, which is not used.
- I suggest you instead use one of the 16-bit, encoder-only versions available around, for instance:
- ```
- text_encoder_2 = T5EncoderModel.from_pretrained("black-forest-labs/FLUX.1-dev", subfolder="text_encoder_2", torch_dtype=torch.float16)
- ```
-
- Sometimes just providing the pipe won't be sufficient, as you may need to change the content of the core model:
- - For instance you may need to disable existing CPU offload logic (such as manual calls to move tensors between cuda and the cpu)
- - mmgp tries to fake the device as being "cuda", but sometimes some code won't be fooled; it will create tensors on the cpu device and this may cause some issues.
-
- You are free to use my module for non-commercial use as long as you give me proper credit. You may contact me on Twitter @deepbeepmeep
-
- Thanks to
- ---------
- - Huggingface / accelerate for the hooking examples
- - Huggingface / quanto for their very useful quantizer
- - gau-nernst for his RAM pinning samples
+ Metadata-Version: 2.1
+ Name: mmgp
+ Version: 3.0.1
+ Summary: Memory Management for the GPU Poor
+ Author-email: deepbeepmeep <deepbeepmeep@yahoo.com>
+ License: GNU GENERAL PUBLIC LICENSE
+ Version 3, 29 June 2007
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ License-File: LICENSE.md
+ Requires-Dist: torch>=2.1.0
+ Requires-Dist: optimum-quanto
+ Requires-Dist: accelerate
+ Requires-Dist: safetensors
+ Requires-Dist: psutil
+
+
+ <p align="center">
+ <H2>Memory Management 3.0 for the GPU Poor by DeepBeepMeep</H2>
+ </p>
+
+
+ This module contains multiple optimisations so that models such as Flux (and derivatives), Mochi, CogView, HunyuanVideo, ... can run smoothly on a GPU card limited to 12 to 24 GB of VRAM.
+ It is a replacement for the accelerate library, which should in theory manage offloading but doesn't work properly with models that are loaded / unloaded several
+ times in a pipe (e.g. a VAE).
+
+ Requirements:
+ - VRAM: minimum 12 GB, recommended 24 GB (RTX 3090 / RTX 4090)
+ - RAM: minimum 24 GB, recommended 48 GB
+
+ This module features 5 profiles in order to be able to run the model at a decent speed on a low end consumer config (32 GB of RAM and 12 GB of VRAM) and to run it at a very good speed (if not the best) on a high end consumer config (48 GB of RAM and 24 GB of VRAM).\
+ These RAM requirements are for Linux systems. Due to different memory management, Windows will require an extra 16 GB of RAM to run the corresponding profile.
+
+ Each profile may use a combination of the following:
+ - Low RAM consumption (thanks to a rewritten safetensors library) that allows low-RAM, on the fly quantization
+ - Smart automated loading / unloading of models in the GPU to avoid unloading models that may be needed again soon
+ - Smart slicing of models to reduce the memory occupied by models in VRAM
+ - Ability to pin models to reserved RAM to accelerate transfers to VRAM
+ - Async transfers to VRAM to avoid a pause when loading a new slice of a model
+ - Automated on the fly quantization or ability to load pre-quantized models
+ - Support for PyTorch compilation on Linux and WSL (not supported so far on native Windows)
+
+ ## Installation
+ First you need to install the module in your current project with:
+ ```shell
+ pip install mmgp
+ ```
+
+
+ ## Usage
+
+ It is almost plug and play and just needs to be invoked from the main app right after the model pipeline has been created.
+ 1) First make sure that the pipeline explicitly loads the models on the CPU device, for instance:
+ ```
+ pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16).to("cpu")
+ ```
+
+ 2) Once every potential LoRA has been loaded and merged, add the following lines for a quick setup:
+ ```
+ from mmgp import offload, profile_type
+ offload.profile(pipe, profile_type.HighRAM_LowVRAM_Fast)
+ ```
+
+ You can choose between 5 profiles depending on your hardware:
+ - HighRAM_HighVRAM (1): at least 48 GB of RAM and 24 GB of VRAM: the fastest, well suited for an RTX 3090 / RTX 4090 but consumes much more VRAM; adapted for fast short videos or small batches of pictures
+ - HighRAM_LowVRAM (2): at least 48 GB of RAM and 12 GB of VRAM: a bit slower, better suited for an RTX 3070/3080/4070/4080 or for an RTX 3090 / RTX 4090 with large picture batches or long videos
+ - LowRAM_HighVRAM (3): at least 32 GB of RAM and 24 GB of VRAM: adapted for an RTX 3090 / RTX 4090 with limited RAM, but at the cost of VRAM (shorter videos / fewer images)
+ - LowRAM_LowVRAM (4): at least 32 GB of RAM and 12 GB of VRAM: if you have little VRAM or want to generate longer videos / more images
+ - VerylowRAM_LowVRAM (5): at least 24 GB of RAM and 10 GB of VRAM: if you don't have much of either, it won't be fast but maybe it will work
+
+ Profiles 2 (High RAM) and 4 (Low RAM) are the most recommended profiles since they are versatile (they support long videos at a slight performance cost).\
+ However, a safe approach is to start from profile 5 (the default profile) and then move progressively to profile 4 and then to profile 2, as long as the app remains responsive and doesn't trigger an out-of-memory error.
+
+ By default the 'transformer' will be quantized to 8 bits for all profiles. If you don't want that, you may specify the optional parameter *quantizeTransformer = False*.
+
+ Every parameter set automatically by a profile can be overridden with one or multiple parameters accepted by *offload.all* (see below):
+ ```
+ from mmgp import offload, profile_type
+ offload.profile(pipe, profile_type.HighRAM_LowVRAM, budgets = 1000)
+ ```
+ If you want to know which parameters are set by a specific profile, you can use the parameter *verboseLevel=2*
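+
+ For instance, a minimal editor's sketch based on the override example above (the profile name is simply reused from it):
+ ```
+ from mmgp import offload, profile_type
+ # verboseLevel=2 reports which parameters the chosen profile sets
+ offload.profile(pipe, profile_type.HighRAM_LowVRAM, verboseLevel = 2)
+ ```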
+
+ ## Alternatively you may want to create your own profile with specific parameters
+
+ For example:
+ ```
+ from mmgp import offload
+ offload.all(pipe, pinnedMemory=True, ExtraModelsToQuantize = ["text_encoder_2"] )
+ ```
+ - pinnedMemory: boolean (for all models) or list of model ids to pin to RAM. Every model pinned to RAM will load much faster (up to 2 times) but this requires more RAM
+ - quantizeTransformer: boolean, True by default. The 'transformer' model in the pipe usually contains the video or image generator and is quantized on the fly to 8 bits by default. If you want to save disk space and reduce the loading time, you may want to load a prequantized model directly. If you don't want to quantize the image generator, set *quantizeTransformer* to *False* to turn off on the fly quantization.
+ - extraModelsToQuantize: list of additional model ids of models to quantize on the fly. If the corresponding model is already quantized, this option will be ignored.
+ - budgets: either a number in megabytes (for all models; 0 means an unlimited budget) or a dictionary that maps model ids to megabytes: defines the VRAM budget allocated to each model (in practice the real number is 1.5 times this number, or 2.5 times if asyncTransfers are also enabled).
+ The smaller this number, the more VRAM is left for image data / longer videos, but also the slower the processing because there will be lots of loading / unloading between RAM and VRAM. If a model is too big to fit in its budget, it will be broken down into multiple parts that will be loaded / unloaded in sequence. The speed with a low budget can be increased (up to 2 times) by turning on the pinnedMemory and asyncTransfers options.
+ - asyncTransfers: boolean, loads the next model part to the GPU while the current part is being processed. This requires twice the budget if any is defined. This may increase speed by 20% (mostly visible on fast modern GPUs).
+ - verboseLevel: number between 0 and 2 (1 by default), provides various levels of feedback on the different processes
+ - compile: list of model ids to compile, may accelerate processing up to 2x depending on the type of GPU. As of 01/01/2025 it will work only on Linux or WSL since compilation relies on Triton, which is not yet supported on Windows
+
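+ As an illustration only (an editor's sketch, not a snippet from the package), several of the parameters above could be combined as follows; the model ids are the ones used in the Flux examples in this README:
+ ```
+ from mmgp import offload
+
+ offload.all(
+     pipe,
+     pinnedMemory = ["transformer", "text_encoder_2"],           # pin only these two models to reserved RAM
+     budgets = { "transformer": 3000, "text_encoder_2": 1000 },  # per-model VRAM budgets in MB
+     asyncTransfers = True,                                      # preload the next model slice while the current one runs
+     compile = ["transformer"],                                  # Linux / WSL only, relies on Triton
+     verboseLevel = 2,
+ )
+ ```
+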
+ If you are short on RAM and plan to work with quantized models, it is recommended to load pre-quantized models directly rather than using on the fly quantization: it will be faster and consume slightly less RAM.
+
+ ## Going further
+
+ The module includes several tools to package a light version of your favorite video / image generator:
+ - *save_model(model, file_path, do_quantize = False, quantization_type = qint8 )*\
+ Saves the tensors of a model already loaded in memory in safetensors format (much faster to reload). You can save it in a quantized format (the default qint8 quantization is recommended).
+ The resulting safetensors file will contain extra fields in its metadata such as the quantization map and its configuration, so you will be able to move the file around without files such as *config.json* or *file_map.json*.
+ You will need *load_model_data* or *fast_load_transformers_model* to read the file again. You may also load it using the default *safetensors* library, however you will need to provide in the same directory any complementary files that are usually required (for instance *config.json*).
+
+ - *load_model_data(model, file_path: str, do_quantize = False, quantization_type = qint8, pinToRAM = False, partialPin = False)*\
+ Loads into RAM the tensor data of a model that has already been initialized with no data. Detects and handles quantized models saved previously with *save_model*. A model can also be quantized on the fly while being loaded. The model can be pinned to RAM while it is loaded, which is more RAM efficient than pinning tensors later using *offload.all* or *offload.profile*.
+
+ - *fast_load_transformers_model(model_path: str, do_quantize = False, quantization_type = qint8, pinToRAM = False, partialPin = False)*\
+ Initializes (builds the model hierarchy in memory) and fast loads the corresponding tensors of a 'transformers' or 'diffusers' library model.
+ The advantage over the original *from_pretrained* method is that a full model can fit into a single file with a filename of your choosing (therefore you can have multiple 'transformers' versions of the same model in the same directory) and prequantized models are processed in a transparent way.
+ Last but not least, you can also pin to RAM on the fly the whole model, or the most important part of it (partialPin = True), in a more efficient way (faster and requiring less RAM) than if you did it through *offload.all* or *offload.profile*.
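+
+ As a hedged sketch only (assuming these helpers are called through the offload module like the others; the file name and the pipe attribute are placeholders), the first two functions might be used together like this:
+ ```
+ from mmgp import offload
+
+ # one-off: save a quantized copy of the main model (default qint8 quantization)
+ offload.save_model(pipe.transformer, "flux_transformer_int8.safetensors", do_quantize = True)
+
+ # later runs: fill the already initialized (but empty) model with the saved quantized tensors,
+ # pinning it to RAM while it is loaded
+ offload.load_model_data(pipe.transformer, "flux_transformer_int8.safetensors", pinToRAM = True)
+ ```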
+
+
+
+ The typical workflow will be:
+ 1) temporarily insert the *save_model* function just after a model has been fully loaded, to save a copy of the model / quantized model.
+ 2) replace the full initializing / loading logic with *fast_load_transformers_model* (if there is a *from_pretrained* call to a transformers object), or replace only the tensor loading functions (*torch.load_model_file* and *torch.load_state_dict*) with *load_model_data* after the initializing logic.
+
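+ For step 2 with a 'transformers' submodel, a sketch might look like the following (the checkpoint file name is a placeholder for a file previously produced with *save_model*):
+ ```
+ from mmgp import offload
+
+ # before: text_encoder_2 = T5EncoderModel.from_pretrained("black-forest-labs/FLUX.1-dev", subfolder="text_encoder_2")
+ # after: one prequantized single-file checkpoint, initialized and loaded in one call
+ text_encoder_2 = offload.fast_load_transformers_model(
+     "t5_xxl_encoder_int8.safetensors", partialPin = True
+ )
+ ```
+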
+ ## Special cases
+ Sometimes there isn't an explicit pipe object, as each submodel is loaded separately in the main app. If this is the case, you need to create a dictionary that manually maps all the models.\
+ For instance:
+
+
+ - for flux derived models:
+ ```
+ pipe = { "text_encoder": clip, "text_encoder_2": t5, "transformer": model, "vae":ae }
+ ```
+ - for mochi:
+ ```
+ pipe = { "text_encoder": self.text_encoder, "transformer": self.dit, "vae":self.decoder }
+ ```
+
+
+ Please note that there should always be one model whose id is 'transformer'. It corresponds to the main image / video model which usually needs to be quantized (this is done on the fly by default when loading the model).
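+
+ Such a dictionary is then passed to the usual entry points in place of a pipeline object; for instance (a sketch reusing the flux mapping above and the profile name from the override example):
+ ```
+ from mmgp import offload, profile_type
+
+ pipe = { "text_encoder": clip, "text_encoder_2": t5, "transformer": model, "vae": ae }
+ offload.profile(pipe, profile_type.HighRAM_LowVRAM)
+ ```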
+
+ Be careful: lots of models use T5 XXL as a text encoder. However, quite often their corresponding pipeline configurations point at the official Google T5 XXL repository
+ where there is a huge 40 GB model to download and load. It is cumbersome as it is a 32-bit model and contains the decoder part of T5, which is not used.
+ I suggest you instead use one of the 16-bit, encoder-only versions available around, for instance:
+ ```
+ text_encoder_2 = T5EncoderModel.from_pretrained("black-forest-labs/FLUX.1-dev", subfolder="text_encoder_2", torch_dtype=torch.float16)
+ ```
+
+ Sometimes just providing the pipe won't be sufficient, as you may need to change the content of the core model:
+ - For instance you may need to disable existing CPU offload logic (such as manual calls to move tensors between cuda and the cpu)
+ - mmgp tries to fake the device as being "cuda", but sometimes some code won't be fooled; it will create tensors on the cpu device and this may cause some issues.
+
+ You are free to use my module for non-commercial use as long as you give me proper credit. You may contact me on Twitter @deepbeepmeep
+
+ Thanks to
+ ---------
+ - Huggingface / accelerate for the hooking examples
+ - Huggingface / quanto for their very useful quantizer
+ - gau-nernst for his RAM pinning samples
@@ -0,0 +1,9 @@
+ __init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ mmgp/__init__.py,sha256=A9qBwyQMd1M7vshSTOBnFGP1MQvS2hXmTcTCMUcmyzE,509
+ mmgp/offload.py,sha256=T9RBAibAyAnKV-8AiYmop_UOGl_N1l5EJo5ucCZfxK8,61611
+ mmgp/safetensors2.py,sha256=CSv8HdrjURUzBazpaBDU1WNwUL1lhzpCyzG0GWygbGE,13602
+ mmgp-3.0.1.dist-info/LICENSE.md,sha256=HjzvY2grdtdduZclbZ46B2M-XpT4MDCxFub5ZwTWq2g,93
+ mmgp-3.0.1.dist-info/METADATA,sha256=uSsBc5pBaYBL4Ek3TR99J9hP7AQQlwnnUM_JQlkNwbE,11765
+ mmgp-3.0.1.dist-info/WHEEL,sha256=PZUExdf71Ui_so67QXpySuHtCi3-J3wvF4ORK6k_S8U,91
+ mmgp-3.0.1.dist-info/top_level.txt,sha256=waGaepj2qVfnS2yAOkaMu4r9mJaVjGbEi6AwOUogU_U,14
+ mmgp-3.0.1.dist-info/RECORD,,
@@ -1,9 +0,0 @@
- __init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- mmgp/__init__.py,sha256=tpo6gl8CKe1guWxcJJ5Xwq6OUJfEeFD7Mkw2IKOrq48,592
- mmgp/offload.py,sha256=-1Rn_XtXswjoCmUBBjdhU4e0qUqvDVVnCnjAUmGHwh8,62859
- mmgp/safetensors2.py,sha256=blCnOF1qNJ27vqbiX5jKJxv5vVdvqEtEwdm0KXwbM68,13482
- mmgp-3.0.0.dist-info/LICENSE.md,sha256=DD-WIS0BkPoWJ_8hQO3J8hMP9K_1-dyrYv1YCbkxcDU,94
- mmgp-3.0.0.dist-info/METADATA,sha256=sZ0Sf1ZEXSa72qKrm5jLFRkcufxUPsZd1yu0kwdVPYE,11565
- mmgp-3.0.0.dist-info/WHEEL,sha256=PZUExdf71Ui_so67QXpySuHtCi3-J3wvF4ORK6k_S8U,91
- mmgp-3.0.0.dist-info/top_level.txt,sha256=waGaepj2qVfnS2yAOkaMu4r9mJaVjGbEi6AwOUogU_U,14
- mmgp-3.0.0.dist-info/RECORD,,
File without changes