mmgp 3.0.3__tar.gz → 3.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of mmgp might be problematic.

@@ -1,6 +1,6 @@
- Metadata-Version: 2.1
+ Metadata-Version: 2.2
  Name: mmgp
- Version: 3.0.3
+ Version: 3.1.0
  Summary: Memory Management for the GPU Poor
  Author-email: deepbeepmeep <deepbeepmeep@yahoo.com>
  License: GNU GENERAL PUBLIC LICENSE
@@ -13,10 +13,11 @@ Requires-Dist: optimum-quanto
  Requires-Dist: accelerate
  Requires-Dist: safetensors
  Requires-Dist: psutil
+ Requires-Dist: peft

  <p align="center">
- <H2>Memory Management 3.0 for the GPU Poor by DeepBeepMeep</H2>
+ <H2>Memory Management 3.1.0 for the GPU Poor by DeepBeepMeep</H2>
  </p>

@@ -38,8 +39,9 @@ Each profile may use a combination of the following:
  - Ability to pin models to reserved RAM to accelerate transfers to VRAM
  - Async transfers to VRAM to avoid a pause when loading a new slice of a model
  - Automated on the fly quantization or ability to load pre quantized models
- - support for pytorch compilation on Linux and WSL (not supported so far on pure Windows).
-
+ - Pretrained Lora support with low RAM requirements
+ - Support for pytorch compilation on Linux and WSL (supported on pure Windows but requires a complex Triton installation).
+ -
  ## Installation
  First you need to install the module in your current project with:
  ```shell
@@ -98,27 +100,29 @@ For example:
  The smaller this number, the more VRAM left for image data / longer video, but also the slower, because there will be lots of loading / unloading between the RAM and the VRAM. If a model is too big to fit in a budget, it will be broken down into multiple parts that will be unloaded / loaded consecutively. The speed of a low budget can be increased (up to 2 times) by turning on the options pinnedMemory and asyncTransfers.
  - asyncTransfers: boolean, load to the GPU the next model part while the current part is being processed. This requires twice the budget if any is defined. This may increase speed by 20% (mostly visible on fast modern GPUs).
  - verboseLevel: number between 0 and 2 (1 by default), provides various levels of feedback on the different processes
- - compile: list of model ids to compile, may accelerate up x2 depending on the type of GPU. As of 01/01/2025 it will work only on Linux or WSL since compilation relies on Triton which is not yet supported on Windows
+ - compile: list of model ids to compile, may accelerate up to x2 depending on the type of GPU. It makes sense to compile only the model that is used frequently, such as the "transformer" model in the case of video or image generation. As of 01/01/2025 it will work only on Linux or WSL since compilation relies on Triton which is not yet supported on Windows

  If you are short on RAM and plan to work with quantized models, it is recommended to load pre-quantized models directly rather than using on the fly quantization; it will be faster and consume slightly less RAM.

  ## Going further

  The module includes several tools to package a light version of your favorite video / image generator:
- - *save_model(model, file_path, do_quantize = False, quantization_type = qint8 )*\
+ - *save_model(model, file_path, do_quantize = False, quantizationType = qint8 )*\
  Save the tensors of a model already loaded in memory in the safetensors format (much faster to reload). You can save it in a quantized format (default qint8 quantization recommended).
  The resulting safetensors file will contain extra fields in its metadata, such as the quantization map and its configuration, so you will be able to move the file around without files such as *config.json* or *file_map.json*.
  You will need *load_model_data* or *fast_load_transformers_model* to read the file again. You may also load it using the default *safetensors* library; however, you will need to provide in the same directory any complementary files that are usually required (for instance *config.json*).

- - *load_model_data(model, file_path: str, do_quantize = False, quantization_type = qint8, pinToRAM = False, partialPin = False)*\
+ - *load_model_data(model, file_path: str, do_quantize = False, quantizationType = qint8, pinToRAM = False, partialPin = False)*\
  Load into RAM the tensor data of a model that has already been initialized with no data. Detects and handles quantized models previously saved with *save_model*. A model can also be quantized on the fly while being loaded. The model can be pinned to RAM while it is loaded, which is more RAM efficient than pinning tensors later using *offload.all* or *offload.profile*.

- - *fast_load_transformers_model(model_path: str, do_quantize = False, quantization_type = qint8, pinToRAM = False, partialPin = False)*\
+ - *fast_load_transformers_model(model_path: str, do_quantize = False, quantizationType = qint8, pinToRAM = False, partialPin = False)*\
  Initialize (build the model hierarchy in memory) and fast load the corresponding tensors of a 'transformers' or 'diffusers' library model.
  The advantage over the original *from_pretrained* method is that a full model can fit into a single file with a filename of your choosing (therefore you can have multiple 'transformers' versions of the same model in the same directory), and prequantized models are processed in a transparent way.
  Last but not least, you can also on the fly pin to RAM the whole model or the most important part of it (partialPin = True) in a more efficient way (faster and requires less RAM) than if you did it through *offload.all* or *offload.profile*.

-
+ - *load_loras_into_model(model, lora_path, lora_multi)*\
+ Load into a model a list of Loras described by a list of paths *lora_path* and a list of weight coefficients *lora_multi*.
+ The Lora files must be in the *diffusers* format. This function also works on non-diffusers models; however, if official Lora support already exists for a model, it is recommended to use the official *diffusers* functions.

  The typical workflow will be:
  1) temporarily insert the *save_model* function just after a model has been fully loaded to save a copy of the model / quantized model.
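A minimal sketch of the workflow described in the diff above, assuming the helpers are exposed through an `offload` object (as the *offload.all* / *offload.profile* references suggest), that `qint8` comes from *optimum-quanto*, and that the pipeline is a transformer-based *diffusers* pipeline; the model id, file name and keyword names are placeholders rather than a confirmed API.

```python
# Sketch only: import paths, keyword names, the model id and the file name
# are assumptions drawn from the signatures and option names listed above.
from diffusers import DiffusionPipeline
from optimum.quanto import qint8   # default quantization type referenced above
from mmgp import offload           # assumed module layout

pipe = DiffusionPipeline.from_pretrained("some/transformer-pipeline")  # placeholder id

# 1) One-off step: once the model is fully loaded the usual way, save a
#    quantized single-file copy of it (much faster to reload later).
offload.save_model(pipe.transformer, "transformer_int8.safetensors",
                   do_quantize=True, quantizationType=qint8)

# 2) Normal runs: rebuild the model hierarchy and fast load the pre-quantized
#    tensors, pinning only the most important tensors to RAM (partialPin).
#    (Assumes the function returns the initialized model.)
pipe.transformer = offload.fast_load_transformers_model(
    "transformer_int8.safetensors", pinToRAM=True, partialPin=True)

# 3) Hand the pipeline to the offload manager. The keyword names mirror the
#    options documented above (pinnedMemory, asyncTransfers, compile,
#    verboseLevel) but are assumptions, not a confirmed signature.
offload.all(pipe, pinnedMemory=True, asyncTransfers=True,
            compile=["transformer"], verboseLevel=1)
```

If RAM is very tight, *load_model_data* can be used instead on a model that was initialized empty, as described above.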
@@ -1,6 +1,6 @@

  <p align="center">
- <H2>Memory Management 3.0 for the GPU Poor by DeepBeepMeep</H2>
+ <H2>Memory Management 3.1.0 for the GPU Poor by DeepBeepMeep</H2>
  </p>

@@ -22,8 +22,9 @@ Each profile may use a combination of the following:
  - Ability to pin models to reserved RAM to accelerate transfers to VRAM
  - Async transfers to VRAM to avoid a pause when loading a new slice of a model
  - Automated on the fly quantization or ability to load pre quantized models
- - support for pytorch compilation on Linux and WSL (not supported so far on pure Windows).
-
+ - Pretrained Lora support with low RAM requirements
+ - Support for pytorch compilation on Linux and WSL (supported on pure Windows but requires a complex Triton installation).
+ -
  ## Installation
  First you need to install the module in your current project with:
  ```shell
@@ -82,27 +83,29 @@ For example:
  The smaller this number, the more VRAM left for image data / longer video, but also the slower, because there will be lots of loading / unloading between the RAM and the VRAM. If a model is too big to fit in a budget, it will be broken down into multiple parts that will be unloaded / loaded consecutively. The speed of a low budget can be increased (up to 2 times) by turning on the options pinnedMemory and asyncTransfers.
  - asyncTransfers: boolean, load to the GPU the next model part while the current part is being processed. This requires twice the budget if any is defined. This may increase speed by 20% (mostly visible on fast modern GPUs).
  - verboseLevel: number between 0 and 2 (1 by default), provides various levels of feedback on the different processes
- - compile: list of model ids to compile, may accelerate up x2 depending on the type of GPU. As of 01/01/2025 it will work only on Linux or WSL since compilation relies on Triton which is not yet supported on Windows
+ - compile: list of model ids to compile, may accelerate up to x2 depending on the type of GPU. It makes sense to compile only the model that is used frequently, such as the "transformer" model in the case of video or image generation. As of 01/01/2025 it will work only on Linux or WSL since compilation relies on Triton which is not yet supported on Windows

  If you are short on RAM and plan to work with quantized models, it is recommended to load pre-quantized models directly rather than using on the fly quantization; it will be faster and consume slightly less RAM.

  ## Going further

  The module includes several tools to package a light version of your favorite video / image generator:
- - *save_model(model, file_path, do_quantize = False, quantization_type = qint8 )*\
+ - *save_model(model, file_path, do_quantize = False, quantizationType = qint8 )*\
  Save the tensors of a model already loaded in memory in the safetensors format (much faster to reload). You can save it in a quantized format (default qint8 quantization recommended).
  The resulting safetensors file will contain extra fields in its metadata, such as the quantization map and its configuration, so you will be able to move the file around without files such as *config.json* or *file_map.json*.
  You will need *load_model_data* or *fast_load_transformers_model* to read the file again. You may also load it using the default *safetensors* library; however, you will need to provide in the same directory any complementary files that are usually required (for instance *config.json*).

- - *load_model_data(model, file_path: str, do_quantize = False, quantization_type = qint8, pinToRAM = False, partialPin = False)*\
+ - *load_model_data(model, file_path: str, do_quantize = False, quantizationType = qint8, pinToRAM = False, partialPin = False)*\
  Load into RAM the tensor data of a model that has already been initialized with no data. Detects and handles quantized models previously saved with *save_model*. A model can also be quantized on the fly while being loaded. The model can be pinned to RAM while it is loaded, which is more RAM efficient than pinning tensors later using *offload.all* or *offload.profile*.

- - *fast_load_transformers_model(model_path: str, do_quantize = False, quantization_type = qint8, pinToRAM = False, partialPin = False)*\
+ - *fast_load_transformers_model(model_path: str, do_quantize = False, quantizationType = qint8, pinToRAM = False, partialPin = False)*\
  Initialize (build the model hierarchy in memory) and fast load the corresponding tensors of a 'transformers' or 'diffusers' library model.
  The advantage over the original *from_pretrained* method is that a full model can fit into a single file with a filename of your choosing (therefore you can have multiple 'transformers' versions of the same model in the same directory), and prequantized models are processed in a transparent way.
  Last but not least, you can also on the fly pin to RAM the whole model or the most important part of it (partialPin = True) in a more efficient way (faster and requires less RAM) than if you did it through *offload.all* or *offload.profile*.

-
+ - *load_loras_into_model(model, lora_path, lora_multi)*\
+ Load into a model a list of Loras described by a list of paths *lora_path* and a list of weight coefficients *lora_multi*.
+ The Lora files must be in the *diffusers* format. This function also works on non-diffusers models; however, if official Lora support already exists for a model, it is recommended to use the official *diffusers* functions.

  The typical workflow will be:
  1) temporarily insert the *save_model* function just after a model has been fully loaded to save a copy of the model / quantized model.
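To illustrate the new Lora helper listed above, here is a minimal sketch; the import path, the attachment of the function to the `offload` object, the file names and the target model are assumptions rather than the library's confirmed API.

```python
# Sketch only: the paths and the target model are placeholders; the argument
# order follows the load_loras_into_model signature documented above.
from mmgp import offload  # assumed module layout

lora_paths = ["loras/style_a.safetensors", "loras/detail_b.safetensors"]
lora_multipliers = [1.0, 0.7]  # one weight coefficient per Lora, same order

# Apply both Loras (saved in the diffusers format) to an already loaded
# transformer model, e.g. the pipe.transformer from the earlier sketch.
offload.load_loras_into_model(pipe.transformer, lora_paths, lora_multipliers)
```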
@@ -1,6 +1,6 @@
  [project]
  name = "mmgp"
- version = "3.0.3"
+ version = "3.1.0"
  authors = [
    { name = "deepbeepmeep", email = "deepbeepmeep@yahoo.com" },
  ]
@@ -13,6 +13,7 @@ dependencies = [
  "optimum-quanto",
  "accelerate",
  "safetensors",
- "psutil"
+ "psutil",
+ "peft"
  ]