congrads 0.1.0-py3-none-any.whl → 1.0.1-py3-none-any.whl
This diff shows the changes between publicly released versions of the package, as published to its registry.
- congrads/__init__.py +21 -13
- congrads/checkpoints.py +232 -0
- congrads/constraints.py +728 -316
- congrads/core.py +525 -139
- congrads/datasets.py +273 -516
- congrads/descriptor.py +95 -30
- congrads/metrics.py +185 -38
- congrads/networks.py +51 -28
- congrads/requirements.txt +6 -0
- congrads/transformations.py +139 -0
- congrads/utils.py +710 -0
- congrads-1.0.1.dist-info/LICENSE +26 -0
- congrads-1.0.1.dist-info/METADATA +208 -0
- congrads-1.0.1.dist-info/RECORD +16 -0
- {congrads-0.1.0.dist-info → congrads-1.0.1.dist-info}/WHEEL +1 -1
- congrads/learners.py +0 -233
- congrads-0.1.0.dist-info/LICENSE +0 -34
- congrads-0.1.0.dist-info/METADATA +0 -196
- congrads-0.1.0.dist-info/RECORD +0 -13
- {congrads-0.1.0.dist-info → congrads-1.0.1.dist-info}/top_level.txt +0 -0
congrads-1.0.1.dist-info/LICENSE
ADDED
@@ -0,0 +1,26 @@
+Copyright 2024 DTAI - KU Leuven
+
+Redistribution and use in source and binary forms, with or without modification,
+are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice,
+this list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice,
+this list of conditions and the following disclaimer in the documentation
+and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS”
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
congrads-1.0.1.dist-info/METADATA
ADDED
@@ -0,0 +1,208 @@
+Metadata-Version: 2.2
+Name: congrads
+Version: 1.0.1
+Summary: A toolbox for using Constraint Guided Gradient Descent when training neural networks.
+Author-email: Wout Rombouts <wout.rombouts@kuleuven.be>, Quinten Van Baelen <quinten.vanbaelen@kuleuven.be>, Peter Karsmakers <peter.karsmakers@kuleuven.be>
+License: Copyright 2024 DTAI - KU Leuven
+
+Redistribution and use in source and binary forms, with or without modification,
+are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice,
+this list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice,
+this list of conditions and the following disclaimer in the documentation
+and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS”
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Requires-Python: >=3.9
+Description-Content-Type: text/markdown
+License-File: LICENSE
+Requires-Dist: numpy>=1.26.4
+Requires-Dist: pandas>=2.2.2
+Requires-Dist: torch>=2.5.0
+Requires-Dist: torchvision>=0.20.0
+Requires-Dist: tensorboard>=2.18.0
+Requires-Dist: tqdm>=4.66.5
+
+<div align="center">
+<img src="docs/_static/congrads_export.png" height="200">
+<p>
+<b>Incorporate constraints into neural network training for more reliable and robust models.</b>
+</p>
+<br/>
+
+[](https://pypi.org/project/congrads)
+[](https://congrads.readthedocs.io)
+[](https://pypi.org/project/congrads)
+[](https://pypistats.org/packages/congrads)
+[](https://opensource.org/licenses/BSD-3-Clause)
+
+<br/>
+<br/>
+</div>
+
+**Congrads** is a Python toolbox that brings **constraint-guided gradient descent** capabilities to your machine learning projects. Built for seamless integration with PyTorch, Congrads lets you enhance training and optimization by incorporating constraints into your training pipeline.
+
+Whether you're working with simple inequality constraints, combinations of input-output relations, or custom constraint formulations, Congrads provides the tools and flexibility needed to build more robust and generalized models.
+
+## Key Features
+
+- **Constraint-Guided Training**: Add constraints to guide the optimization process, so that your model generalizes better by trying to satisfy the constraints.
+- **Flexible Constraint Definition**: Define constraints on inputs, outputs, or combinations thereof, using an intuitive and extendable interface. Use the pre-programmed constraint classes or write your own.
+- **Seamless PyTorch Integration**: Use Congrads within your existing PyTorch workflows with minimal setup.
+- **Flexible and extensible**: Write your own custom network, constraint, and dataset classes to easily extend the functionality of the toolbox.
+
+## Getting Started
+
+### 1. **Installation**
+
+First, install PyTorch, since Congrads relies heavily on its deep learning framework. Refer to [PyTorch's getting started guide](https://pytorch.org/get-started/locally/), and install with CUDA support if you want GPU training.
+
+Next, install the Congrads toolbox. The recommended way is pip:
+
+```bash
+pip install congrads
+```
+
+This automatically installs all required dependencies. If you would like to install dependencies manually, Congrads depends on the following:
+
+- Python 3.9 - 3.12
+- **PyTorch** (install with CUDA support for GPU training; refer to [PyTorch's getting started guide](https://pytorch.org/get-started/locally/))
+- **NumPy** (install with `pip install numpy`, or refer to [NumPy's install guide](https://numpy.org/install/).)
+- **Pandas** (install with `pip install pandas`, or refer to [Pandas' install guide](https://pandas.pydata.org/docs/getting_started/install.html).)
+- **Tqdm** (install with `pip install tqdm`)
+- **Torchvision** (install with `pip install torchvision`)
+- **Tensorboard** (install with `pip install tensorboard`)
+
+### 2. **Core concepts**
+
+Before diving into the toolbox, it is recommended to familiarize yourself with Congrads' core concepts.
+Please read the documentation at https://congrads.readthedocs.io/en/latest/ to get up to speed.
+
+### 3. **Basic Usage**
+
+The basic example below illustrates how to work with the Congrads toolbox.
+For additional examples, refer to the [examples](https://github.com/ML-KULeuven/congrads/examples) and [notebooks](https://github.com/ML-KULeuven/congrads/notebooks) folders in the repository.
+
+#### 1. First, select the device to run your code on.
+
+```python
+use_cuda = torch.cuda.is_available()
+device = torch.device("cuda:0" if use_cuda else "cpu")
+```
+
+#### 2. Next, load your data and split it into training, validation and testing subsets.
+
+```python
+data = BiasCorrection(
+    "./datasets", preprocess_BiasCorrection, download=True
+)
+loaders = split_data_loaders(
+    data,
+    loader_args={"batch_size": 100, "shuffle": True},
+    valid_loader_args={"shuffle": False},
+    test_loader_args={"shuffle": False},
+)
+```
+
+#### 3. Instantiate your neural network; make sure its dimensions match your data.
+
+```python
+network = MLPNetwork(25, 2, n_hidden_layers=3, hidden_dim=35)
+network = network.to(device)
+```
+
+#### 4. Choose your loss function and optimizer.
+
+```python
+criterion = MSELoss()
+optimizer = Adam(network.parameters(), lr=0.001)
+```
+
+#### 5. Then, set up the descriptor, which attaches names to specific parts of your network.
+
+```python
+descriptor = Descriptor()
+descriptor.add("output", 0, "Tmax")
+descriptor.add("output", 1, "Tmin")
+```
+
+#### 6. Define your constraints on the network.
+
+```python
+Constraint.descriptor = descriptor
+Constraint.device = device
+constraints = [
+    ScalarConstraint("Tmin", ge, 0),
+    ScalarConstraint("Tmin", le, 1),
+    ScalarConstraint("Tmax", ge, 0),
+    ScalarConstraint("Tmax", le, 1),
+    BinaryConstraint("Tmax", gt, "Tmin"),
+]
+```
+
+#### 7. Instantiate the metric manager and the core, and start training.
+
+```python
+metric_manager = MetricManager()
+core = CongradsCore(
+    descriptor,
+    constraints,
+    loaders,
+    network,
+    criterion,
+    optimizer,
+    metric_manager,
+    device,
+    checkpoint_manager,
+)
+
+core.fit(max_epochs=50)
+```
+
+## Example Use Cases
+
+- **Optimization with Domain Knowledge**: Ensure outputs meet real-world restrictions or safety standards.
+- **Improve Training Process**: Inject domain knowledge in the training stage, increasing learning efficiency.
+- **Physics-Informed Neural Networks (PINNs)**: Coming soon: enforce physical laws as constraints in your models.
+
+## Roadmap
+
+- [ ] Add ODE/PDE constraints to support PINNs
+- [ ] Add support for a constraint parser that can interpret equations
+- [ ] Determine if it is feasible to add unit and/or functional tests
+
+## Research
+
+If you make use of this package or its concepts in your research, please consider citing the following paper.
+
+- Van Baelen, Q., & Karsmakers, P. (2023). **Constraint guided gradient descent: Training with inequality constraints with applications in regression and semantic segmentation.**
+  Neurocomputing, 556, 126636. doi:10.1016/j.neucom.2023.126636 <br/>[ [pdf](https://www.sciencedirect.com/science/article/abs/pii/S0925231223007592) | [bibtex](https://raw.githubusercontent.com/ML-KULeuven/congrads/main/docs/_static/VanBaelen2023.bib) ]
+
+## Contributing
+
+We welcome contributions to Congrads! Report issues, suggest features, or contribute code via issues and pull requests.
+
+## License
+
+Congrads is licensed under [The 3-Clause BSD License](LICENSE). We encourage companies that are interested in a collaboration on a specific topic to contact the authors for more information or to set up joint research projects.
+
+---
+
+Elevate your neural networks with Congrads! 🚀
congrads-1.0.1.dist-info/RECORD
ADDED
@@ -0,0 +1,16 @@
+congrads/__init__.py,sha256=uj36sGjM_ldPgD-0aaWh1b-HspZxqUsC2St97sg_6jg,759
+congrads/checkpoints.py,sha256=AnP5lMT94BiOpT2e0b8QvxhW8bacy_U_eGInBGND6tU,7897
+congrads/constraints.py,sha256=NjuRlquJaZHxj0K3A1wW1DQXJKUKM5jBaeSqrhjwCqg,33350
+congrads/core.py,sha256=qcoK_P95j-TY17PWlR0zYbExwe19e391LIMbxZiq5Ek,21061
+congrads/datasets.py,sha256=mfpMKfiJjc6tmeez6EPuyd94O54qZt5KFI4Gs5RAhlc,15855
+congrads/descriptor.py,sha256=ml4IRiEcnRoRYiFgIV2BKpfKjWcLpPsTf0f4l0fTt38,4829
+congrads/metrics.py,sha256=nQuOOVVUeWbxmiFHni9hHFeUd58Gm-Lo0875KG5bHgk,6774
+congrads/networks.py,sha256=fW-1YuscWGSDQwjRItcD8-6R37k1-Do6E2g0HsghB4s,3914
+congrads/requirements.txt,sha256=Cvw0YgcvHcIBeXDzopjuARE3_xEvV6rwajGO9jWOjcE,92
+congrads/transformations.py,sha256=0mbEGdanF7_nFh0lnuBVdImtj3wwIGBMsbg8mkFZ-kw,4485
+congrads/utils.py,sha256=uKOxudT0VgOQ1KCa4uXDADt7KIQISLxzwCipdlfchwo,26252
+congrads-1.0.1.dist-info/LICENSE,sha256=hDkSuSj1L5IpO9uhrag5zd29HicibbYX8tUbY3RXF40,1480
+congrads-1.0.1.dist-info/METADATA,sha256=w-_eel-hPXFmqBBdpL6ksaUTwFGm21Y3bkkxvBQSlgs,9243
+congrads-1.0.1.dist-info/WHEEL,sha256=In9FTNxeP60KnTkGw7wk6mJPYd_dQSjEZmXdBdMCI-8,91
+congrads-1.0.1.dist-info/top_level.txt,sha256=B8M9NmtHbmzp-3APHe4C0oo7aRIWRHWoba9FIy9XeYM,9
+congrads-1.0.1.dist-info/RECORD,,
congrads/learners.py
DELETED
@@ -1,233 +0,0 @@
-import logging
-from typing import Union
-from torch import Tensor
-from torch.nn import Module
-from torch.nn.modules.loss import _Loss
-from torch.optim import Optimizer
-
-from .core import CGGDModule
-from .constraints import Constraint
-from .descriptor import Descriptor
-
-
-class Learner(CGGDModule):
-    def __init__(
-        self,
-        network: Module,
-        descriptor: Descriptor,
-        constraints: list[Constraint],
-        loss_function: Union[_Loss, dict[str, _Loss]],
-        optimizer: Optimizer,
-    ):
-        """
-        A class that integrates a neural network with a training and validation loop,
-        supporting single or multi-output loss functions. The class manages the forward pass,
-        training step, and validation step while also configuring the optimizer.
-
-        Args:
-            network (Module): The neural network model to be trained.
-            descriptor (Descriptor): An object that defines the structure of the network,
-                including the output layers.
-            constraints (list[Constraint]): A list of constraints that can be applied during training.
-            loss_function (Union[_Loss, dict[str, _Loss]]): A loss function or a dictionary of loss functions
-                for each output layer.
-            optimizer (Optimizer): The optimizer used for training the model.
-
-        Raises:
-            ValueError: If the descriptor does not contain any output layers or if the number of loss functions
-                does not match the number of output layers when using a dictionary of loss functions.
-        """
-
-        # Init parent class
-        super().__init__(descriptor, constraints)
-
-        # Init object variables
-        self.network = network
-        self.descriptor = descriptor
-        self.loss_function = loss_function
-        self.optimizer = optimizer
-
-        # Perform checks
-        if len(self.descriptor.output_layers) == 0:
-            raise ValueError(
-                'The descriptor class must contain one or more output layers. Mark a layer as output by setting descriptor.add("layer", ..., output=True).'
-            )
-
-        if isinstance(loss_function, _Loss):
-            if len(self.descriptor.output_layers) > 1:
-                logging.warning(
-                    f"Multiple layers were marked as output, but only one loss function is defined. Only the loss of layer {list(self.descriptor.output_layers)[0]} will be calculated and used. To use the same loss function for all output layers, please specify then explicitly."
-                )
-
-        if isinstance(loss_function, dict):
-            if len(self.descriptor.output_layers) != len(loss_function):
-                raise ValueError(
-                    f"The number of marked output layers does not match the number of provided loss functions."
-                )
-
-        # Assign proper step function based on if one or multiple loss functions are assigned
-        if isinstance(loss_function, _Loss):
-            self.training_step = self.training_step_single
-            self.validation_step = self.validation_step_single
-
-        if isinstance(loss_function, dict):
-            self.training_step = self.training_step_multi
-            self.validation_step = self.validation_step_multi
-
-    def forward(self, x):
-        """
-        Perform a forward pass through the network.
-
-        Args:
-            x (Tensor): The input tensor to pass through the network.
-
-        Returns:
-            Tensor: The model's output for the given input.
-        """
-
-        return self.network(x)
-
-    def training_step_single(self, batch, batch_idx):
-        """
-        Perform a single training step using a single loss function.
-
-        Args:
-            batch (tuple): A tuple containing the input and target output tensors.
-            batch_idx (int): The index of the batch in the current epoch.
-
-        Returns:
-            Tensor: The loss value for the batch.
-        """
-
-        self.train()
-
-        inputs, outputs = batch
-        prediction: dict[str, Tensor] = self(inputs)
-
-        layer = list(self.descriptor.output_layers)[0]
-        loss = self.loss_function(prediction[layer], outputs)
-
-        self.log(
-            "train_loss",
-            loss,
-            on_step=False,
-            on_epoch=True,
-        )
-
-        return super().training_step(prediction, loss)
-
-    def training_step_multi(self, batch, batch_idx):
-        """
-        Perform a training step using multiple loss functions, one for each output layer.
-
-        Args:
-            batch (tuple): A tuple containing the input and target output tensors.
-            batch_idx (int): The index of the batch in the current epoch.
-
-        Returns:
-            Tensor: The total loss value for the batch, combining the losses from all output layers.
-        """
-
-        self.train()
-
-        inputs, outputs = batch
-        prediction: dict[str, Tensor] = self(inputs)
-
-        # TODO add hyperparameter to scale loss per function
-        loss = 0
-        for layer in self.descriptor.output_layers:
-            layer_loss = self.loss_function[layer](prediction[layer], outputs)
-            loss += layer_loss
-
-            self.log(
-                f"train_loss_{layer}",
-                layer_loss,
-                on_step=False,
-                on_epoch=True,
-            )
-
-        self.log(
-            "train_loss",
-            loss,
-            on_step=False,
-            on_epoch=True,
-        )
-
-        return super().training_step(prediction, loss)
-
-    def validation_step_single(self, batch, batch_idx):
-        """
-        Perform a single validation step using a single loss function.
-
-        Args:
-            batch (tuple): A tuple containing the input and target output tensors.
-            batch_idx (int): The index of the batch in the current epoch.
-
-        Returns:
-            Tensor: The validation loss for the batch.
-        """
-
-        self.eval()
-
-        inputs, outputs = batch
-        prediction: dict[str, Tensor] = self(inputs)
-
-        layer = list(self.descriptor.output_layers)[0]
-        loss = self.loss_function(prediction[layer], outputs)
-
-        self.log(
-            "valid_loss",
-            loss,
-            on_step=False,
-            on_epoch=True,
-        )
-
-        return super().validation_step(prediction, loss)
-
-    def validation_step_multi(self, batch, batch_idx):
-        """
-        Perform a validation step using multiple loss functions, one for each output layer.
-
-        Args:
-            batch (tuple): A tuple containing the input and target output tensors.
-            batch_idx (int): The index of the batch in the current epoch.
-
-        Returns:
-            Tensor: The total validation loss for the batch, combining the losses from all output layers.
-        """
-
-        self.eval()
-
-        inputs, outputs = batch
-        prediction: dict[str, Tensor] = self(inputs)
-
-        loss = 0
-        for layer in self.descriptor.output_layers:
-            layer_loss = self.loss_function[layer](prediction[layer], outputs)
-            loss += layer_loss
-
-            self.log(
-                f"valid_loss_{layer}",
-                layer_loss,
-                on_step=False,
-                on_epoch=True,
-            )
-
-        self.log(
-            "valid_loss",
-            loss,
-            on_step=False,
-            on_epoch=True,
-        )
-
-        return super().validation_step(prediction, loss)
-
-    def configure_optimizers(self):
-        """
-        Configure the optimizer for training.
-
-        Returns:
-            Optimizer: The optimizer used to update the model's parameters during training.
-        """
-
-        return self.optimizer
congrads-0.1.0.dist-info/LICENSE
DELETED
@@ -1,34 +0,0 @@
-MIT License
-
-Copyright (c) 2024 DTAI - KU Leuven
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
-
-
-"Commons Clause" License Condition v1.0
-
-The Software is provided to you by the Licensor under the License, as defined below, subject to the following condition.
-
-Without limiting other conditions in the License, the grant of rights under the License will not include, and the License does not grant to you, the right to Sell the Software.
-
-For purposes of the foregoing, "Sell" means practicing any or all of the rights granted to you under the License to provide to third parties, for a fee or other consideration (including without limitation fees for hosting or consulting/ support services related to the Software), a product or service whose value derives, entirely or substantially, from the functionality of the Software. Any license notice or attribution required by the License must also include this Commons Clause License Condition notice.
-
-Software: All CGGD-Toolbox associated files.
-License: MIT
-Licensor: DTAI - KU Leuven