congrads-0.2.0-py3-none-any.whl → congrads-1.0.2-py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- congrads/__init__.py +17 -10
- congrads/checkpoints.py +232 -0
- congrads/constraints.py +664 -134
- congrads/core.py +482 -110
- congrads/datasets.py +315 -11
- congrads/descriptor.py +100 -20
- congrads/metrics.py +178 -16
- congrads/networks.py +47 -23
- congrads/transformations.py +139 -0
- congrads/utils.py +439 -39
- congrads-1.0.2.dist-info/METADATA +208 -0
- congrads-1.0.2.dist-info/RECORD +15 -0
- {congrads-0.2.0.dist-info → congrads-1.0.2.dist-info}/WHEEL +1 -1
- congrads-0.2.0.dist-info/METADATA +0 -222
- congrads-0.2.0.dist-info/RECORD +0 -13
- {congrads-0.2.0.dist-info → congrads-1.0.2.dist-info}/LICENSE +0 -0
- {congrads-0.2.0.dist-info → congrads-1.0.2.dist-info}/top_level.txt +0 -0
congrads-1.0.2.dist-info/METADATA ADDED
@@ -0,0 +1,208 @@
+Metadata-Version: 2.2
+Name: congrads
+Version: 1.0.2
+Summary: A toolbox for using Constraint Guided Gradient Descent when training neural networks.
+Author-email: Wout Rombouts <wout.rombouts@kuleuven.be>, Quinten Van Baelen <quinten.vanbaelen@kuleuven.be>, Peter Karsmakers <peter.karsmakers@kuleuven.be>
+License: Copyright 2024 DTAI - KU Leuven
+
+Redistribution and use in source and binary forms, with or without modification,
+are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice,
+this list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice,
+this list of conditions and the following disclaimer in the documentation
+and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS”
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Requires-Python: >=3.9
+Description-Content-Type: text/markdown
+License-File: LICENSE
+Requires-Dist: numpy>=1.26.4
+Requires-Dist: pandas>=2.2.2
+Requires-Dist: torch>=2.5.0
+Requires-Dist: torchvision>=0.20.0
+Requires-Dist: tensorboard>=2.18.0
+Requires-Dist: tqdm>=4.66.5
+
+<div align="center">
+<img src="https://github.com/ML-KULeuven/congrads/blob/main/docs/_static/congrads_export.png?raw=true" height="200">
+<p>
+<b>Incorporate constraints into neural network training for more reliable and robust models.</b>
+</p>
+<br/>
+
+[](https://pypi.org/project/congrads)
+[](https://congrads.readthedocs.io)
+[](https://pypi.org/project/congrads)
+[](https://pypistats.org/packages/congrads)
+[](https://opensource.org/licenses/BSD-3-Clause)
+
+<br/>
+<br/>
+</div>
+
+**Congrads** is a Python toolbox that brings **constraint-guided gradient descent** capabilities to your machine learning projects. Built with seamless integration into PyTorch, Congrads empowers you to enhance the training and optimization process by incorporating constraints into your training pipeline.
+
+Whether you're working with simple inequality constraints, combinations of input-output relations, or custom constraint formulations, Congrads provides the tools and flexibility needed to build more robust and generalized models.
+
+## Key Features
+
+- **Constraint-Guided Training**: Add constraints to guide the optimization process, ensuring that your model generalizes better by trying to satisfy the constraints.
+- **Flexible Constraint Definition**: Define constraints on inputs, outputs, or combinations thereof, using an intuitive and extendable interface. Make use of pre-programmed constraint classes or write your own.
+- **Seamless PyTorch Integration**: Use Congrads within your existing PyTorch workflows with minimal setup.
+- **Flexible and extendible**: Write your own custom networks, constraints and dataset classes to easily extend the functionality of the toolbox.
+
+## Getting Started
+
+### 1. **Installation**
+
+First, make sure to install PyTorch since Congrads heavily relies on its deep learning framework. Please refer to the [PyTorch's getting started guide](https://pytorch.org/get-started/locally/). Make sure to install with CUDA support for GPU training.
+
+Next, install the Congrads toolbox. The recommended way to install it is to use pip:
+
+```bash
+pip install congrads
+```
+
+This should automatically install all required dependencies for you. If you would like to install dependencies manually, Congrads depends on the following:
+
+- Python 3.9 - 3.12
+- **PyTorch** (install with CUDA support for GPU training, refer to [PyTorch's getting started guide](https://pytorch.org/get-started/locally/))
+- **NumPy** (install with `pip install numpy`, or refer to [NumPy's install guide](https://numpy.org/install/).)
+- **Pandas** (install with `pip install pandas`, or refer to [Panda's install guide](https://pandas.pydata.org/docs/getting_started/install.html).)
+- **Tqdm** (install with `pip install tqdm`)
+- **Torchvision** (install with `pip install torchvision`)
+- **Tensorboard** (install with `pip install tensorboard`)
+
+### 2. **Core concepts**
+
+Before diving into the toolbox, it is recommended to familiarize yourself with Congrads's core concept and topics.
+Please read the documentation at https://congrads.readthedocs.io/en/latest/ to get up-to-date.
+
+### 3. **Basic Usage**
+
+Below, a basic example can be found that illustrates how to work with the Congrads toolbox.
+For additional examples, refer to the [examples](https://github.com/ML-KULeuven/congrads/tree/main/examples) and [notebooks](https://github.com/ML-KULeuven/congrads/tree/main/notebooks) folders in the repository.
+
+#### 1. First, select the device to run your code on with.
+
+```python
+use_cuda = torch.cuda.is_available()
+device = torch.device("cuda:0" if use_cuda else "cpu")
+```
+
+#### 2. Next, load your data and split it into training, validation and testing subsets.
+
+```python
+data = BiasCorrection(
+    "./datasets", preprocess_BiasCorrection, download=True
+)
+loaders = split_data_loaders(
+    data,
+    loader_args={"batch_size": 100, "shuffle": True},
+    valid_loader_args={"shuffle": False},
+    test_loader_args={"shuffle": False},
+)
+```
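Editor's note on the split step above: `split_data_loaders` partitions one dataset into train/validation/test loaders. A minimal, library-free sketch of the underlying idea (the 80/10/10 ratio here is an assumption for illustration, not a documented congrads default):

```python
# Toy index split (hypothetical ratios; congrads' split_data_loaders
# wraps this idea in torch DataLoader objects).
def split_indices(n, train=0.8, valid=0.1):
    n_train = int(n * train)
    n_valid = int(n * valid)
    idx = list(range(n))
    return idx[:n_train], idx[n_train:n_train + n_valid], idx[n_train + n_valid:]

train_idx, valid_idx, test_idx = split_indices(100)
print(len(train_idx), len(valid_idx), len(test_idx))  # 80 10 10
```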
+
+#### 3. Instantiate your neural network, make sure the dimensions match up with your data.
+
+```python
+network = MLPNetwork(25, 2, n_hidden_layers=3, hidden_dim=35)
+network = network.to(device)
+```
+
+#### 4. Choose your loss function and optimizer.
+
+```python
+criterion = MSELoss()
+optimizer = Adam(network.parameters(), lr=0.001)
+```
+
+#### 5. Then, setup the descriptor, that will attach names to specific parts of your network.
+
+```python
+descriptor = Descriptor()
+descriptor.add("output", 0, "Tmax")
+descriptor.add("output", 1, "Tmin")
+```
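Editor's note: conceptually, the descriptor above is a mapping from human-readable names to positions in the network's tensors. A dict-based analogy (not the real `Descriptor` implementation, just the concept):

```python
# Stand-in for the Descriptor concept: each semantic name points at a
# (layer, index) position, so constraints can refer to outputs by name.
class ToyDescriptor:
    def __init__(self):
        self.entries = {}

    def add(self, layer, index, name):
        self.entries[name] = (layer, index)

toy = ToyDescriptor()
toy.add("output", 0, "Tmax")
toy.add("output", 1, "Tmin")
print(toy.entries["Tmax"])  # ('output', 0)
```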
+
+#### 6. Define your constraints on the network.
+
+```python
+Constraint.descriptor = descriptor
+Constraint.device = device
+constraints = [
+    ScalarConstraint("Tmin", ge, 0),
+    ScalarConstraint("Tmin", le, 1),
+    ScalarConstraint("Tmax", ge, 0),
+    ScalarConstraint("Tmax", le, 1),
+    BinaryConstraint("Tmax", gt, "Tmin"),
+]
+```
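Editor's note: the snippet above does not show where `ge`, `le`, and `gt` come from; they are assumed here to be Python's `operator`-module comparison functions, which act as plain binary predicates that the constraint classes pair with named outputs:

```python
from operator import ge, le, gt

# operator.ge(a, b) is a >= b, operator.le(a, b) is a <= b, etc.
print(ge(0.5, 0), le(0.5, 1))  # True True  (0 <= Tmin <= 1 holds)
print(gt(0.7, 0.5))            # True       (Tmax > Tmin holds)
```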
+
+#### 7. Instantiate metric manager and core, and start the training.
+
+```python
+metric_manager = MetricManager()
+core = CongradsCore(
+    descriptor,
+    constraints,
+    loaders,
+    network,
+    criterion,
+    optimizer,
+    metric_manager,
+    device,
+    checkpoint_manager,
+)
+
+core.fit(max_epochs=50)
+```
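Editor's note: as rough intuition for what the core does during `fit` (a toy scalar sketch, not congrads' actual update rule), when a constraint such as `0 <= y <= 1` is violated, the loss gradient is blended with a direction that restores feasibility:

```python
# Toy constraint-guided step (illustrative only): plain gradient descent
# on (y - target)^2, plus a push back into [0, 1] when y lies outside it.
def step(y, target, lr=0.1, rescale=1.0):
    grad = 2.0 * (y - target)      # loss gradient
    if y < 0.0:                    # constraint y >= 0 violated
        grad -= rescale
    elif y > 1.0:                  # constraint y <= 1 violated
        grad += rescale
    return y - lr * grad

y = 1.8                            # starts infeasible
for _ in range(50):
    y = step(y, target=0.7)
print(round(y, 3))                 # converges near 0.7, inside [0, 1]
```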
+
+## Example Use Cases
+
+- **Optimization with Domain Knowledge**: Ensure outputs meet real-world restrictions or safety standards.
+- **Improve Training Process**: Inject domain knowledge in the training stage, increasing learning efficiency.
+- **Physics-Informed Neural Networks (PINNs)**: Coming soon, Enforce physical laws as constraints in your models.
+
+## Roadmap
+
+- [ ] Add ODE/PDE constraints to support PINNs
+- [ ] Add support for constraint parser that can interpret equations
+- [ ] Determine if it is feasible to add unit and or functional tests
+
+## Research
+
+If you make use of this package or it's concepts in your research, please consider citing the following papers.
+
+- Van Baelen, Q., & Karsmakers, P. (2023). **Constraint guided gradient descent: Training with inequality constraints with applications in regression and semantic segmentation.**
+Neurocomputing, 556, 126636. doi:10.1016/j.neucom.2023.126636 <br/>[ [pdf](https://www.sciencedirect.com/science/article/abs/pii/S0925231223007592) | [bibtex](https://raw.githubusercontent.com/ML-KULeuven/congrads/main/docs/_static/VanBaelen2023.bib) ]
+
+## Contributing
+
+We welcome contributions to Congrads! Whether you want to report issues, suggest features, or contribute code via issues and pull requests.
+
+## License
+
+Congrads is licensed under the [The 3-Clause BSD License](LICENSE). We encourage companies that are interested in a collaboration for a specific topic to contact the authors for more information or to set up joint research projects.
+
+---
+
+Elevate your neural networks with Congrads! 🚀
congrads-1.0.2.dist-info/RECORD ADDED
@@ -0,0 +1,15 @@
+congrads/__init__.py,sha256=uj36sGjM_ldPgD-0aaWh1b-HspZxqUsC2St97sg_6jg,759
+congrads/checkpoints.py,sha256=AnP5lMT94BiOpT2e0b8QvxhW8bacy_U_eGInBGND6tU,7897
+congrads/constraints.py,sha256=NjuRlquJaZHxj0K3A1wW1DQXJKUKM5jBaeSqrhjwCqg,33350
+congrads/core.py,sha256=qcoK_P95j-TY17PWlR0zYbExwe19e391LIMbxZiq5Ek,21061
+congrads/datasets.py,sha256=mfpMKfiJjc6tmeez6EPuyd94O54qZt5KFI4Gs5RAhlc,15855
+congrads/descriptor.py,sha256=ml4IRiEcnRoRYiFgIV2BKpfKjWcLpPsTf0f4l0fTt38,4829
+congrads/metrics.py,sha256=nQuOOVVUeWbxmiFHni9hHFeUd58Gm-Lo0875KG5bHgk,6774
+congrads/networks.py,sha256=fW-1YuscWGSDQwjRItcD8-6R37k1-Do6E2g0HsghB4s,3914
+congrads/transformations.py,sha256=0mbEGdanF7_nFh0lnuBVdImtj3wwIGBMsbg8mkFZ-kw,4485
+congrads/utils.py,sha256=uKOxudT0VgOQ1KCa4uXDADt7KIQISLxzwCipdlfchwo,26252
+congrads-1.0.2.dist-info/LICENSE,sha256=hDkSuSj1L5IpO9uhrag5zd29HicibbYX8tUbY3RXF40,1480
+congrads-1.0.2.dist-info/METADATA,sha256=iUXuCCe9gjmbyvC0D97ZDERxEZclwLETBB3bu36b99w,9322
+congrads-1.0.2.dist-info/WHEEL,sha256=In9FTNxeP60KnTkGw7wk6mJPYd_dQSjEZmXdBdMCI-8,91
+congrads-1.0.2.dist-info/top_level.txt,sha256=B8M9NmtHbmzp-3APHe4C0oo7aRIWRHWoba9FIy9XeYM,9
+congrads-1.0.2.dist-info/RECORD,,
congrads-0.2.0.dist-info/METADATA DELETED
@@ -1,222 +0,0 @@
-Metadata-Version: 2.1
-Name: congrads
-Version: 0.2.0
-Summary: A toolbox for using Constraint Guided Gradient Descent when training neural networks.
-Author-email: Wout Rombouts <wout.rombouts@kuleuven.be>, Quinten Van Baelen <quinten.vanbaelen@kuleuven.be>, Peter Karsmakers <peter.karsmakers@kuleuven.be>
-License: Copyright 2024 DTAI - KU Leuven
-
-Redistribution and use in source and binary forms, with or without modification,
-are permitted provided that the following conditions are met:
-
-1. Redistributions of source code must retain the above copyright notice,
-this list of conditions and the following disclaimer.
-
-2. Redistributions in binary form must reproduce the above copyright notice,
-this list of conditions and the following disclaimer in the documentation
-and/or other materials provided with the distribution.
-
-3. Neither the name of the copyright holder nor the names of its
-contributors may be used to endorse or promote products derived from
-this software without specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS”
-AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
-LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-Requires-Python: >=3.9
-Description-Content-Type: text/markdown
-License-File: LICENSE
-Requires-Dist: torch>=1.12.0
-Requires-Dist: pandas>=2.2.2
-Requires-Dist: numpy>=1.26.4
-
-# Congrads
-
-**Congrads** is a Python toolbox that brings **constraint-guided gradient descent** capabilities to your machine learning projects. Built with seamless integration into PyTorch, Congrads empowers you to enhance the training and optimization process by incorporating constraints into your training pipeline.
-
-Whether you're working with simple inequality constraints, combinations of input-output relations, or custom constraint formulations, Congrads provides the tools and flexibility needed to build more robust and generalized models.
-
-> **Note:** The Congrads toolbox is **currently in alpha phase**. Expect significant changes, potential bugs, and incomplete features as we continue to develop and improve the functionality. Feedback is highly appreciated during this phase to help us refine the toolbox and ensure its reliability in later stages.
-
-## Key Features
-
-- **Constraint-Guided Training**: Add constraints to guide the optimization process, ensuring that your model generalizes better by trying to satisfy the constraints.
-- **Flexible Constraint Definition**: Define constraints on inputs, outputs, or combinations thereof, using an intuitive and extendable interface. Make use of pre-programmed constraint classes or write your own.
-- **Seamless PyTorch Integration**: Use Congrads within your existing PyTorch workflows with minimal setup.
-- **Flexible and extendible**: Write your own custom networks, constraints and dataset classes to easily extend the functionality of the toolbox.
-
-## Installation
-
-Currently, the **Congrads** toolbox can only be installed using pip. We will later expand to other package managers such as conda.
-
-```bash
-pip install congrads
-```
-
-## Getting Started
-
-### 1. **Prerequisites**
-
-Before you can use **Congrads**, make sure you have the following installed:
-
-- Python 3.6+ (preffered version 3.11)
-- **PyTorch** (install with CUDA support for GPU training, refer to [PyTorch's getting started guide](https://pytorch.org/get-started/locally/))
-- **NumPy** (install with ```pip install numpy```, or refer to [NumPy's install guide](https://numpy.org/install/).)
-- **Pandas** (install with ```pip install pandas```, or refer to [Panda's install guide](https://pandas.pydata.org/docs/getting_started/install.html).)
-
-### 2. **Installation**
-
-Please install **Congrads** via pip:
-
-```bash
-pip install congrads
-```
-
-### 3. **Basic Usage**
-
-#### 1. Import necessary classes and functions from the toolbox
-
-To start using the toolbox, import the required modules and functions. This includes classes for defining constraints, data processing, network setup, and training utilities.
-
-```python
-from congrads.constraints import BinaryConstraint, ScalarConstraint, Constraint
-from congrads.core import CongradsCore
-from congrads.datasets import BiasCorrection
-from congrads.descriptor import Descriptor
-from congrads.metrics import MetricManager
-from congrads.networks import MLPNetwork
-from congrads.utils import preprocess_BiasCorrection, splitDataLoaders
-
-```
-
-#### 2. Set up data and preprocessing
-
-The toolbox works with various datasets, and for this example, we are using the **BiasCorrection** dataset. After loading the dataset, it is preprocessed using a utility function and split into train, validation, and test sets using DataLoader instances.
-
-```python
-# Load and preprocess data
-data = BiasCorrection("./datasets", preprocess_BiasCorrection)
-loaders = splitDataLoaders(
-    data, loader_args={"batch_size": 100, "shuffle": True, "num_workers": 6}
-)
-```
-
-#### 3. Configure the network
-
-The model architecture used here is a Multi-Layer Perceptron (MLP) with 25 input features, 2 output features, and 3 hidden layers, each containing 35 neurons. The network outputs are later mapped to meaningful names using the descriptor.
-
-```python
-# Instantiate network and push to correct device
-network = MLPNetwork(25, 2, n_hidden_layers=3, hidden_dim=35)
-network = network.to(device)
-```
-
-#### 4. Instantiate loss and optimizer
-
-Define the loss function and optimizer, which are critical for training the model. In this example, we use the Mean Squared Error (MSE) loss function and the Adam optimizer with a learning rate of 0.001.
-
-```python
-# Instantiate loss and optimizer
-criterion = MSELoss()
-optimizer = Adam(network.parameters(), lr=0.001)
-```
-
-#### 5. Set up the descriptor
-
-The descriptor serves as a mapping between network layers and their semantic meanings. For this example, the network's two outputs are named ```Tmax``` (maximum temperature) and ```Tmin``` (minimum temperature), which correspond to specific columns in the dataset.
-
-```python
-# Descriptor setup
-descriptor = Descriptor()
-descriptor.add("output", 0, "Tmax", output=True)
-descriptor.add("output", 1, "Tmin", output=True)
-```
-
-#### 6. Define constraints on your network
-
-Constraints are rules applied to the network's behavior, ensuring its outputs meet specific criteria. Using the descriptor, constraints can be defined for named outputs. In this case, constraints enforce bounds (e.g., ```0 <= Tmin <= 1```) and relationships (```Tmax > Tmin```) on the outputs.
-
-```python
-# Constraints definition
-Constraint.descriptor = descriptor
-constraints = [
-    ScalarConstraint("Tmin", ge, 0),  # Tmin >= 0
-    ScalarConstraint("Tmin", le, 1),  # Tmin <= 1
-    ScalarConstraint("Tmax", ge, 0),  # Tmax >= 0
-    ScalarConstraint("Tmax", le, 1),  # Tmax <= 1
-    BinaryConstraint("Tmax", gt, "Tmin"),  # Tmax > Tmin
-]
-```
-
-#### 7. Set up trainer
-
-Metrics are used to evaluate and track the model's performance during training. A ```MetricManager``` is instantiated with a TensorBoard writer to log metrics and visualize training progress.
-
-```python
-# Initialize metrics
-writer = SummaryWriter()
-metric_manager = MetricManager(writer, device)
-```
-
-#### 8. Initialize and configure the core learner
-
-The core of the toolbox is the ```CongradsCore``` class, which integrates the descriptor, constraints, data loaders, network, loss function, optimizer, and metrics to manage the learning process.
-
-```python
-# Instantiate core
-core = CongradsCore(
-    descriptor,
-    constraints,
-    loaders,
-    network,
-    criterion,
-    optimizer,
-    metric_manager,
-    device,
-)
-```
-
-#### 9. Start training
-
-The ```fit``` method of the core class starts the training loop for the specified number of epochs. At the end of training, the TensorBoard writer is closed to finalize the logs.
-
-```python
-# Start training
-core.fit(max_epochs=150)
-
-# Close writer
-writer.close()
-```
-
-## Example Use Cases
-
-- **Optimization with Domain Knowledge**: Ensure outputs meet real-world restrictions or safety standards.
-- **Physics-Informed Neural Networks (PINNs)**: Enforce physical laws as constraints in your models.
-- **Improve Training Process**: Inject domain knowledge in the training stage, increasing learning efficiency.
-
-## Roadmap
-
-- [ ] Documentation and Notebook examples
-- [ ] Add support for constraint parser that can interpret equations
-- [x] Add better handling of metric logging and visualization
-- [x] Revise if Pytorch Lightning is preferable over plain Pytorch
-- [ ] Determine if it is feasible to add unit and or functional tests
-
-## Contributing
-
-We welcome contributions to Congrads! Whether you want to report issues, suggest features, or contribute code via issues and pull requests.
-
-## License
-
-Congrads is licensed under the [The 3-Clause BSD License](LICENSE). We encourage companies that are interested in a collaboration for a specific topic to contact the authors for more information or to set up joint research projects.
-
----
-
-Elevate your neural networks with Congrads! 🚀
congrads-0.2.0.dist-info/RECORD DELETED
@@ -1,13 +0,0 @@
-congrads/__init__.py,sha256=XnRKk4VTheZJj6Z1f8x5Iq2YPtd2fycUgQazOqiqOEw,458
-congrads/constraints.py,sha256=JgO8SUhSKHoBH-WvFdwYnhkVl_jO-RqwgHCVOR1_F-8,13488
-congrads/core.py,sha256=egFph4MhKncwOi6pcTzDFGqh9bIxb7-z68899TxcEQM,7696
-congrads/datasets.py,sha256=uTDwnjwA52wwT6Hv4Kw0WqKi2dMDJE3nP-xKB6AjCNw,5470
-congrads/descriptor.py,sha256=FzmFZBHZ3nhd8NS951EJqcM97C-XsoRIA9qK6rmeBU4,1520
-congrads/metrics.py,sha256=ct4wj8q-GL3lYXxBeNCsCvCLn0TPBbs_8ybiMe-Fw5w,1471
-congrads/networks.py,sha256=QpuEgHmkXDCrTbonHoXbbRblZIdpYsopqMST--Ki9i4,3256
-congrads/utils.py,sha256=Z4ElFFreacRN7qPXh7Gv5lIzdAs5gtvVloJnHag2E9g,13890
-congrads-0.2.0.dist-info/LICENSE,sha256=hDkSuSj1L5IpO9uhrag5zd29HicibbYX8tUbY3RXF40,1480
-congrads-0.2.0.dist-info/METADATA,sha256=jaWNoJ4AeWB4aCL47Tw4OsXwOb-q5w4rwYKxSE76CKM,9804
-congrads-0.2.0.dist-info/WHEEL,sha256=PZUExdf71Ui_so67QXpySuHtCi3-J3wvF4ORK6k_S8U,91
-congrads-0.2.0.dist-info/top_level.txt,sha256=B8M9NmtHbmzp-3APHe4C0oo7aRIWRHWoba9FIy9XeYM,9
-congrads-0.2.0.dist-info/RECORD,,
{congrads-0.2.0.dist-info → congrads-1.0.2.dist-info}/LICENSE: file without changes
{congrads-0.2.0.dist-info → congrads-1.0.2.dist-info}/top_level.txt: file without changes