congrads-0.2.0-py3-none-any.whl → congrads-0.3.0-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
congrads-0.3.0.dist-info/METADATA ADDED
@@ -0,0 +1,234 @@
1
+ Metadata-Version: 2.3
2
+ Name: congrads
3
+ Version: 0.3.0
4
+ Summary: A toolbox for using Constraint Guided Gradient Descent when training neural networks.
5
+ Author: Wout Rombouts, Quinten Van Baelen, Peter Karsmakers
6
+ Author-email: Wout Rombouts <wout.rombouts@kuleuven.be>, Quinten Van Baelen <quinten.vanbaelen@kuleuven.be>, Peter Karsmakers <peter.karsmakers@kuleuven.be>
7
+ License: Copyright 2024 DTAI - KU Leuven
8
+
9
+ Redistribution and use in source and binary forms, with or without modification,
10
+ are permitted provided that the following conditions are met:
11
+
12
+ 1. Redistributions of source code must retain the above copyright notice,
13
+ this list of conditions and the following disclaimer.
14
+
15
+ 2. Redistributions in binary form must reproduce the above copyright notice,
16
+ this list of conditions and the following disclaimer in the documentation
17
+ and/or other materials provided with the distribution.
18
+
19
+ 3. Neither the name of the copyright holder nor the names of its
20
+ contributors may be used to endorse or promote products derived from
21
+ this software without specific prior written permission.
22
+
23
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS”
24
+ AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
25
+ IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
26
+ ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
27
+ LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
28
+ DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
29
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
30
+ CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
31
+ OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
32
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
33
+ Requires-Dist: numpy>=1.24.0
34
+ Requires-Dist: pandas>=1.5.0
35
+ Requires-Dist: torch>=2.0.0
36
+ Requires-Dist: torchvision>=0.15.1
37
+ Requires-Dist: tqdm>=4.65.0
38
+ Requires-Dist: matplotlib>=3.7.0 ; extra == 'examples'
39
+ Requires-Dist: tensorboard>=2.18.0 ; extra == 'examples'
40
+ Requires-Python: >=3.11
41
+ Provides-Extra: examples
42
+ Description-Content-Type: text/markdown
43
+
44
+ <div align="center">
45
+ <img src="https://github.com/ML-KULeuven/congrads/blob/main/docs/_static/congrads_export.png?raw=true" height="200">
46
+ <p>
47
+ <b>Incorporate constraints into neural network training for more reliable and robust models.</b>
48
+ </p>
49
+ <br/>
50
+
51
+ [![PyPi](https://img.shields.io/pypi/v/congrads.svg)](https://pypi.org/project/congrads)
52
+ [![Read the Docs](https://img.shields.io/readthedocs/congrads/latest.svg?label=Read%20the%20Docs)](https://congrads.readthedocs.io)
53
+ [![Python Version: 3.11+](https://img.shields.io/badge/Python-3.11+-blue.svg)](https://pypi.org/project/congrads)
54
+ [![Downloads](https://img.shields.io/pypi/dm/congrads.svg)](https://pypistats.org/packages/congrads)
55
+ [![License](https://img.shields.io/badge/License-BSD%203--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)
56
+
57
+ <br/>
58
+ <br/>
59
+ </div>
60
+
61
+ **Congrads** is a Python toolbox that brings **constraint-guided gradient descent** capabilities to your machine learning projects. Built for seamless integration with PyTorch, Congrads lets you enhance training and optimization by incorporating constraints directly into your training pipeline.
62
+
63
+ Whether you're working with simple inequality constraints, combinations of input-output relations, or custom constraint formulations, Congrads provides the tools and flexibility needed to build more robust and generalized models.
64
+
65
+
66
+ > **Notice:** All previous `v1.x` releases are **yanked**.
67
+ > The library is still in active development, and backwards compatibility is not guaranteed.
68
+ > Please use the new `v0.x` series for ongoing updates.
69
+
70
+ ## Key Features
71
+
72
+ - **Constraint-Guided Training**: Add constraints to guide the optimization process, helping your model generalize better by steering it towards satisfying the constraints.
73
+ - **Flexible Constraint Definition**: Define constraints on inputs, outputs, or combinations thereof, using an intuitive and extendable interface. Make use of pre-programmed constraint classes or write your own.
74
+ - **Seamless PyTorch Integration**: Use Congrads within your existing PyTorch workflows with minimal setup.
75
+ - **Flexible and extensible**: Write your own custom networks, constraints, and dataset classes to easily extend the functionality of the toolbox.
76
+
77
+ ## Getting Started
78
+
79
+ ### 1. **Installation**
80
+
81
+ First, install PyTorch, since Congrads builds directly on top of it. Please refer to [PyTorch's getting started guide](https://pytorch.org/get-started/locally/), and make sure to install with CUDA support if you want GPU training.
82
+
83
+ Next, install the Congrads toolbox. The recommended way to install it is to use pip:
84
+
85
+ ```bash
86
+ pip install congrads
87
+ ```
88
+
89
+ You can also install Congrads together with extra packages required to run the examples:
90
+
91
+ ```bash
92
+ pip install congrads[examples]
93
+ ```
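+
+ > **Note:** In shells that treat square brackets as glob patterns (zsh, for example), quote the extra: `pip install "congrads[examples]"`.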
94
+
95
+ This should automatically install all required dependencies for you. If you would like to install dependencies manually, Congrads depends on the following:
96
+
97
+ - Python 3.11 - 3.13
98
+ - **PyTorch** (install with CUDA support for GPU training, refer to [PyTorch's getting started guide](https://pytorch.org/get-started/locally/))
99
+ - **NumPy** (install with `pip install numpy`, or refer to [NumPy's install guide](https://numpy.org/install/).)
100
+ - **Pandas** (install with `pip install pandas`, or refer to [Pandas' install guide](https://pandas.pydata.org/docs/getting_started/install.html).)
101
+ - **Tqdm** (install with `pip install tqdm`)
102
+ - **Torchvision** (install with `pip install torchvision`)
103
+ - Optional: **Tensorboard** (install with `pip install tensorboard`)
104
+
105
+ ### 2. **Core concepts**
106
+
107
+ Before diving into the toolbox, it is recommended to familiarize yourself with Congrads' core concepts.
108
+ Please read the documentation at https://congrads.readthedocs.io/en/latest/ to get up to speed.
109
+
110
+ ### 3. **Basic Usage**
111
+
112
+ Below is a basic example that illustrates how to work with the Congrads toolbox.
113
+ For additional examples, refer to the [examples](https://github.com/ML-KULeuven/congrads/tree/main/examples) and [notebooks](https://github.com/ML-KULeuven/congrads/tree/main/notebooks) folders in the repository.
114
+
115
+ #### 1. First, select the device to run your code on.
116
+
117
+ ```python
118
+ import torch
+
+ use_cuda = torch.cuda.is_available()
119
+ device = torch.device("cuda:0" if use_cuda else "cpu")
120
+ ```
121
+
122
+ #### 2. Next, load your data and split it into training, validation and testing subsets.
123
+
124
+ ```python
125
+ data = BiasCorrection(
126
+ "./datasets", preprocess_BiasCorrection, download=True
127
+ )
128
+ loaders = split_data_loaders(
129
+ data,
130
+ loader_args={"batch_size": 100, "shuffle": True},
131
+ valid_loader_args={"shuffle": False},
132
+ test_loader_args={"shuffle": False},
133
+ )
134
+ ```
135
+
136
+ #### 3. Instantiate your neural network and make sure its dimensions match your data.
137
+
138
+ ```python
139
+ network = MLPNetwork(25, 2, n_hidden_layers=3, hidden_dim=35)
140
+ network = network.to(device)
141
+ ```
142
+
143
+ #### 4. Choose your loss function and optimizer.
144
+
145
+ ```python
146
+ criterion = MSELoss()
147
+ optimizer = Adam(network.parameters(), lr=0.001)
148
+ ```
149
+
150
+ #### 5. Then, set up the descriptor, which attaches names to specific parts of your network.
151
+
152
+ ```python
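+ # Give human-readable names to individual neurons (here: output indices 0 and 1)
+ # so that constraints can refer to them by name later on.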
153
+ descriptor = Descriptor()
154
+ descriptor.add("output", 0, "Tmax")
155
+ descriptor.add("output", 1, "Tmin")
156
+ ```
157
+
158
+ #### 6. Define your constraints on the network.
159
+
160
+ ```python
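+ # ge, gt and le are assumed to be torch's elementwise comparison functions
+ # (from torch import ge, gt, le), as used by the constraints module shown further down this diff.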
161
+ Constraint.descriptor = descriptor
162
+ Constraint.device = device
163
+ constraints = [
164
+ ScalarConstraint("Tmin", ge, 0),
165
+ ScalarConstraint("Tmin", le, 1),
166
+ ScalarConstraint("Tmax", ge, 0),
167
+ ScalarConstraint("Tmax", le, 1),
168
+ BinaryConstraint("Tmax", gt, "Tmin"),
169
+ ]
170
+ ```
171
+
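+ If the pre-programmed constraint classes do not cover your use case, you can subclass `Constraint` yourself. Below is a minimal sketch of a constraint that keeps the sum of several named outputs below a bound. It is modeled on the abstract interface of the pre-0.3.0 `congrads/constraints.py` that is removed in this release (see the bottom of this diff); the import path and base-class details for 0.3.0 are assumptions, so check the installed `congrads.constraints` package before relying on it. As in step 6, `Constraint.descriptor` must already be set.
+
+ ```python
+ from torch import Tensor, reshape, stack, zeros_like
+ from torch.nn.functional import normalize
+
+ # Assumed import path; in 0.3.0 the base class lives under congrads/constraints/.
+ from congrads.constraints import Constraint
+
+
+ class SumUpperBoundConstraint(Constraint):
+     """Require the sum of the named (normalized) outputs to stay <= bound."""
+
+     def __init__(self, neuron_names, bound, name=None, rescale_factor=1.5):
+         super().__init__(set(neuron_names), name or f"sum_le_{bound}", rescale_factor)
+         self.neuron_names = list(neuron_names)
+         self.bound = bound
+
+     def check_constraint(self, prediction: dict[str, Tensor]) -> Tensor:
+         # One boolean per sample: True where the constraint is violated.
+         values = stack(
+             [
+                 prediction[self.descriptor.neuron_to_layer[n]][
+                     :, self.descriptor.neuron_to_index[n]
+                 ]
+                 for n in self.neuron_names
+             ],
+             dim=1,
+         )
+         return values.sum(dim=1) > self.bound
+
+     def calculate_direction(self, prediction: dict[str, Tensor]) -> dict[str, Tensor]:
+         # Push every involved neuron downwards; one normalized direction per layer.
+         output = {layer: zeros_like(prediction[layer][0]) for layer in self.layers}
+         for n in self.neuron_names:
+             layer = self.descriptor.neuron_to_layer[n]
+             output[layer][self.descriptor.neuron_to_index[n]] = -1.0
+         return {
+             layer: normalize(reshape(direction, [1, -1]), dim=1)
+             for layer, direction in output.items()
+         }
+ ```
+
+ Unlike the built-in `SumConstraint`, this sketch compares the network's raw (normalized) outputs against `bound` and does not denormalize them first.
+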
172
+ #### 7. Instantiate the metric manager and the core, then start training.
173
+
174
+ ```python
175
+ metric_manager = MetricManager()
176
+ core = CongradsCore(
177
+ descriptor,
178
+ constraints,
179
+ loaders,
180
+ network,
181
+ criterion,
182
+ optimizer,
183
+ metric_manager,
184
+ device,
185
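+ # checkpoint_manager is not created in this snippet; it is assumed to be a
+ # checkpoint manager instance set up beforehand (see congrads/checkpoints.py).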
+ checkpoint_manager,
186
+ )
187
+
188
+ core.fit(max_epochs=50)
189
+ ```
190
+
191
+ ## Example Use Cases
192
+
193
+ - **Optimization with Domain Knowledge**: Ensure outputs meet real-world restrictions or safety standards.
194
+ - **Improve Training Process**: Inject domain knowledge in the training stage, increasing learning efficiency.
195
+ - **Physics-Informed Neural Networks (PINNs)**: Coming soon; enforce physical laws as constraints in your models.
196
+
197
+ ## Planned changes / Roadmap
198
+
199
+ - [ ] Add ODE/PDE constraints to support PINNs
200
+ - [x] Rework callback system
201
+ - [ ] Add support for a constraint parser that can interpret equations
202
+
203
+ ## Research
204
+
205
+ If you make use of this package or its concepts in your research, please consider citing the following paper.
206
+
207
+ - Van Baelen, Q., & Karsmakers, P. (2023). **Constraint guided gradient descent: Training with inequality constraints with applications in regression and semantic segmentation.**
208
+ Neurocomputing, 556, 126636. doi:10.1016/j.neucom.2023.126636 <br/>[ [pdf](https://www.sciencedirect.com/science/article/abs/pii/S0925231223007592) | [bibtex](https://raw.githubusercontent.com/ML-KULeuven/congrads/main/docs/_static/VanBaelen2023.bib) ]
209
+
210
+ ## Contributing
211
+
212
+ We welcome contributions to Congrads! Whether you want to report issues, suggest features, or contribute code, feel free to open an issue or pull request.
213
+
214
+ ## License
215
+
216
+ Congrads is licensed under the [3-Clause BSD License](LICENSE). We encourage companies interested in collaborating on a specific topic to contact the authors for more information or to set up joint research projects.
217
+
218
+ ## Contacts
219
+
220
+ Feel free to contact any of the persons below for more information or details about the project. Companies interested in a collaboration or in setting up joint research projects are also encouraged to get in touch with us.
221
+
222
+ - Peter Karsmakers [ [email](mailto:peter.karsmakers@kuleuven.be) | [website](https://www.kuleuven.be/wieiswie/en/person/00047893) ]
223
+ - Quinten Van Baelen [ [email](mailto:quinten.vanbaelen@kuleuven.be) | [website](https://www.kuleuven.be/wieiswie/en/person/00125540) ]
224
+
225
+ ## Contributors
226
+
227
+ Below is a list of people who contributed to the toolbox. Feel free to contact them with any repository- or code-specific questions, suggestions, or remarks.
228
+
229
+ - Wout Rombouts [ [email](mailto:wout.rombouts@kuleuven.be) | [github profile](https://github.com/rombie18) ]
230
+ - Quinten Van Baelen [ [email](mailto:quinten.vanbaelen@kuleuven.be) | [github profile](https://github.com/quinten-vb) ]
231
+
232
+ ---
233
+
234
+ Elevate your neural networks with Congrads! 🚀
congrads-0.3.0.dist-info/RECORD ADDED
@@ -0,0 +1,23 @@
1
+ congrads/__init__.py,sha256=XJKWRteSvmTYawgS1Pon8kWhhd3haKo4RGAsyjEGS8Q,383
2
+ congrads/callbacks/base.py,sha256=OChXls-tndgJQOXNfqavnPywHZHn3N87yLKD4kbHDHk,13714
3
+ congrads/callbacks/registry.py,sha256=KkzjDqMS3CkE__PpGrmAEYwRngqGSdQNE8NVWl7ogeA,3898
4
+ congrads/checkpoints.py,sha256=V79n3mqjB48nbNkBELqKDg9iou0b1vc5eRrlcu8aIA4,7228
5
+ congrads/constraints/base.py,sha256=k9OyPS2A4bP3fSEAEANGuw7zofiWlGIxqb5ows1LQWs,10105
6
+ congrads/constraints/registry.py,sha256=k__RfcXle-qDL9OJ-nfwgL9zeM6-ISwgQyIzmx-lsgc,52302
7
+ congrads/core/batch_runner.py,sha256=emc7smJLDHq0J8_J9t9X0RtqrXaYwOP9mmhlX_M78e4,6967
8
+ congrads/core/congradscore.py,sha256=9ZKUVMB9RbmudtuC-MQcNColBYUjb6XLqb0eISzfrGk,12070
9
+ congrads/core/constraint_engine.py,sha256=UEt-tmtJeJX0Wu3ol17Z0A9hacL0F8oouJUwHgIIoDE,8994
10
+ congrads/core/epoch_runner.py,sha256=l0x3uLXQ5I5o1C63wXgL4_QkhFmXxW-jeejNJK6sf18,4093
11
+ congrads/datasets/registry.py,sha256=RfffRiA7Qijc69cJTBJhItTZ8x9B-p1kXMjvcfEC_nA,31102
12
+ congrads/descriptor.py,sha256=tUHF4vvyNzJP5vpq1xn0uhKnOlAkElwG2R9gG4glHvQ,6914
13
+ congrads/metrics.py,sha256=e52QC8yNKsxAndjC3U4WMUnQ_0GmiSlExKtxRRShHao,4625
14
+ congrads/networks/registry.py,sha256=UPzPDU0wI2zoOEvi697QBSDOtaa3Rc0rgCb-tCxbjak,2252
15
+ congrads/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
16
+ congrads/transformations/base.py,sha256=KZQkloaDGcqAp8EhlUnHL8VfZqSq8OCqp_Iy_a2Nfns,900
17
+ congrads/transformations/registry.py,sha256=p2cLnt3X1bspEPfR7IVd31qPXQimVe_bRu2VhUOIZj0,2607
18
+ congrads/utils/preprocessors.py,sha256=oqW3hV_yoUd-6I-NSoE61e_JDNEPnBJvvvdsuKd9Ekg,18190
19
+ congrads/utils/utility.py,sha256=zvOAVjQjtmsvyuJm0rF0cy_jApR6qluQsFxk9ItalzE,18893
20
+ congrads/utils/validation.py,sha256=Jj8ZJGJrrH9B02cIaScsQpne3zjyarkPldDdT1pejVA,6208
21
+ congrads-0.3.0.dist-info/WHEEL,sha256=eh7sammvW2TypMMMGKgsM83HyA_3qQ5Lgg3ynoecH3M,79
22
+ congrads-0.3.0.dist-info/METADATA,sha256=UCutFzaiD6CaeSr9BqtfiyMS-LsDs6Iwpw8QcXSS2jc,10748
23
+ congrads-0.3.0.dist-info/RECORD,,
congrads-0.3.0.dist-info/WHEEL ADDED
@@ -0,0 +1,4 @@
1
+ Wheel-Version: 1.0
2
+ Generator: uv 0.8.24
3
+ Root-Is-Purelib: true
4
+ Tag: py3-none-any
congrads/constraints.py DELETED
@@ -1,389 +0,0 @@
1
- from abc import ABC, abstractmethod
2
- from numbers import Number
3
- import random
4
- import string
5
- from typing import Callable, Dict
6
- from torch import (
7
- Tensor,
8
- ge,
9
- gt,
10
- lt,
11
- le,
12
- reshape,
13
- stack,
14
- ones,
15
- tensor,
16
- zeros_like,
17
- )
18
- import logging
19
- from torch.nn.functional import normalize
20
-
21
- from .descriptor import Descriptor
22
-
23
-
24
- class Constraint(ABC):
25
-
26
- descriptor: Descriptor = None
27
- device = None
28
-
29
- def __init__(
30
- self,
31
- neurons: set[str],
32
- name: str = None,
33
- rescale_factor: float = 1.5,
34
- ) -> None:
35
-
36
- # Init parent class
37
- super().__init__()
38
-
39
- # Init object variables
40
- self.neurons = neurons
41
- self.rescale_factor = rescale_factor
42
-
43
- # Perform checks
44
- if rescale_factor <= 1:
45
- logging.warning(
46
- f"Rescale factor for constraint {name} is <= 1. The network will favor general loss over the constraint-adjusted loss. Is this intended behaviour? Normally, the loss should always be larger than 1."
47
- )
48
-
49
- # If no constraint_name is set, generate one based on the class name and a random suffix
50
- if name:
51
- self.name = name
52
- else:
53
- random_suffix = "".join(
54
- random.choices(string.ascii_uppercase + string.digits, k=6)
55
- )
56
- self.name = f"{self.__class__.__name__}_{random_suffix}"
57
- logging.warning(f"Name for constraint is not set. Using {self.name}.")
58
-
59
- # If rescale factor is not larger than 1, warn user and adjust
60
- if not rescale_factor > 1:
61
- self.rescale_factor = abs(rescale_factor) + 1.5
62
- logging.warning(
63
- f"Rescale factor for constraint {name} is < 1, adjusted value {rescale_factor} to {self.rescale_factor}."
64
- )
65
- else:
66
- self.rescale_factor = rescale_factor
67
-
68
- # Infer layers from descriptor and neurons
69
- self.layers = set()
70
- for neuron in self.neurons:
71
- if neuron not in self.descriptor.neuron_to_layer.keys():
72
- raise ValueError(
73
- f'The neuron name {neuron} used with constraint {self.name} is not defined in the descriptor. Please add it to the correct layer using descriptor.add("layer", ...).'
74
- )
75
-
76
- self.layers.add(self.descriptor.neuron_to_layer[neuron])
77
-
78
- # TODO only denormalize if required for efficiency
79
- def _denormalize(self, input: Tensor, neuron_names: list[str]):
80
- # Extract min and max for each neuron
81
- min_values = tensor(
82
- [self.descriptor.neuron_to_minmax[name][0] for name in neuron_names],
83
- device=input.device,
84
- )
85
- max_values = tensor(
86
- [self.descriptor.neuron_to_minmax[name][1] for name in neuron_names],
87
- device=input.device,
88
- )
89
-
90
- # Apply vectorized denormalization
91
- return input * (max_values - min_values) + min_values
92
-
93
- @abstractmethod
94
- def check_constraint(self, prediction: dict[str, Tensor]) -> Tensor:
95
- raise NotImplementedError
96
-
97
- @abstractmethod
98
- def calculate_direction(self, prediction: dict[str, Tensor]) -> Dict[str, Tensor]:
99
- raise NotImplementedError
100
-
101
-
102
- class ScalarConstraint(Constraint):
103
-
104
- def __init__(
105
- self,
106
- neuron_name: str,
107
- comparator: Callable[[Tensor, Number], Tensor],
108
- scalar: Number,
109
- name: str = None,
110
- rescale_factor: float = 1.5,
111
- ) -> None:
112
-
113
- # Compose constraint name
114
- name = f"{neuron_name}_{comparator.__name__}_{str(scalar)}"
115
-
116
- # Init parent class
117
- super().__init__({neuron_name}, name, rescale_factor)
118
-
119
- # Init variables
120
- self.comparator = comparator
121
- self.scalar = scalar
122
-
123
- # Get layer name and feature index from neuron_name
124
- self.layer = self.descriptor.neuron_to_layer[neuron_name]
125
- self.index = self.descriptor.neuron_to_index[neuron_name]
126
-
127
- # If comparator function is not supported, raise error
128
- if comparator not in [ge, le, gt, lt]:
129
- raise ValueError(
130
- f"Comparator {str(comparator)} used for constraint {name} is not supported. Only ge, le, gt, lt are allowed."
131
- )
132
-
133
- # Calculate directions based on constraint operator
134
- if self.comparator in [lt, le]:
135
- self.direction = -1
136
- elif self.comparator in [gt, ge]:
137
- self.direction = 1
138
-
139
- def check_constraint(self, prediction: dict[str, Tensor]) -> Tensor:
140
-
141
- return ~self.comparator(prediction[self.layer][:, self.index], self.scalar)
142
-
143
- def calculate_direction(self, prediction: dict[str, Tensor]) -> Dict[str, Tensor]:
144
- # NOTE currently only works for dense layers due to neuron to index translation
145
-
146
- output = {}
147
-
148
- for layer in self.layers:
149
- output[layer] = zeros_like(prediction[layer][0])
150
-
151
- output[self.layer][self.index] = self.direction
152
-
153
- for layer in self.layers:
154
- output[layer] = normalize(reshape(output[layer], [1, -1]), dim=1)
155
-
156
- return output
157
-
158
-
159
- class BinaryConstraint(Constraint):
160
-
161
- def __init__(
162
- self,
163
- neuron_name_left: str,
164
- comparator: Callable[[Tensor, Number], Tensor],
165
- neuron_name_right: str,
166
- name: str = None,
167
- rescale_factor: float = 1.5,
168
- ) -> None:
169
-
170
- # Compose constraint name
171
- name = f"{neuron_name_left}_{comparator.__name__}_{neuron_name_right}"
172
-
173
- # Init parent class
174
- super().__init__(
175
- {neuron_name_left, neuron_name_right},
176
- name,
177
- rescale_factor,
178
- )
179
-
180
- # Init variables
181
- self.comparator = comparator
182
-
183
- # Get layer name and feature index from neuron_name
184
- self.layer_left = self.descriptor.neuron_to_layer[neuron_name_left]
185
- self.layer_right = self.descriptor.neuron_to_layer[neuron_name_right]
186
- self.index_left = self.descriptor.neuron_to_index[neuron_name_left]
187
- self.index_right = self.descriptor.neuron_to_index[neuron_name_right]
188
-
189
- # If comparator function is not supported, raise error
190
- if comparator not in [ge, le, gt, lt]:
191
- raise RuntimeError(
192
- f"Comparator {str(comparator)} used for constraint {name} is not supported. Only ge, le, gt, lt are allowed."
193
- )
194
-
195
- # Calculate directions based on constraint operator
196
- if self.comparator in [lt, le]:
197
- self.direction_left = -1
198
- self.direction_right = 1
199
- else:
200
- self.direction_left = 1
201
- self.direction_right = -1
202
-
203
- def check_constraint(self, prediction: dict[str, Tensor]) -> Tensor:
204
-
205
- return ~self.comparator(
206
- prediction[self.layer_left][:, self.index_left],
207
- prediction[self.layer_right][:, self.index_right],
208
- )
209
-
210
- def calculate_direction(self, prediction: dict[str, Tensor]) -> Dict[str, Tensor]:
211
- # NOTE currently only works for dense layers due to neuron to index translation
212
-
213
- output = {}
214
-
215
- for layer in self.layers:
216
- output[layer] = zeros_like(prediction[layer][0])
217
-
218
- output[self.layer_left][self.index_left] = self.direction_left
219
- output[self.layer_right][self.index_right] = self.direction_right
220
-
221
- for layer in self.layers:
222
- output[layer] = normalize(reshape(output[layer], [1, -1]), dim=1)
223
-
224
- return output
225
-
226
-
227
- class SumConstraint(Constraint):
228
- def __init__(
229
- self,
230
- neuron_names_left: list[str],
231
- comparator: Callable[[Tensor, Number], Tensor],
232
- neuron_names_right: list[str],
233
- weights_left: list[float] = None,
234
- weights_right: list[float] = None,
235
- name: str = None,
236
- rescale_factor: float = 1.5,
237
- ) -> None:
238
-
239
- # Init parent class
240
- neuron_names = set(neuron_names_left) | set(neuron_names_right)
241
- super().__init__(neuron_names, name, rescale_factor)
242
-
243
- # Init variables
244
- self.comparator = comparator
245
- self.neuron_names_left = neuron_names_left
246
- self.neuron_names_right = neuron_names_right
247
-
248
- # If comparator function is not supported, raise error
249
- if comparator not in [ge, le, gt, lt]:
250
- raise ValueError(
251
- f"Comparator {str(comparator)} used for constraint {name} is not supported. Only ge, le, gt, lt are allowed."
252
- )
253
-
254
- # If feature list dimensions don't match weight list dimensions, raise error
255
- if weights_left and (len(neuron_names_left) != len(weights_left)):
256
- raise ValueError(
257
- "The dimensions of neuron_names_left don't match with the dimensions of weights_left."
258
- )
259
- if weights_right and (len(neuron_names_right) != len(weights_right)):
260
- raise ValueError(
261
- "The dimensions of neuron_names_right don't match with the dimensions of weights_right."
262
- )
263
-
264
- # If weights are provided for summation, transform them to Tensors
265
- if weights_left:
266
- self.weights_left = tensor(weights_left, device=self.device)
267
- else:
268
- self.weights_left = ones(len(neuron_names_left), device=self.device)
269
- if weights_right:
270
- self.weights_right = tensor(weights_right, device=self.device)
271
- else:
272
- self.weights_right = ones(len(neuron_names_right), device=self.device)
273
-
274
- # Calculate directions based on constraint operator
275
- if self.comparator in [lt, le]:
276
- self.direction_left = -1
277
- self.direction_right = 1
278
- else:
279
- self.direction_left = 1
280
- self.direction_right = -1
281
-
282
- def check_constraint(self, prediction: dict[str, Tensor]) -> Tensor:
283
-
284
- def compute_weighted_sum(neuron_names: list[str], weights: tensor) -> tensor:
285
- layers = [
286
- self.descriptor.neuron_to_layer[neuron_name]
287
- for neuron_name in neuron_names
288
- ]
289
- indices = [
290
- self.descriptor.neuron_to_index[neuron_name]
291
- for neuron_name in neuron_names
292
- ]
293
-
294
- # Extract predictions for all neurons and apply weights in bulk
295
- predictions = stack(
296
- [prediction[layer][:, index] for layer, index in zip(layers, indices)],
297
- dim=1,
298
- )
299
-
300
- # Denormalize if required
301
- predictions_denorm = self._denormalize(predictions, neuron_names)
302
-
303
- # Calculate weighted sum
304
- weighted_sum = (predictions_denorm * weights.unsqueeze(0)).sum(dim=1)
305
-
306
- return weighted_sum
307
-
308
- weighted_sum_left = compute_weighted_sum(
309
- self.neuron_names_left, self.weights_left
310
- )
311
- weighted_sum_right = compute_weighted_sum(
312
- self.neuron_names_right, self.weights_right
313
- )
314
-
315
- # Apply the comparator and calculate the result
316
- return ~self.comparator(weighted_sum_left, weighted_sum_right)
317
-
318
- def calculate_direction(self, prediction: dict[str, Tensor]) -> Dict[str, Tensor]:
319
- # NOTE currently only works for dense layers due to neuron to index translation
320
-
321
- output = {}
322
-
323
- for layer in self.layers:
324
- output[layer] = zeros_like(prediction[layer][0])
325
-
326
- for neuron_name_left in self.neuron_names_left:
327
- layer = self.descriptor.neuron_to_layer[neuron_name_left]
328
- index = self.descriptor.neuron_to_index[neuron_name_left]
329
- output[layer][index] = self.direction_left
330
-
331
- for neuron_name_right in self.neuron_names_right:
332
- layer = self.descriptor.neuron_to_layer[neuron_name_right]
333
- index = self.descriptor.neuron_to_index[neuron_name_right]
334
- output[layer][index] = self.direction_right
335
-
336
- for layer in self.layers:
337
- output[layer] = normalize(reshape(output[layer], [1, -1]), dim=1)
338
-
339
- return output
340
-
341
-
342
- # class MonotonicityConstraint(Constraint):
343
- # # TODO docstring
344
-
345
- # def __init__(
346
- # self,
347
- # neuron_name: str,
348
- # name: str = None,
349
- # descriptor: Descriptor = None,
350
- # rescale_factor: float = 1.5,
351
- # ) -> None:
352
-
353
- # # Compose constraint name
354
- # name = f"Monotonicity_{neuron_name}"
355
-
356
- # # Init parent class
357
- # super().__init__({neuron_name}, name, rescale_factor)
358
-
359
- # # Init variables
360
- # if descriptor != None:
361
- # self.descriptor = descriptor
362
- # self.run_init_descriptor()
363
-
364
- # # Get layer name and feature index from neuron_name
365
- # self.layer = self.descriptor.neuron_to_layer[neuron_name]
366
- # self.index = self.descriptor.neuron_to_index[neuron_name]
367
-
368
- # def check_constraint(self, prediction: dict[str, Tensor]) -> Dict[str, Tensor]:
369
- # # Check if values for column in batch are only increasing
370
- # result = ~ge(
371
- # diff(
372
- # prediction[self.layer][:, self.index],
373
- # prepend=zeros_like(
374
- # prediction[self.layer][:, self.index][:1],
375
- # device=prediction[self.layer].device,
376
- # ),
377
- # ),
378
- # 0,
379
- # )
380
-
381
- # return {self.layer: result}
382
-
383
- # def calculate_direction(self, prediction: dict[str, Tensor]) -> Dict[str, Tensor]:
384
- # # TODO implement
385
-
386
- # output = {self.layer: zeros_like(prediction[self.layer][0])}
387
- # output[self.layer][self.index] = 1
388
-
389
- # return output