signal-grad-cam 0.0.2__tar.gz → 0.1.1__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of signal-grad-cam might be problematic.

@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: signal_grad_cam
- Version: 0.0.2
+ Version: 0.1.1
  Summary: SignalGrad-CAM aims at generalising Grad-CAM to one-dimensional applications, while enhancing usability and efficiency.
  Home-page: https://github.com/samuelepe11/signal_grad_cam
  Author: Samuele Pe
@@ -63,7 +63,7 @@ Requires-Dist: tensorflow
 
  <p align="justify">Deep learning models have demonstrated remarkable performance across various domains; however, their black-box nature hinders interpretability and trust. As a result, the demand for explanation algorithms has grown, driving advancements in the field of eXplainable AI (XAI). However, relatively few efforts have been dedicated to developing interpretability methods for signal-based models. We introduce SignalGrad-CAM (SGrad-CAM), a versatile and efficient interpretability tool that extends the principles of Grad-CAM to both 1D- and 2D-convolutional neural networks for signal processing. SGrad-CAM is designed to interpret models for either image or signal elaboration, supporting both PyTorch and TensorFlow/Keras frameworks, and provides diagnostic and visualization tools to enhance model transparency. The package is also designed for batch processing, ensuring efficiency even for large-scale applications, while maintaining a simple and user-friendly structure.</p>
 
- <p align="justify">**Keywords:** *eXplainable AI, explanations, local explanation, fidelity, interpretability, transparency, trustworthy AI, feature importance, saliency maps, CAM, Grad-CAM, black-box, deep learning, CNN, signals, time series*</p>
+ <p align="justify"><i><b>Keywords:</b> eXplainable AI, explanations, local explanation, fidelity, interpretability, transparency, trustworthy AI, feature importance, saliency maps, CAM, Grad-CAM, black-box, deep learning, CNN, signals, time series</i></p>
 
  <p align="right"><a href="#top">Back To Top</a></p>
 
@@ -114,13 +114,13 @@ class_labels = ["Class 1", "Class 2", "Class 3"]
  cam_builder = TorchCamBuilder(model=model, transform_fn=preprocess_fn, class_names=class_labels, time_axs=1)
  ```
 
- <p align="justify">Now, you can use the `cam_builder` object to generate class activation maps from a list of input data using the *`get_cams`* method. You can specify multiple algorithm names, target layers, or target classes as needed.
+ <p align="justify">Now, you can use the `cam_builder` object to generate class activation maps from a list of input data using the <i>`get_cams`</i> method. You can specify multiple algorithm names, target layers, or target classes as needed.
 
  The function's parameters allow users to customize the visualization (e.g., setting axis ticks or labels). If a result directory path is provided, the output is stored as a '.png' file; otherwise, it is displayed. In all cases, the function returns a dictionary containing the requested CAMs, along with the model's predictions and importance score ranges.
 
  Finally, several visualization tools are available to gain deeper insights into the model's behavior. The display can be customized by adjusting line width, point extension, aspect ratio, and more:
- * *`single_channel_output_display`* plots the selected channels using a color scheme that reflects the importance of each input feature.
- * *`overlapped_output_display`* superimposes CAMs onto the corresponding input in an image-like format, allowing users to capture the overall distribution of input importance.
+ * <i>`single_channel_output_display`</i> plots the selected channels using a color scheme that reflects the importance of each input feature.
+ * <i>`overlapped_output_display`</i> superimposes CAMs onto the corresponding input in an image-like format, allowing users to capture the overall distribution of input importance.
  </p>
 
  ```python
@@ -138,7 +138,7 @@ cam_dict, predicted_probs_dict, score_ranges_dict = cam_builder.get_cam(data_lis
 
  # Visualize single channel importance
  selected_channels_indices = [0, 2, 10]
- cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs=predicted_probs_dict,
+ cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
                                            cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
                                            target_layers="target_layer_name", desired_channels=selected_channels_indices,
                                            grid_instructions=(1, len(selected_channels_indices)), bar_ranges=score_ranges_dict,
@@ -146,7 +146,7 @@ cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_
                                            axes_names=("Time (s)", "Amplitude (mV)"))
 
  # Visualize overall importance
- cam_builder.overlapped_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs=predicted_probs_dict,
+ cam_builder.overlapped_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
                                        cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
                                        target_layers="target_layer_name", fig_size=(20 * len(data_list), 20),
                                        grid_instructions=(len(data_list), 1), bar_ranges=score_ranges_dict, data_names=item_names,
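The README hunks above capture a small breaking change: the display helpers now take the keyword `predicted_probs_dict` instead of `predicted_probs`. A hedged migration sketch, reusing the README's own placeholder variables (the remaining keyword arguments are assumed unchanged between versions):

```python
# Sketch: a 0.0.2-era call needs only the renamed keyword to run on 0.1.1.
cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_labels_list,
                                          predicted_probs_dict=predicted_probs_dict,  # was predicted_probs= in 0.0.2
                                          cams_dict=cam_dict, explainer_types="Grad-CAM",
                                          target_classes=target_classes, target_layers="target_layer_name")
```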
@@ -1,206 +1,206 @@
- <div id="top"></div>
-
- [![Contributors][contributors-shield]][contributors-url] [![Forks][forks-shield]][forks-url] [![Stargazers][stars-shield]][stars-url] [![Issues][issues-shield]][issues-url] [![MIT License][license-shield]][license-url]
-
-
- <br />
- <div align="center">
- <h1>
- SignalGrad-CAM
- </h1>
-
- <h3 align="center">SignalGrad-CAM aims at generalising Grad-CAM to one-dimensional applications, while enhancing usability and efficiency.</h3>
-
- <p align="center">
- <a href="https://github.com/bmi-labmedinfo/signal_grad_cam"><strong>Explore the docs</strong></a>
- <br />
- <br />
- <a href="https://github.com/bmi-labmedinfo/signal_grad_cam/issues">Report Bug or Request Feature</a>
- </p>
- </div>
-
-
-
- <!-- TABLE OF CONTENTS -->
- <details>
- <summary>Table of Contents</summary>
- <ol>
- <li><a href="#about-the-project">About The Project</a></li>
- <li><a href="#installation">Installation</a></li>
- <li><a href="#usage">Usage</a></li>
- <li><a href="#publications">Publications</a></li>
- <li><a href="#contacts-and-useful-links">Contacts And Useful Links</a></li>
- <li><a href="#license">License</a></li>
- </ol>
- </details>
-
-
-
- <!-- ABOUT THE PROJECT -->
- ## About The Project
-
- <p align="justify">Deep learning models have demonstrated remarkable performance across various domains; however, their black-box nature hinders interpretability and trust. As a result, the demand for explanation algorithms has grown, driving advancements in the field of eXplainable AI (XAI). However, relatively few efforts have been dedicated to developing interpretability methods for signal-based models. We introduce SignalGrad-CAM (SGrad-CAM), a versatile and efficient interpretability tool that extends the principles of Grad-CAM to both 1D- and 2D-convolutional neural networks for signal processing. SGrad-CAM is designed to interpret models for either image or signal elaboration, supporting both PyTorch and TensorFlow/Keras frameworks, and provides diagnostic and visualization tools to enhance model transparency. The package is also designed for batch processing, ensuring efficiency even for large-scale applications, while maintaining a simple and user-friendly structure.</p>
-
- <p align="justify">**Keywords:** *eXplainable AI, explanations, local explanation, fidelity, interpretability, transparency, trustworthy AI, feature importance, saliency maps, CAM, Grad-CAM, black-box, deep learning, CNN, signals, time series*</p>
-
- <p align="right"><a href="#top">Back To Top</a></p>
-
- <!-- INSTALLATION -->
- ## Installation
-
- 1. Make sure you have the latest version of pip installed
- ```sh
- pip install --upgrade pip
- ```
- 2. Install SignalGrad-CAM through pip
- ```sh
- pip install signal-grad-cam
- ```
-
- <p align="right"><a href="#top">Back To Top</a></p>
-
- <!-- USAGE EXAMPLES -->
- ## Usage
- <p align="justify">
- Here's a basic example that illustrates SignalGrad-CAM common usage.
-
- First, train a classifier on the data or select an already trained model, then instantiate `TorchCamBuilder` (if you are working with a PyTorch model) or `TfCamBuilder` (if the model is built in TensorFlow/Keras).
-
- Besides the model, `TorchCamBuilder` requires additional information to function effectively. For example, you may provide a list of class labels, a preprocessing function, or an index indicating which dimension corresponds to time. These attributes allow SignalGrad-CAM to be applied to a wide range of models.
-
- The constructor displays a list of available Grad-CAM algorithms for explanation, as well as a list of layers that can be used as target for the algorithm. It also identifies any Sigmoid/Softmax layer, since its presence or absence will slightly change the algorithm's workflow.
- </p>
-
- ```python
- import numpy as np
- import torch
- from signal_grad_cam import TorchCamBuilder
-
- # Load model
- model = YourTorchModelConstructor()
- model.load_state_dict(torch.load("path_to_your_stored_model.pt")
- model.eval()
-
- # Introduce useful information
- def preprocess_fn(signal):
-     signal = torch.from_numpy(signal).float()
-     # Extra preprocessing: data resizing, reshaping, normalization...
-     return signal
- class_labels = ["Class 1", "Class 2", "Class 3"]
-
- # Define the CAM builder
- cam_builder = TorchCamBuilder(model=model, transform_fc=preprocess_fc, class_names=class_labels, time_axs=1)
- ```
-
- <p align="justify">Now, you can use the `cam_builder` object to generate class activation maps from a list of input data using the *`get_cams`* method. You can specify multiple algorithm names, target layers, or target classes as needed.
-
- The function's attributes allow users to customize the visualization (e.g., setting axis ticks or labels). If a result directory path is provided, the output is stored as a '.png' file; otherwise, it is displayed. In all cases, the function returns a dictionary containing the requested CAMs, along with the model's predictions and importance score ranges.
-
- Finally, several visualization tools are available to gain deeper insights into the model's behavior. The display can be customized by adjusting line width, point extension, aspect ratio, and more:
- * *`single_channel_output_display`* plots the selected channels using a color scheme that reflects the importance of each input feature.
- * *`overlapped_output_display`* superimposes CAMs onto the corresponding input in an image-like format, allowing users to capture the overall distribution of input importance.
- </p>
-
- ```python
- # Prepare data
- data_list = [x for x in your_numpy_data_x[:2]]
- data_labels_list = [1, 0]
- item_names = ["Item 1", "Item 2"]
- target_classes = [0, 1]
-
- # Create CAMs
- cam_dict, predicted_probs_dict, score_ranges_dict = cam_builder.get_cam(data_list=data_list, data_labels=data_labels_list,
-                                                                         target_classes=target_classes, explainer_types="Grad-CAM",
-                                                                         target_layer="conv1d_layer_1", softmax_final=True,
-                                                                         data_sampling_freq=25, dt=1, axes_names=("Time (s)", "Channels"))
-
- # Visualize single channel importance
- selected_channels_indices = [0, 2, 10]
- cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs=predicted_probs_dict,
-                                           cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
-                                           target_layers="target_layer_name", desired_channels=selected_channels_indices,
-                                           grid_instructions=(1, len(selected_channels_indices), bar_ranges=score_ranges_dict,
-                                           results_dir="path_to_your_result_directoory", data_sampling_freq=25, dt=1, line_width=0.5,
-                                           axes_names=("Time (s)", "Amplitude (mV)"))
-
- # Visualize overall importance
- cam_builder.overlapped_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs=predicted_probs_dict,
-                                       cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
-                                       target_layers="target_layer_name", fig_size=(20 * len(your_data_X), 20),
-                                       grid_instructions=(len(your_data_X), 1), bar_ranges=score_ranges_dict, data_names=item_names
-                                       results_dir="path_to_your_result_directoory", data_sampling_freq=25, dt=1)
- ```
-
- You can also check the python scripts [here](https://github.com/bmi-labmedinfo/signal_grad_cam/examples).
-
- See the [open issues](https://github.com/bmi-labmedinfo/signal_grad_cam/issues) for a full list of proposed features (and known issues).
-
- <p align="right"><a href="#top">Back To Top</a></p>
-
-
- If you use the SignalGrad-CAM software for your projects, please cite it as:
-
- ```
- @software{Pe_SignalGrad_CAM_2025,
- author = {Pe, Samuele and Buonocore, Tommaso Mario and Giovanna, Nicora and Enea, Parimbelli},
- title = {{SignalGrad-CAM}},
- url = {https://github.com/bmi-labmedinfo/signal_grad_cam},
- version = {0.0.1},
- year = {2025}
- }
- ```
-
- <p align="right"><a href="#top">Back To Top</a></p>
-
- <!-- CONTACTS AND USEFUL LINKS -->
- ## Contacts and Useful Links
-
- * **Repository maintainer**: Samuele Pe [![Gmail][gmail-shield]][gmail-url] [![LinkedIn][linkedin-shield]][linkedin-url]
-
- * **Project Link**: [https://github.com/bmi-labmedinfo/signal_grad_cam](https://github.com/bmi-labmedinfo/signal_grad_cam)
-
- * **Package Link**: [https://pypi.org/project/signal-grad-cam/](https://pypi.org/project/signal-grad-cam/)
-
- <p align="right"><a href="#top">Back To Top</a></p>
-
- <!-- LICENSE -->
- ## License
-
- Distributed under MIT License. See `LICENSE` for more information.
-
-
- <p align="right"><a href="#top">Back To Top</a></p>
-
- <!-- MARKDOWN LINKS -->
-
- [contributors-shield]: https://img.shields.io/github/contributors/bmi-labmedinfo/signal_grad_cam.svg?style=for-the-badge
-
- [contributors-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/graphs/contributors
-
- [status-shield]: https://img.shields.io/badge/Status-pre--release-blue
-
- [status-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/releases
-
- [forks-shield]: https://img.shields.io/github/forks/bmi-labmedinfo/signal_grad_cam.svg?style=for-the-badge
-
- [forks-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/network/members
-
- [stars-shield]: https://img.shields.io/github/stars/bmi-labmedinfo/signal_grad_cam.svg?style=for-the-badge
-
- [stars-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/stargazers
-
- [issues-shield]: https://img.shields.io/github/issues/bmi-labmedinfo/signal_grad_cam.svg?style=for-the-badge
-
- [issues-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/issues
-
- [license-shield]: https://img.shields.io/github/license/bmi-labmedinfo/signal_grad_cam.svg?style=for-the-badge
-
- [license-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/LICENSE
-
- [linkedin-shield]: https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white
-
- [linkedin-url]: https://linkedin.com/in/samuele-pe-818bbb307
-
- [gmail-shield]: https://img.shields.io/badge/Email-D14836?style=for-the-badge&logo=gmail&logoColor=white
-
- [gmail-url]: mailto:samuele.pe01@universitadipavia.it
+ <div id="top"></div>
+
+ [![Contributors][contributors-shield]][contributors-url] [![Forks][forks-shield]][forks-url] [![Stargazers][stars-shield]][stars-url] [![Issues][issues-shield]][issues-url] [![MIT License][license-shield]][license-url]
+
+
+ <br />
+ <div align="center">
+ <h1>
+ SignalGrad-CAM
+ </h1>
+
+ <h3 align="center">SignalGrad-CAM aims at generalising Grad-CAM to one-dimensional applications, while enhancing usability and efficiency.</h3>
+
+ <p align="center">
+ <a href="https://github.com/bmi-labmedinfo/signal_grad_cam"><strong>Explore the docs</strong></a>
+ <br />
+ <br />
+ <a href="https://github.com/bmi-labmedinfo/signal_grad_cam/issues">Report Bug or Request Feature</a>
+ </p>
+ </div>
+
+
+
+ <!-- TABLE OF CONTENTS -->
+ <details>
+ <summary>Table of Contents</summary>
+ <ol>
+ <li><a href="#about-the-project">About The Project</a></li>
+ <li><a href="#installation">Installation</a></li>
+ <li><a href="#usage">Usage</a></li>
+ <li><a href="#publications">Publications</a></li>
+ <li><a href="#contacts-and-useful-links">Contacts And Useful Links</a></li>
+ <li><a href="#license">License</a></li>
+ </ol>
+ </details>
+
+
+
+ <!-- ABOUT THE PROJECT -->
+ ## About The Project
+
+ <p align="justify">Deep learning models have demonstrated remarkable performance across various domains; however, their black-box nature hinders interpretability and trust. As a result, the demand for explanation algorithms has grown, driving advancements in the field of eXplainable AI (XAI). However, relatively few efforts have been dedicated to developing interpretability methods for signal-based models. We introduce SignalGrad-CAM (SGrad-CAM), a versatile and efficient interpretability tool that extends the principles of Grad-CAM to both 1D- and 2D-convolutional neural networks for signal processing. SGrad-CAM is designed to interpret models for either image or signal elaboration, supporting both PyTorch and TensorFlow/Keras frameworks, and provides diagnostic and visualization tools to enhance model transparency. The package is also designed for batch processing, ensuring efficiency even for large-scale applications, while maintaining a simple and user-friendly structure.</p>
+
+ <p align="justify"><i><b>Keywords:</b> eXplainable AI, explanations, local explanation, fidelity, interpretability, transparency, trustworthy AI, feature importance, saliency maps, CAM, Grad-CAM, black-box, deep learning, CNN, signals, time series</i></p>
+
+ <p align="right"><a href="#top">Back To Top</a></p>
+
+ <!-- INSTALLATION -->
+ ## Installation
+
+ 1. Make sure you have the latest version of pip installed
+ ```sh
+ pip install --upgrade pip
+ ```
+ 2. Install SignalGrad-CAM through pip
+ ```sh
+ pip install signal-grad-cam
+ ```
+
+ <p align="right"><a href="#top">Back To Top</a></p>
+
+ <!-- USAGE EXAMPLES -->
+ ## Usage
+ <p align="justify">
+ Here's a basic example that illustrates SignalGrad-CAM's common usage.
+
+ First, train a classifier on the data or select an already trained model, then instantiate `TorchCamBuilder` (if you are working with a PyTorch model) or `TfCamBuilder` (if the model is built in TensorFlow/Keras).
+
+ Besides the model, `TorchCamBuilder` requires additional information to function effectively. For example, you may provide a list of class labels, a preprocessing function, or an index indicating which dimension corresponds to time. These attributes allow SignalGrad-CAM to be applied to a wide range of models.
+
+ The constructor displays a list of available Grad-CAM algorithms for explanation, as well as a list of layers that can be used as targets for the algorithm. It also identifies any Sigmoid/Softmax layer, since its presence or absence will slightly change the algorithm's workflow.
+ </p>
+
+ ```python
+ import numpy as np
+ import torch
+ from signal_grad_cam import TorchCamBuilder
+
+ # Load model
+ model = YourTorchModelConstructor()
+ model.load_state_dict(torch.load("path_to_your_stored_model.pt"))
+ model.eval()
+
+ # Introduce useful information
+ def preprocess_fn(signal):
+     signal = torch.from_numpy(signal).float()
+     # Extra preprocessing: data resizing, reshaping, normalization...
+     return signal
+ class_labels = ["Class 1", "Class 2", "Class 3"]
+
+ # Define the CAM builder
+ cam_builder = TorchCamBuilder(model=model, transform_fn=preprocess_fn, class_names=class_labels, time_axs=1)
+ ```
+
+ <p align="justify">Now, you can use the `cam_builder` object to generate class activation maps from a list of input data using the <i>`get_cams`</i> method. You can specify multiple algorithm names, target layers, or target classes as needed.
+
+ The function's parameters allow users to customize the visualization (e.g., setting axis ticks or labels). If a result directory path is provided, the output is stored as a '.png' file; otherwise, it is displayed. In all cases, the function returns a dictionary containing the requested CAMs, along with the model's predictions and importance score ranges.
+
+ Finally, several visualization tools are available to gain deeper insights into the model's behavior. The display can be customized by adjusting line width, point extension, aspect ratio, and more:
+ * <i>`single_channel_output_display`</i> plots the selected channels using a color scheme that reflects the importance of each input feature.
+ * <i>`overlapped_output_display`</i> superimposes CAMs onto the corresponding input in an image-like format, allowing users to capture the overall distribution of input importance.
+ </p>
+
+ ```python
+ # Prepare data
+ data_list = [x for x in your_numpy_data_x[:2]]
+ data_labels_list = [1, 0]
+ item_names = ["Item 1", "Item 2"]
+ target_classes = [0, 1]
+
+ # Create CAMs
+ cam_dict, predicted_probs_dict, score_ranges_dict = cam_builder.get_cam(data_list=data_list, data_labels=data_labels_list,
+                                                                         target_classes=target_classes, explainer_types="Grad-CAM",
+                                                                         target_layer="conv1d_layer_1", softmax_final=True,
+                                                                         data_sampling_freq=25, dt=1, axes_names=("Time (s)", "Channels"))
+
+ # Visualize single channel importance
+ selected_channels_indices = [0, 2, 10]
+ cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
+                                           cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
+                                           target_layers="target_layer_name", desired_channels=selected_channels_indices,
+                                           grid_instructions=(1, len(selected_channels_indices)), bar_ranges=score_ranges_dict,
+                                           results_dir="path_to_your_result_directory", data_sampling_freq=25, dt=1, line_width=0.5,
+                                           axes_names=("Time (s)", "Amplitude (mV)"))
+
+ # Visualize overall importance
+ cam_builder.overlapped_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
+                                       cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
+                                       target_layers="target_layer_name", fig_size=(20 * len(data_list), 20),
+                                       grid_instructions=(len(data_list), 1), bar_ranges=score_ranges_dict, data_names=item_names,
+                                       results_dir="path_to_your_result_directory", data_sampling_freq=25, dt=1)
+ ```
+
+ You can also check the Python scripts [here](https://github.com/bmi-labmedinfo/signal_grad_cam/examples).
+
+ See the [open issues](https://github.com/bmi-labmedinfo/signal_grad_cam/issues) for a full list of proposed features (and known issues).
+
+ <p align="right"><a href="#top">Back To Top</a></p>
+
+
+ If you use the SignalGrad-CAM software for your projects, please cite it as:
+
+ ```
+ @software{Pe_SignalGrad_CAM_2025,
+ author = {Pe, Samuele and Buonocore, Tommaso Mario and Nicora, Giovanna and Parimbelli, Enea},
+ title = {{SignalGrad-CAM}},
+ url = {https://github.com/bmi-labmedinfo/signal_grad_cam},
+ version = {0.0.1},
+ year = {2025}
+ }
+ ```
+
+ <p align="right"><a href="#top">Back To Top</a></p>
+
+ <!-- CONTACTS AND USEFUL LINKS -->
+ ## Contacts and Useful Links
+
+ * **Repository maintainer**: Samuele Pe [![Gmail][gmail-shield]][gmail-url] [![LinkedIn][linkedin-shield]][linkedin-url]
+
+ * **Project Link**: [https://github.com/bmi-labmedinfo/signal_grad_cam](https://github.com/bmi-labmedinfo/signal_grad_cam)
+
+ * **Package Link**: [https://pypi.org/project/signal-grad-cam/](https://pypi.org/project/signal-grad-cam/)
+
+ <p align="right"><a href="#top">Back To Top</a></p>
+
+ <!-- LICENSE -->
+ ## License
+
+ Distributed under the MIT License. See `LICENSE` for more information.
+
+
+ <p align="right"><a href="#top">Back To Top</a></p>
+
+ <!-- MARKDOWN LINKS -->
+
+ [contributors-shield]: https://img.shields.io/github/contributors/bmi-labmedinfo/signal_grad_cam.svg?style=for-the-badge
+
+ [contributors-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/graphs/contributors
+
+ [status-shield]: https://img.shields.io/badge/Status-pre--release-blue
+
+ [status-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/releases
+
+ [forks-shield]: https://img.shields.io/github/forks/bmi-labmedinfo/signal_grad_cam.svg?style=for-the-badge
+
+ [forks-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/network/members
+
+ [stars-shield]: https://img.shields.io/github/stars/bmi-labmedinfo/signal_grad_cam.svg?style=for-the-badge
+
+ [stars-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/stargazers
+
+ [issues-shield]: https://img.shields.io/github/issues/bmi-labmedinfo/signal_grad_cam.svg?style=for-the-badge
+
+ [issues-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/issues
+
+ [license-shield]: https://img.shields.io/github/license/bmi-labmedinfo/signal_grad_cam.svg?style=for-the-badge
+
+ [license-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/LICENSE
+
+ [linkedin-shield]: https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white
+
+ [linkedin-url]: https://linkedin.com/in/samuele-pe-818bbb307
+
+ [gmail-shield]: https://img.shields.io/badge/Email-D14836?style=for-the-badge&logo=gmail&logoColor=white
+
+ [gmail-url]: mailto:samuele.pe01@universitadipavia.it
@@ -5,7 +5,7 @@ with open("README.md", "r") as f:
 
  setup(
      name="signal_grad_cam",
-     version="0.0.2",
+     version="0.1.1",
      description="SignalGrad-CAM aims at generalising Grad-CAM to one-dimensional applications, while enhancing usability"
                  " and efficiency.",
      keywords="XAI, class activation maps, CNN, time series",
@@ -8,7 +8,7 @@ import matplotlib.colors as m_colors
  import re
  import torch
  import tensorflow as tf
- from typing import Callable, List, Tuple, Dict, Any
+ from typing import Callable, List, Tuple, Dict, Any, Optional
 
 
  # Class
@@ -21,7 +21,7 @@ class CamBuilder:
      "HiResCAM": "High-Resolution Class Activation Mapping"}
 
  def __init__(self, model: torch.nn.Module | tf.keras.Model | Any,
-              transform_fn: Callable[[np.ndarray], torch.Tensor | tf.Tensor] = None,
+              transform_fn: Callable[[np.ndarray, *tuple[Any, ...]], torch.Tensor | tf.Tensor] = None,
               class_names: List[str] = None, time_axs: int = 1, input_transposed: bool = False,
               ignore_channel_dim: bool = False, model_output_index: int = None, extend_search: bool = False,
               padding_dim: int = None, seed: int = 11):
@@ -34,7 +34,8 @@ class CamBuilder:
      layers among its attributes) representing a convolutional neural network model to be explained.
      Unconventional models should always be set to inference mode before being provided as inputs.
  :param transform_fn: (optional, default is None) A callable function to preprocess np.ndarray data before model
-     evaluation. This function is also expected to convert data into either PyTorch or TensorFlow tensors.
+     evaluation. This function is also expected to convert data into either PyTorch or TensorFlow tensors. The
+     function may optionally take as a second input a list of objects required by the preprocessing method.
  :param class_names: (optional, default is None) A list of strings where each string represents the name of an
      output class.
  :param time_axs: (optional, default is 1) An integer index indicating whether the input signal's time axis is
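The widened `transform_fn` annotation above (`Callable[[np.ndarray, *tuple[Any, ...]], ...]`) means the preprocessing callable may now accept extra positional objects after the array. A minimal sketch of such a callable, assuming a hypothetical per-item scale factor (the name `scale` is illustrative, not from the package):

```python
import numpy as np
import torch

def preprocess_fn(signal: np.ndarray, scale: float = 1.0) -> torch.Tensor:
    # The first argument is always the raw array; any extras are forwarded
    # by the builder from extra_preprocess_inputs_list (see the hunks below).
    return torch.from_numpy(signal).float() / scale
```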
@@ -102,7 +103,8 @@ class CamBuilder:
               explainer_types: str | List[str], target_layers: str | List[str], softmax_final: bool,
               data_names: List[str] = None, data_sampling_freq: float = None, dt: float = 10,
               channel_names: List[str | float] = None, results_dir_path: str = None, aspect_factor: float = 100,
-              data_shape_list: List[Tuple[int, int]] = None, time_names: List[str | float] = None,
+              data_shape_list: List[Tuple[int, int]] = None, extra_preprocess_inputs_list: List[List[Any]] = None,
+              extra_inputs_list: List[Any] = None, time_names: List[str | float] = None,
               axes_names: Tuple[str | None, str | None] | List[str | None] = None) \
          -> Tuple[Dict[str, List[np.ndarray]], Dict[str, np.ndarray], Dict[str, Tuple[np.ndarray, np.ndarray]]]:
      """
@@ -137,7 +139,11 @@ class CamBuilder:
      one-dimensional CAM.
  :param data_shape_list: (optional, default is None) A list of integer tuples storing the original input sizes,
      used to set the CAM shape after resizing during preprocessing. The expected format is number of rows x
-     number of columns.
+     number of columns.
+ :param extra_preprocess_inputs_list: (optional, default is None) A list of lists, where the i-th sub-list
+     represents the additional input objects required by the preprocessing method for the i-th input.
+ :param extra_inputs_list: (optional, default is None) A list of additional input objects required by the model's
+     forward method.
  :param time_names: (optional, default is None) A list of strings representing tick names for the time axis.
  :param axes_names: (optional, default is None) A tuple of strings representing names for X and Y axes,
      respectively.
@@ -173,7 +179,11 @@ class CamBuilder:
  for target_layer in target_layers:
      cam_list, output_probs, bar_ranges = self.__create_batched_cams(data_list, target_class,
                                                                      target_layer, explainer_type,
-                                                                     softmax_final, data_shape_list)
+                                                                     softmax_final,
+                                                                     data_shape_list=data_shape_list,
+                                                                     extra_preprocess_inputs_list=
+                                                                     extra_preprocess_inputs_list,
+                                                                     extra_inputs_list=extra_inputs_list)
      item_key = explainer_type + "_" + target_layer + "_class" + str(target_class)
      cams_dict.update({item_key: cam_list})
      predicted_probs_dict.update({item_key: output_probs})
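Taken together, the two new parameters split responsibilities: `extra_preprocess_inputs_list` supplies one sub-list of extras per input item at preprocessing time, while `extra_inputs_list` is forwarded to the model's forward call. A hedged usage sketch, continuing the README example above (the method is spelled `get_cams` as in the README prose, and `attention_mask` is an invented extra input, not a package object):

```python
import numpy as np

attention_mask = np.ones((2, 1, 100))  # invented batch-level extra forward input
cam_dict, predicted_probs_dict, score_ranges_dict = cam_builder.get_cams(
    data_list=data_list, data_labels=data_labels_list,
    explainer_types="Grad-CAM", target_layers="conv1d_layer_1",
    target_classes=target_classes, softmax_final=True,
    extra_preprocess_inputs_list=[[2.0], [0.5]],  # extras for item 0 and item 1
    extra_inputs_list=[attention_mask])           # unpacked into model(data_batch, ...)
```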
@@ -438,9 +448,9 @@ class CamBuilder:
      txt = " - " + addon + f"{name}:\t{type(layer).__name__}"
      print(txt)
 
- def _create_raw_batched_cams(self, data_list: List[np.ndarray], target_class: int, target_layer: str,
-                              explainer_type: str, softmax_final: bool) \
-         -> Tuple[List[np.ndarray], np.ndarray]:
+ def _create_raw_batched_cams(self, data_list: List[np.ndarray | torch.Tensor | tf.Tensor], target_class: int,
+                              target_layer: str, explainer_type: str, softmax_final: bool,
+                              extra_inputs_list: List[Any] = None) -> Tuple[List[np.ndarray], np.ndarray]:
      """
      Retrieves raw CAMs from an input data list based on the specified settings (defined by algorithm, target layer,
      and target class). Additionally, it returns the class probabilities predicted by the model.
@@ -454,6 +464,8 @@ class CamBuilder:
      should identify one of the CAM algorithms allowed, as listed by the class constructor.
  :param softmax_final: (mandatory) A boolean indicating whether the network terminates with a Sigmoid/Softmax
      activation function.
+ :param extra_inputs_list: (optional, default is None) A list of additional input objects required by the
+     model's forward method.
 
  :return:
      - cam_list: A list of np.ndarray containing CAMs for each item in the input data list, corresponding to the
@@ -498,7 +510,8 @@ class CamBuilder:
      "it.")
 
  def __create_batched_cams(self, data_list: List[np.ndarray], target_class: int, target_layer: str,
-                           explainer_type: str, softmax_final: bool, data_shape_list: List[Tuple[int, int]] = None) \
+                           explainer_type: str, softmax_final: bool, data_shape_list: List[Tuple[int, int]] = None,
+                           extra_preprocess_inputs_list: List[List[Any]] = None, extra_inputs_list: List[Any] = None) \
          -> Tuple[List[np.ndarray], np.ndarray, Tuple[np.ndarray, np.ndarray]]:
      """
      Prepares the input data list and retrieves CAMs based on the specified settings (defined by algorithm, target
@@ -517,7 +530,11 @@ class CamBuilder:
  :param data_shape_list: (optional, default is None) A list of integer tuples storing the original input sizes,
      used to set the CAM shape after resizing during preprocessing. The expected format is number of rows x
      number of columns.
-
+ :param extra_preprocess_inputs_list: (optional, default is None) A list of lists, where the i-th sub-list
+     represents the additional input objects required by the preprocessing method for the i-th input.
+ :param extra_inputs_list: (optional, default is None) A list of additional input objects required by the model's
+     forward method.
+
  :return:
      - cam_list: A list of np.ndarray containing CAMs for each item in the input data list, corresponding to the
        given setting (defined by algorithm, target layer, and target class).
@@ -535,19 +552,23 @@ class CamBuilder:
  if data_shape_list is None:
      data_shape_list = [data_element.shape for data_element in data_list]
  if self.transform_fn is not None:
-     data_list = [self.transform_fn(data_element) for data_element in data_list]
+     if extra_preprocess_inputs_list is not None:
+         data_list = [self.transform_fn(data_element, *extra_preprocess_inputs_list[i]) for i, data_element in
+                      enumerate(data_list)]
+     else:
+         data_list = [self.transform_fn(data_element) for data_element in data_list]
 
  # Ensure data have consistent size for batching
  if len(data_list) > 1 and self.padding_dim is None:
-     data_shape_list_processed = [data_element.shape for data_element in data_list]
-     if len(np.unique(np.array(data_shape_list_processed, dtype=object))) != 1:
+     data_shape_list_processed = [tuple(data_element.shape) for data_element in data_list]
+     if len(set(data_shape_list_processed)) != 1:
          data_list = [np.resize(x, data_shape_list_processed[0]) for x in data_list]
          self.__print_justify("Input data items have different shapes. Each item has been reshaped to match the "
                               "first item's dimensions for batching. To prevent this, provide one item at a "
                               "time.")
 
  cam_list, target_probs = self._create_raw_batched_cams(data_list, target_class, target_layer, explainer_type,
-                                                        softmax_final)
+                                                        softmax_final, extra_inputs_list=extra_inputs_list)
  self.activations = None
  self.gradients = None
  cams = np.stack(cam_list)
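The batching fix above sidesteps a subtle `np.unique` pitfall: applied to an object array of shape tuples, it counts distinct scalar entries rather than distinct shapes, whereas a `set` of plain tuples compares whole shapes. A self-contained sketch of the check as patched:

```python
import numpy as np

data_list = [np.zeros((3, 100)), np.zeros((3, 120))]
shapes = [tuple(x.shape) for x in data_list]
if len(set(shapes)) != 1:
    # Force every item to the first item's shape so the batch can be stacked.
    data_list = [np.resize(x, shapes[0]) for x in data_list]
```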
@@ -2,7 +2,7 @@
  import numpy as np
  import torch
  import torch.nn as nn
- from typing import Callable, List, Tuple, Dict, Any
+ from typing import Callable, List, Tuple, Dict, Any, Optional
 
  from signal_grad_cam import CamBuilder
 
@@ -13,10 +13,10 @@ class TorchCamBuilder(CamBuilder):
  Represents a PyTorch Class Activation Map (CAM) builder, supporting multiple methods such as Grad-CAM and HiResCAM.
  """
 
- def __init__(self, model: nn.Module | Any, transform_fn: Callable = None, class_names: List[str] = None,
-              time_axs: int = 1, input_transposed: bool = False, ignore_channel_dim: bool = False,
-              model_output_index: int = None, extend_search: bool = False, use_gpu: bool = False,
-              padding_dim: int = None, seed: int = 11):
+ def __init__(self, model: nn.Module | Any, transform_fn: Callable[[np.ndarray, *tuple[Any, ...]], torch.Tensor]
+              = None, class_names: List[str] = None, time_axs: int = 1, input_transposed: bool = False,
+              ignore_channel_dim: bool = False, model_output_index: int = None, extend_search: bool = False,
+              use_gpu: bool = False, padding_dim: int = None, seed: int = 11):
      """
      Initializes the TorchCamBuilder class. The constructor also displays, if present and retrievable, the 1D- and
      2D-convolutional layers in the network, as well as the final Sigmoid/Softmax activation. Additionally, the CAM
@@ -26,7 +26,8 @@ class TorchCamBuilder(CamBuilder):
      representing a convolutional neural network model to be explained. Unconventional models should always be
      set to inference mode before being provided as inputs.
  :param transform_fn: (optional, default is None) A callable function to preprocess np.ndarray data before model
-     evaluation. This function is also expected to convert data into either PyTorch or TensorFlow tensors.
+     evaluation. This function is also expected to convert data into PyTorch tensors. The function may optionally
+     take as a second input a list of objects required by the preprocessing method.
  :param class_names: (optional, default is None) A list of strings where each string represents the name of an
      output class.
  :param time_axs: (optional, default is 1) An integer index indicating whether the input signal's time axis is
@@ -67,7 +68,8 @@ class TorchCamBuilder(CamBuilder):
  else:
      print("Your PyTorch model has no 'eval' method. Please verify that the network has been set to "
            "evaluation mode before the TorchCamBuilder initialization.")
- self.use_gpu = use_gpu
+ self.use_gpu = use_gpu and torch.cuda.is_available()
+ self.device = "cuda" if self.use_gpu else "cpu"
 
  # Assign the default transform function
  if transform_fn is None:
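With the new guard, `use_gpu=True` now degrades gracefully on CPU-only machines instead of attempting CUDA calls, and the builder records the resolved device. A short usage sketch (model and preprocessing names are the README's placeholders):

```python
cam_builder = TorchCamBuilder(model=model, transform_fn=preprocess_fn,
                              class_names=class_labels, use_gpu=True)
print(cam_builder.device)  # "cuda" when a GPU is visible, otherwise "cpu"
```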
@@ -116,8 +118,9 @@ class TorchCamBuilder(CamBuilder):
          isinstance(layer, nn.Softmax) or isinstance(layer, nn.Sigmoid)):
      super()._show_layer(name, layer, potential=potential)
 
- def _create_raw_batched_cams(self, data_list: List[np.array], target_class: int, target_layer: nn.Module,
-                              explainer_type: str, softmax_final: bool) -> Tuple[List[np.ndarray], np.ndarray]:
+ def _create_raw_batched_cams(self, data_list: List[np.ndarray | torch.Tensor], target_class: int,
+                              target_layer: nn.Module, explainer_type: str, softmax_final: bool,
+                              extra_inputs_list: List[Any] = None) -> Tuple[List[np.ndarray], np.ndarray]:
      """
      Retrieves raw CAMs from an input data list based on the specified settings (defined by algorithm, target layer,
      and target class). Additionally, it returns the class probabilities predicted by the model.
@@ -131,6 +134,8 @@ class TorchCamBuilder(CamBuilder):
      should identify one of the CAM algorithms allowed, as listed by the class constructor.
  :param softmax_final: (mandatory) A boolean indicating whether the network terminates with a Sigmoid/Softmax
      activation function.
+ :param extra_inputs_list: (optional, default is None) A list of additional input objects required by the
+     model's forward method.
 
  :return:
      - cam_list: A list of np.ndarray containing CAMs for each item in the input data list, corresponding to the
@@ -164,7 +169,12 @@ class TorchCamBuilder(CamBuilder):
  data_list = [x.unsqueeze(0) for x in data_list]
  data_batch = torch.stack(data_list)
 
- outputs = self.model(data_batch)
+ # Set device
+ self.model = self.model.to(self.device)
+ data_batch = data_batch.to(self.device)
+
+ extra_inputs_list = extra_inputs_list or []
+ outputs = self.model(data_batch, *extra_inputs_list)
  if isinstance(outputs, tuple):
      outputs = outputs[self.model_output_index]
@@ -6,7 +6,7 @@ os.environ["TF_DETERMINISTIC_OPS"] = "1"
  import numpy as np
  import keras
  import tensorflow as tf
- from typing import Callable, List, Tuple, Dict, Any
+ from typing import Callable, List, Tuple, Dict, Any, Optional
 
  from signal_grad_cam import CamBuilder
 
@@ -18,9 +18,10 @@ class TfCamBuilder(CamBuilder):
  HiResCAM.
  """
 
- def __init__(self, model: tf.keras.Model | Any, transform_fn: Callable = None, class_names: List[str] = None,
-              time_axs: int = 1, input_transposed: bool = False, ignore_channel_dim: bool = False,
-              model_output_index: int = None, extend_search: bool = False, padding_dim: int = None, seed: int = 11):
+ def __init__(self, model: tf.keras.Model | Any, transform_fn: Callable[[np.ndarray, *tuple[Any, ...]], tf.Tensor]
+              = None, class_names: List[str] = None, time_axs: int = 1, input_transposed: bool = False,
+              ignore_channel_dim: bool = False, model_output_index: int = None, extend_search: bool = False,
+              padding_dim: int = None, seed: int = 11):
      """
      Initializes the TfCamBuilder class. The constructor also displays, if present and retrievable, the 1D- and
      2D-convolutional layers in the network, as well as the final Sigmoid/Softmax activation. Additionally, the CAM
@@ -30,7 +31,8 @@ class TfCamBuilder(CamBuilder):
      representing a convolutional neural network model to be explained. Unconventional models should always be
      set to inference mode before being provided as inputs.
  :param transform_fn: (optional, default is None) A callable function to preprocess np.ndarray data before model
-     evaluation. This function is also expected to convert data into either PyTorch or TensorFlow tensors.
+     evaluation. This function is also expected to convert data into TensorFlow tensors. The function may
+     optionally take as a second input a list of objects required by the preprocessing method.
  :param class_names: (optional, default is None) A list of strings where each string represents the name of an
      output class.
  :param time_axs: (optional, default is 1) An integer index indicating whether the input signal's time axis is
@@ -140,9 +142,9 @@ class TfCamBuilder(CamBuilder):
          isinstance(layer, keras.layers.Softmax) or isinstance(layer, keras.Sequential)):
      super()._show_layer(name, layer, potential=potential)
 
- def _create_raw_batched_cams(self, data_list: List[np.array], target_class: int,
-                              target_layer: tf.keras.layers.Layer, explainer_type: str, softmax_final: bool) \
-         -> Tuple[List[np.ndarray], np.ndarray]:
+ def _create_raw_batched_cams(self, data_list: List[np.ndarray | tf.Tensor], target_class: int,
+                              target_layer: tf.keras.layers.Layer, explainer_type: str, softmax_final: bool,
+                              extra_inputs_list: List[Any] = None) -> Tuple[List[np.ndarray], np.ndarray]:
      """
      Retrieves raw CAMs from an input data list based on the specified settings (defined by algorithm, target layer,
      and target class). Additionally, it returns the class probabilities predicted by the model.
@@ -156,6 +158,8 @@ class TfCamBuilder(CamBuilder):
      should identify one of the CAM algorithms allowed, as listed by the class constructor.
  :param softmax_final: (mandatory) A boolean indicating whether the network terminates with a Sigmoid/Softmax
      activation function.
+ :param extra_inputs_list: (optional, default is None) A list of additional input objects required by the
+     model's forward method.
 
  :return:
      - cam_list: A list of np.ndarray containing CAMs for each item in the input data list, corresponding to the
@@ -183,9 +187,10 @@ class TfCamBuilder(CamBuilder):
  data_list = [tf.expand_dims(x, axis=0) for x in data_list]
  data_batch = tf.stack(data_list, axis=0)
 
- grad_model = keras.models.Model(self.model.inputs[0], [target_layer.output, self.model.output])
+ grad_model = keras.models.Model(self.model.inputs, [target_layer.output, self.model.output])
+ extra_inputs_list = extra_inputs_list or []
  with tf.GradientTape() as tape:
-     self.activations, outputs = grad_model(data_batch)
+     self.activations, outputs = grad_model([data_batch] + extra_inputs_list)
 
  if softmax_final:
      # Approximate Softmax inversion formula logit = log(prob) + constant, as the constant is negligible
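On the TensorFlow side, building the gradient model from `self.model.inputs` (all inputs, not just `inputs[0]`) is what allows `[data_batch] + extra_inputs_list` to feed a multi-input functional model. A stripped-down sketch of the same pattern, with every layer name and tensor invented for illustration:

```python
import tensorflow as tf
import keras

# Hypothetical two-input functional model: a signal branch plus a static-feature branch.
sig_in = keras.Input(shape=(100, 3), name="signal")
aux_in = keras.Input(shape=(4,), name="aux")
conv = keras.layers.Conv1D(8, 5, padding="same", name="conv1d_layer_1")(sig_in)
pooled = keras.layers.GlobalAveragePooling1D()(conv)
merged = keras.layers.concatenate([pooled, aux_in])
out = keras.layers.Dense(2, activation="softmax")(merged)
model = keras.Model([sig_in, aux_in], out)

# Same construction as the patched builder: tap a target layer plus the model output.
target_layer = model.get_layer("conv1d_layer_1")
grad_model = keras.models.Model(model.inputs, [target_layer.output, model.output])

data_batch, aux_batch = tf.zeros((2, 100, 3)), tf.zeros((2, 4))
with tf.GradientTape() as tape:
    activations, outputs = grad_model([data_batch, aux_batch])
```

Since this head ends in a Softmax, the builder would then apply the `logit = log(prob) + constant` inversion noted in the hunk above.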
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: signal-grad-cam
- Version: 0.0.2
+ Version: 0.1.1
  Summary: SignalGrad-CAM aims at generalising Grad-CAM to one-dimensional applications, while enhancing usability and efficiency.
  Home-page: https://github.com/samuelepe11/signal_grad_cam
  Author: Samuele Pe
@@ -63,7 +63,7 @@ Requires-Dist: tensorflow
 
  <p align="justify">Deep learning models have demonstrated remarkable performance across various domains; however, their black-box nature hinders interpretability and trust. As a result, the demand for explanation algorithms has grown, driving advancements in the field of eXplainable AI (XAI). However, relatively few efforts have been dedicated to developing interpretability methods for signal-based models. We introduce SignalGrad-CAM (SGrad-CAM), a versatile and efficient interpretability tool that extends the principles of Grad-CAM to both 1D- and 2D-convolutional neural networks for signal processing. SGrad-CAM is designed to interpret models for either image or signal elaboration, supporting both PyTorch and TensorFlow/Keras frameworks, and provides diagnostic and visualization tools to enhance model transparency. The package is also designed for batch processing, ensuring efficiency even for large-scale applications, while maintaining a simple and user-friendly structure.</p>
 
- <p align="justify">**Keywords:** *eXplainable AI, explanations, local explanation, fidelity, interpretability, transparency, trustworthy AI, feature importance, saliency maps, CAM, Grad-CAM, black-box, deep learning, CNN, signals, time series*</p>
+ <p align="justify"><i><b>Keywords:</b> eXplainable AI, explanations, local explanation, fidelity, interpretability, transparency, trustworthy AI, feature importance, saliency maps, CAM, Grad-CAM, black-box, deep learning, CNN, signals, time series</i></p>
 
  <p align="right"><a href="#top">Back To Top</a></p>
 
@@ -114,13 +114,13 @@ class_labels = ["Class 1", "Class 2", "Class 3"]
  cam_builder = TorchCamBuilder(model=model, transform_fn=preprocess_fn, class_names=class_labels, time_axs=1)
  ```
 
- <p align="justify">Now, you can use the `cam_builder` object to generate class activation maps from a list of input data using the *`get_cams`* method. You can specify multiple algorithm names, target layers, or target classes as needed.
+ <p align="justify">Now, you can use the `cam_builder` object to generate class activation maps from a list of input data using the <i>`get_cams`</i> method. You can specify multiple algorithm names, target layers, or target classes as needed.
 
  The function's parameters allow users to customize the visualization (e.g., setting axis ticks or labels). If a result directory path is provided, the output is stored as a '.png' file; otherwise, it is displayed. In all cases, the function returns a dictionary containing the requested CAMs, along with the model's predictions and importance score ranges.
 
  Finally, several visualization tools are available to gain deeper insights into the model's behavior. The display can be customized by adjusting line width, point extension, aspect ratio, and more:
- * *`single_channel_output_display`* plots the selected channels using a color scheme that reflects the importance of each input feature.
- * *`overlapped_output_display`* superimposes CAMs onto the corresponding input in an image-like format, allowing users to capture the overall distribution of input importance.
+ * <i>`single_channel_output_display`</i> plots the selected channels using a color scheme that reflects the importance of each input feature.
+ * <i>`overlapped_output_display`</i> superimposes CAMs onto the corresponding input in an image-like format, allowing users to capture the overall distribution of input importance.
  </p>
 
  ```python
@@ -138,7 +138,7 @@ cam_dict, predicted_probs_dict, score_ranges_dict = cam_builder.get_cam(data_lis
 
  # Visualize single channel importance
  selected_channels_indices = [0, 2, 10]
- cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs=predicted_probs_dict,
+ cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
                                            cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
                                            target_layers="target_layer_name", desired_channels=selected_channels_indices,
                                            grid_instructions=(1, len(selected_channels_indices)), bar_ranges=score_ranges_dict,
@@ -146,7 +146,7 @@ cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_
                                            axes_names=("Time (s)", "Amplitude (mV)"))
 
  # Visualize overall importance
- cam_builder.overlapped_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs=predicted_probs_dict,
+ cam_builder.overlapped_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
                                        cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
                                        target_layers="target_layer_name", fig_size=(20 * len(data_list), 20),
                                        grid_instructions=(len(data_list), 1), bar_ranges=score_ranges_dict, data_names=item_names,
File without changes