signal-grad-cam 1.0.1__tar.gz → 2.0.1__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.


@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: signal_grad_cam
- Version: 1.0.1
+ Version: 2.0.1
  Summary: SignalGrad-CAM aims at generalising Grad-CAM to one-dimensional applications, while enhancing usability and efficiency.
  Home-page: https://github.com/samuelepe11/signal_grad_cam
  Author: Samuele Pe
@@ -19,6 +19,7 @@ Requires-Dist: opencv-python
  Requires-Dist: torch
  Requires-Dist: keras
  Requires-Dist: tensorflow
+ Requires-Dist: imageio
 
  <div id="top"></div>
 
@@ -31,7 +32,7 @@ Requires-Dist: tensorflow
  SignalGrad-CAM
  </h1>
 
- <h3 align="center">SignalGrad-CAM aims at generalising Grad-CAM to one-dimensional applications, while enhancing usability and efficiency.</h3>
+ <h3 align="center">SignalGrad-CAM aims at generalising Grad-CAM to time-based applications, while enhancing usability and efficiency.</h3>
 
  <p align="center">
  <a href="https://github.com/bmi-labmedinfo/signal_grad_cam"><strong>Explore the docs</strong></a>
@@ -61,9 +62,9 @@ Requires-Dist: tensorflow
  <!-- ABOUT THE PROJECT -->
  ## About The Project
 
- <p align="justify">Deep learning models have demonstrated remarkable performance across various domains; however, their black-box nature hinders interpretability and trust. As a result, the demand for explanation algorithms has grown, driving advancements in the field of eXplainable AI (XAI). However, relatively few efforts have been dedicated to developing interpretability methods for signal-based models. We introduce SignalGrad-CAM (SGrad-CAM), a versatile and efficient interpretability tool that extends the principles of Grad-CAM to both 1D- and 2D-convolutional neural networks for signal processing. SGrad-CAM is designed to interpret models for either image or signal elaboration, supporting both PyTorch and TensorFlow/Keras frameworks, and provides diagnostic and visualization tools to enhance model transparency. The package is also designed for batch processing, ensuring efficiency even for large-scale applications, while maintaining a simple and user-friendly structure.</p>
+ <p align="justify">Deep learning models have achieved remarkable performance across many domains, yet their black-box nature often limits interpretability and trust. This has fueled the development of explanation algorithms within the field of eXplainable AI (XAI). Despite this progress, relatively few methods target time-based convolutional neural networks (CNNs), such as 1D-CNNs for signals and 3D-CNNs for videos. We present SignalGrad-CAM (SGrad-CAM), a versatile and efficient interpretability tool that extends the principles of Grad-CAM to 1D, 2D, and 3D CNNs. SGrad-CAM supports model interpretation for signals, images, and video/volume data in both PyTorch and TensorFlow/Keras frameworks. It includes diagnostic and visualization tools to enhance transparency, and its batch-processing design ensures scalability for large datasets while maintaining a simple, user-friendly structure.</p>
 
- <p align="justify"><i><b>Keywords:</b> eXplainable AI, explanations, local explanation, fidelity, interpretability, transparency, trustworthy AI, feature importance, saliency maps, CAM, Grad-CAM, black-box, deep learning, CNN, signals, time series</i></p>
+ <p align="justify"><i><b>Keywords:</b> eXplainable AI, XAI, explanations, local explanation, contrastive explanations, cXAI, fidelity, interpretability, transparency, trustworthy AI, feature importance, saliency maps, CAM, Grad-CAM, HiResCAM, black-box, deep learning, CNN, 1D-CNN, 2D-CNN, 3D-CNN, signals, time series, images, videos, volumes.</i></p>
 
  <p align="right"><a href="#top">Back To Top</a></p>
 
@@ -83,15 +84,13 @@ Requires-Dist: tensorflow
 
  <!-- USAGE EXAMPLES -->
  ## Usage
- <p align="justify">
- Here's a basic example that illustrates SignalGrad-CAM common usage.
+ <p align="justify">Here's a basic example that illustrates common SignalGrad-CAM usage.</p>
 
- First, train a classifier on the data or select an already trained model, then instantiate `TorchCamBuilder` (if you are working with a PyTorch model) or `TfCamBuilder` (if the model is built in TensorFlow/Keras).
+ <p align="justify">First, train a CNN on the data or load a pre-trained model, then instantiate `TorchCamBuilder` (if you are working with a PyTorch model) or `TfCamBuilder` (if the model is built in TensorFlow/Keras).</p>
 
- Besides the model, `TorchCamBuilder` requires additional information to function effectively. For example, you may provide a list of class labels, a preprocessing function, or an index indicating which dimension corresponds to time. These attributes allow SignalGrad-CAM to be applied to a wide range of models.
+ <p align="justify">Besides the model, `TorchCamBuilder` requires additional information to function effectively. For example, you may provide a list of class labels, a pre-processing function, or an index indicating which dimension corresponds to time (for signal elaboration). These attributes allow SignalGrad-CAM to be applied to a wide range of models.</p>
 
- The constructor displays a list of available Grad-CAM algorithms for explanation, as well as a list of layers that can be used as target for the algorithm. It also identifies any Sigmoid/Softmax layer, since its presence or absence will slightly change the algorithm's workflow.
- </p>
+ <p align="justify">The constructor displays a list of available Grad-CAM algorithms for explanation (currently Grad-CAM and HiResCAM), as well as a list of layers that can be used as targets for the algorithm. It also identifies any Sigmoid/Softmax layer, since its presence or absence will slightly change the algorithm's workflow.</p>
 
  ```python
  import numpy as np
@@ -114,14 +113,14 @@ class_labels = ["Class 1", "Class 2", "Class 3"]
  cam_builder = TorchCamBuilder(model=model, transform_fn=preprocess_fn, class_names=class_labels, time_axs=1)
  ```
 
- <p align="justify">Now, you can use the `cam_builder` object to generate class activation maps from a list of input data using the <i>`get_cams`</i> method. You can specify multiple algorithm names, target layers, or target classes as needed.
+ <p align="justify">Now, you can use the `cam_builder` object to generate class activation maps from a list of input data using the <i>`get_cam`</i> method. You can specify multiple algorithm names, target layers, or target classes as needed. As described in each function's documentation, every input (such as data and labels) needs to be arranged into a list for versatility.</p>
 
- The function's attributes allow users to customize the visualization (e.g., setting axis ticks or labels). If a result directory path is provided, the output is stored as a '.png' file; otherwise, it is displayed. In all cases, the function returns a dictionary containing the requested CAMs, along with the model's predictions and importance score ranges.
+ <p align="justify">The function's attributes allow users to customize the visualization (e.g., setting axis ticks or labels). If a result directory path is provided, the output is stored as a '.png' file; otherwise, it is simply displayed. In all cases, the function returns a dictionary containing the requested CAMs, along with the model's predictions and importance score ranges.</p>
 
- Finally, several visualization tools are available to gain deeper insights into the model's behavior. The display can be customized by adjusting line width, point extension, aspect ratio, and more:
- * <i>`single_channel_output_display`</i> plots the selected channels using a color scheme that reflects the importance of each input feature.
- * <i>`overlapped_output_display`</i> superimposes CAMs onto the corresponding input in an image-like format, allowing users to capture the overall distribution of input importance.
- </p>
+ <p align="justify">Finally, several visualization tools are available to gain deeper insights into the model's behavior. Their display can be customized by adjusting settings such as line width and point extension (for drawing signals and their explanations), along with more general options (e.g., aspect ratio):</p>
+
+ * <p align="justify"><i>`single_channel_output_display`</i> plots the selected input channels using a color scheme that reflects the importance of each input feature.</p>
+ * <p align="justify"><i>`overlapped_output_display`</i> superimposes CAMs onto the corresponding input in an image-like format, allowing users to capture the overall distribution of input importance.</p>
 
  ```python
  # Prepare data
@@ -132,31 +131,38 @@ target_classes = [0, 1]
 
  # Create CAMs
  cam_dict, predicted_probs_dict, score_ranges_dict = cam_builder.get_cam(data_list=data_list, data_labels=data_labels_list,
- target_classes=target_classes, explainer_types="Grad-CAM",
- target_layers="conv1d_layer_1", softmax_final=True,
- data_sampling_freq=25, dt=1, axes_names=("Time (s)", "Channels"))
+                                                                         target_classes=target_classes, explainer_types="Grad-CAM",
+                                                                         target_layers="conv1d_layer_1", softmax_final=True,
+                                                                         data_sampling_freq=25, dt=1, axes_names=("Time (s)", "Channels"))
 
  # Visualize single channel importance
  selected_channels_indices = [0, 2, 10]
  cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
- cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
- target_layers="target_layer_name", desired_channels=selected_channels_indices,
- grid_instructions=(1, len(selected_channels_indices), bar_ranges_dict=score_ranges_dict,
- results_dir="path_to_your_result_directoory", data_sampling_freq=25, dt=1, line_width=0.5,
- axes_names=("Time (s)", "Amplitude (mV)"))
+                                           cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
+                                           target_layers="target_layer_name", desired_channels=selected_channels_indices,
+                                           grid_instructions=(1, len(selected_channels_indices)), bar_ranges_dict=score_ranges_dict,
+                                           results_dir="path_to_your_result_directory", data_sampling_freq=25, dt=1, line_width=0.5,
+                                           axes_names=("Time (s)", "Amplitude (mV)"))
 
  # Visualize overall importance
  cam_builder.overlapped_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
  cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
- target_layers="target_layer_name", fig_size=(20 * len(your_data_X), 20),
- grid_instructions=(len(your_data_X), 1), bar_ranges_dict=score_ranges_dict, data_names=item_names
- results_dir_path="path_to_your_result_directoory", data_sampling_freq=25, dt=1)
+                                       target_layers="target_layer_name", fig_size=(20 * len(data_list), 20),
+                                       grid_instructions=(len(data_list), 1), bar_ranges_dict=score_ranges_dict, data_names=item_names,
+                                       results_dir_path="path_to_your_result_directory", data_sampling_freq=25, dt=1)
  ```
 
- You can also explore the Python scripts available in the examples directory of the repository [here](https://github.com/bmi-labmedinfo/signal_grad_cam/examples), which provide complete, ready-to-run demonstrations for both PyTorch and TensorFlow/Keras models. These examples include open-source models for image and signal classification using 1D- and 2D-CNN architectures, and they illustrate how to apply the recently added feature for creating and displaying "contrastive explanations" in each scenario.
+ <p align="justify">You can also explore the Python scripts available in the examples directory of the repository [here](https://github.com/bmi-labmedinfo/signal_grad_cam/examples), which provide complete, ready-to-run demonstrations for both PyTorch and TensorFlow/Keras models. These examples include open-source models for signal, image, and video/volume classification using 1D, 2D, and 3D CNN architectures. Moreover, these tutorials illustrate how to use the recently added contrastive-explanations feature in each scenario.</p>
 
  See the [open issues](https://github.com/bmi-labmedinfo/signal_grad_cam/issues) for a full list of proposed features (and known issues).
 
+ ## <i>NEW!</i> Updates in SignalGrad-CAM
+ <p align="justify">Compared to previous versions, SignalGrad-CAM now offers the following enhancements (a short usage sketch follows the list):</p>
+
+ * <p align="justify"><i>Support for regression tasks:</i> SGrad-CAM can now handle regression-based models. Previously, substantial adjustments were required for these tasks (similar to those still needed for segmentation or generative models); now it is enough to set the <i>`is_regression_network`</i> parameter to True in the constructor.</p>
+ * <p align="justify"><i>Contrastive explanations:</i> Users can generate and visualize contrastive explanations by specifying one or more foil classes via the parameter <i>`contrastive_foil_classes`</i>.</p>
+ * <p align="justify"><i>3D-CNN support for videos and volumetric data:</i> After specifying the time axis in the constructor via the <i>`time_axs`</i> parameter, the same functions used for 1D and 2D data work seamlessly for 3D-CNNs. Outputs include GIF files for quick visualization of 3D activation maps. For a more detailed analysis, users can also request separate PNG images for each volume slice (across the indicated time axis) or video frame using the parameter <i>`show_single_video_frames`</i>.</p>
+
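+ <p align="justify">As an illustrative sketch only, the snippet below combines these new options, reusing the objects defined in the earlier examples. The layer name is a placeholder, and the exact functions that accept <i>`contrastive_foil_classes`</i> and <i>`show_single_video_frames`</i> are assumptions to be checked against each function's documentation:</p>
+
+ ```python
+ # For regression models, only the constructor flag changes (hypothetical sketch):
+ # cam_builder = TorchCamBuilder(model=model, transform_fn=preprocess_fn,
+ #                               is_regression_network=True)
+
+ # 3D-CNN classifier on videos: time_axs marks the temporal dimension
+ cam_builder = TorchCamBuilder(model=model, transform_fn=preprocess_fn,
+                               class_names=class_labels, time_axs=1)
+
+ # Contrastive explanation: "why class 1 rather than the foil class 0?"
+ cam_dict, predicted_probs_dict, score_ranges_dict = cam_builder.get_cam(
+     data_list=data_list, data_labels=data_labels_list, target_classes=[1],
+     contrastive_foil_classes=[0], explainer_types="Grad-CAM",
+     target_layers="conv3d_layer_1")
+
+ # 3D maps are saved as GIFs; per-frame PNGs can also be requested
+ cam_builder.overlapped_output_display(data_list=data_list, data_labels=data_labels_list,
+                                       predicted_probs_dict=predicted_probs_dict,
+                                       cams_dict=cam_dict, explainer_types="Grad-CAM",
+                                       target_classes=[1], target_layers="conv3d_layer_1",
+                                       show_single_video_frames=True)
+ ```
+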
  <p align="right"><a href="#top">Back To Top</a></p>
 
 
@@ -164,7 +170,7 @@ If you use the SignalGrad-CAM software for your projects, please cite it as:
 
  ```
  @inproceedings{pe_sgradcam_2025_paper,
- author = {Pe, Samuele and Buonocore, Tommaso Mario and Giovanna, Nicora and Enea, Parimbelli}},
+ author = {Pe, Samuele and Buonocore, Tommaso Mario and Nicora, Giovanna and Parimbelli, Enea},
  title = {SignalGrad-CAM: Beyond Image Explanation},
  booktitle = {Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025), Istanbul, Turkey, July 9-11, 2025},
  series = {CEUR Workshop Proceedings},
@@ -179,9 +185,9 @@ If you use the SignalGrad-CAM software for your projects, please cite it as:
  ```
  @software{pe_sgradcam_2025_repo,
  author = {Pe, Samuele},
- title = {{SignalGrad-CAM}},
+ title = {SignalGrad-CAM},
  url = {https://github.com/bmi-labmedinfo/signal_grad_cam},
- version = {1.0.0},
+ version = {1.0.1},
  year = {2025}
  }
  ```
@@ -0,0 +1,225 @@
+ <div id="top"></div>
+
+ [![Contributors][contributors-shield]][contributors-url] [![Forks][forks-shield]][forks-url] [![Stargazers][stars-shield]][stars-url] [![Issues][issues-shield]][issues-url] [![MIT License][license-shield]][license-url]
+
+
+ <br />
+ <div align="center">
+ <h1>
+ SignalGrad-CAM
+ </h1>
+
+ <h3 align="center">SignalGrad-CAM aims at generalising Grad-CAM to time-based applications, while enhancing usability and efficiency.</h3>
+
+ <p align="center">
+ <a href="https://github.com/bmi-labmedinfo/signal_grad_cam"><strong>Explore the docs</strong></a>
+ <br />
+ <br />
+ <a href="https://github.com/bmi-labmedinfo/signal_grad_cam/issues">Report Bug or Request Feature</a>
+ </p>
+ </div>
+
+
+
+ <!-- TABLE OF CONTENTS -->
+ <details>
+ <summary>Table of Contents</summary>
+ <ol>
+ <li><a href="#about-the-project">About The Project</a></li>
+ <li><a href="#installation">Installation</a></li>
+ <li><a href="#usage">Usage</a></li>
+ <li><a href="#publications">Publications</a></li>
+ <li><a href="#contacts-and-useful-links">Contacts And Useful Links</a></li>
+ <li><a href="#license">License</a></li>
+ </ol>
+ </details>
+
+
+
+ <!-- ABOUT THE PROJECT -->
+ ## About The Project
+
+ <p align="justify">Deep learning models have achieved remarkable performance across many domains, yet their black-box nature often limits interpretability and trust. This has fueled the development of explanation algorithms within the field of eXplainable AI (XAI). Despite this progress, relatively few methods target time-based convolutional neural networks (CNNs), such as 1D-CNNs for signals and 3D-CNNs for videos. We present SignalGrad-CAM (SGrad-CAM), a versatile and efficient interpretability tool that extends the principles of Grad-CAM to 1D, 2D, and 3D CNNs. SGrad-CAM supports model interpretation for signals, images, and video/volume data in both PyTorch and TensorFlow/Keras frameworks. It includes diagnostic and visualization tools to enhance transparency, and its batch-processing design ensures scalability for large datasets while maintaining a simple, user-friendly structure.</p>
+
+ <p align="justify"><i><b>Keywords:</b> eXplainable AI, XAI, explanations, local explanation, contrastive explanations, cXAI, fidelity, interpretability, transparency, trustworthy AI, feature importance, saliency maps, CAM, Grad-CAM, HiResCAM, black-box, deep learning, CNN, 1D-CNN, 2D-CNN, 3D-CNN, signals, time series, images, videos, volumes.</i></p>
+
+ <p align="right"><a href="#top">Back To Top</a></p>
+
+ <!-- INSTALLATION -->
+ ## Installation
+
+ 1. Make sure you have the latest version of pip installed
+ ```sh
+ pip install --upgrade pip
+ ```
+ 2. Install SignalGrad-CAM through pip
+ ```sh
+ pip install signal-grad-cam
+ ```
+
+ <p align="right"><a href="#top">Back To Top</a></p>
+
+ <!-- USAGE EXAMPLES -->
+ ## Usage
+ <p align="justify">Here's a basic example that illustrates common SignalGrad-CAM usage.</p>
+
+ <p align="justify">First, train a CNN on the data or load a pre-trained model, then instantiate `TorchCamBuilder` (if you are working with a PyTorch model) or `TfCamBuilder` (if the model is built in TensorFlow/Keras).</p>
+
+ <p align="justify">Besides the model, `TorchCamBuilder` requires additional information to function effectively. For example, you may provide a list of class labels, a pre-processing function, or an index indicating which dimension corresponds to time (for signal elaboration). These attributes allow SignalGrad-CAM to be applied to a wide range of models.</p>
+
+ <p align="justify">The constructor displays a list of available Grad-CAM algorithms for explanation (currently Grad-CAM and HiResCAM), as well as a list of layers that can be used as targets for the algorithm. It also identifies any Sigmoid/Softmax layer, since its presence or absence will slightly change the algorithm's workflow.</p>
+
+ ```python
+ import numpy as np
+ import torch
+ from signal_grad_cam import TorchCamBuilder
+
+ # Load model
+ model = YourTorchModelConstructor()
+ model.load_state_dict(torch.load("path_to_your_stored_model.pt"))
+ model.eval()
+
+ # Introduce useful information
+ def preprocess_fn(signal):
+     signal = torch.from_numpy(signal).float()
+     # Extra preprocessing: data resizing, reshaping, normalization...
+     return signal
+ class_labels = ["Class 1", "Class 2", "Class 3"]
+
+ # Define the CAM builder
+ cam_builder = TorchCamBuilder(model=model, transform_fn=preprocess_fn, class_names=class_labels, time_axs=1)
+ ```
+
+ <p align="justify">Now, you can use the `cam_builder` object to generate class activation maps from a list of input data using the <i>`get_cam`</i> method. You can specify multiple algorithm names, target layers, or target classes as needed. As described in each function's documentation, every input (such as data and labels) needs to be arranged into a list for versatility.</p>
+
+ <p align="justify">The function's attributes allow users to customize the visualization (e.g., setting axis ticks or labels). If a result directory path is provided, the output is stored as a '.png' file; otherwise, it is simply displayed. In all cases, the function returns a dictionary containing the requested CAMs, along with the model's predictions and importance score ranges.</p>
+
+ <p align="justify">Finally, several visualization tools are available to gain deeper insights into the model's behavior. Their display can be customized by adjusting settings such as line width and point extension (for drawing signals and their explanations), along with more general options (e.g., aspect ratio):</p>
+
+ * <p align="justify"><i>`single_channel_output_display`</i> plots the selected input channels using a color scheme that reflects the importance of each input feature.</p>
+ * <p align="justify"><i>`overlapped_output_display`</i> superimposes CAMs onto the corresponding input in an image-like format, allowing users to capture the overall distribution of input importance.</p>
+
+ ```python
+ # Prepare data
+ data_list = [x for x in your_numpy_data_x[:2]]
+ data_labels_list = [1, 0]
+ item_names = ["Item 1", "Item 2"]
+ target_classes = [0, 1]
+
+ # Create CAMs
+ cam_dict, predicted_probs_dict, score_ranges_dict = cam_builder.get_cam(data_list=data_list, data_labels=data_labels_list,
+                                                                         target_classes=target_classes, explainer_types="Grad-CAM",
+                                                                         target_layers="conv1d_layer_1", softmax_final=True,
+                                                                         data_sampling_freq=25, dt=1, axes_names=("Time (s)", "Channels"))
+
+ # Visualize single channel importance
+ selected_channels_indices = [0, 2, 10]
+ cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
+                                           cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
+                                           target_layers="target_layer_name", desired_channels=selected_channels_indices,
+                                           grid_instructions=(1, len(selected_channels_indices)), bar_ranges_dict=score_ranges_dict,
+                                           results_dir="path_to_your_result_directory", data_sampling_freq=25, dt=1, line_width=0.5,
+                                           axes_names=("Time (s)", "Amplitude (mV)"))
+
+ # Visualize overall importance
+ cam_builder.overlapped_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
+                                       cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
+                                       target_layers="target_layer_name", fig_size=(20 * len(data_list), 20),
+                                       grid_instructions=(len(data_list), 1), bar_ranges_dict=score_ranges_dict, data_names=item_names,
+                                       results_dir_path="path_to_your_result_directory", data_sampling_freq=25, dt=1)
+ ```
+
+ <p align="justify">You can also explore the Python scripts available in the examples directory of the repository [here](https://github.com/bmi-labmedinfo/signal_grad_cam/examples), which provide complete, ready-to-run demonstrations for both PyTorch and TensorFlow/Keras models. These examples include open-source models for signal, image, and video/volume classification using 1D, 2D, and 3D CNN architectures. Moreover, these tutorials illustrate how to use the recently added contrastive-explanations feature in each scenario.</p>
+
+ See the [open issues](https://github.com/bmi-labmedinfo/signal_grad_cam/issues) for a full list of proposed features (and known issues).
+
+ ## <i>NEW!</i> Updates in SignalGrad-CAM
+ <p align="justify">Compared to previous versions, SignalGrad-CAM now offers the following enhancements (a short usage sketch follows the list):</p>
+
+ * <p align="justify"><i>Support for regression tasks:</i> SGrad-CAM can now handle regression-based models. Previously, substantial adjustments were required for these tasks (similar to those still needed for segmentation or generative models); now it is enough to set the <i>`is_regression_network`</i> parameter to True in the constructor.</p>
+ * <p align="justify"><i>Contrastive explanations:</i> Users can generate and visualize contrastive explanations by specifying one or more foil classes via the parameter <i>`contrastive_foil_classes`</i>.</p>
+ * <p align="justify"><i>3D-CNN support for videos and volumetric data:</i> After specifying the time axis in the constructor via the <i>`time_axs`</i> parameter, the same functions used for 1D and 2D data work seamlessly for 3D-CNNs. Outputs include GIF files for quick visualization of 3D activation maps. For a more detailed analysis, users can also request separate PNG images for each volume slice (across the indicated time axis) or video frame using the parameter <i>`show_single_video_frames`</i>.</p>
+
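+ <p align="justify">As an illustrative sketch only, the snippet below combines these new options, reusing the objects defined in the earlier examples. The layer name is a placeholder, and the exact functions that accept <i>`contrastive_foil_classes`</i> and <i>`show_single_video_frames`</i> are assumptions to be checked against each function's documentation:</p>
+
+ ```python
+ # For regression models, only the constructor flag changes (hypothetical sketch):
+ # cam_builder = TorchCamBuilder(model=model, transform_fn=preprocess_fn,
+ #                               is_regression_network=True)
+
+ # 3D-CNN classifier on videos: time_axs marks the temporal dimension
+ cam_builder = TorchCamBuilder(model=model, transform_fn=preprocess_fn,
+                               class_names=class_labels, time_axs=1)
+
+ # Contrastive explanation: "why class 1 rather than the foil class 0?"
+ cam_dict, predicted_probs_dict, score_ranges_dict = cam_builder.get_cam(
+     data_list=data_list, data_labels=data_labels_list, target_classes=[1],
+     contrastive_foil_classes=[0], explainer_types="Grad-CAM",
+     target_layers="conv3d_layer_1")
+
+ # 3D maps are saved as GIFs; per-frame PNGs can also be requested
+ cam_builder.overlapped_output_display(data_list=data_list, data_labels=data_labels_list,
+                                       predicted_probs_dict=predicted_probs_dict,
+                                       cams_dict=cam_dict, explainer_types="Grad-CAM",
+                                       target_classes=[1], target_layers="conv3d_layer_1",
+                                       show_single_video_frames=True)
+ ```
+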
+ <p align="right"><a href="#top">Back To Top</a></p>
+
+ <!-- PUBLICATIONS -->
+ ## Publications
+
+ If you use the SignalGrad-CAM software for your projects, please cite it as:
+
+ ```
+ @inproceedings{pe_sgradcam_2025_paper,
+ author = {Pe, Samuele and Buonocore, Tommaso Mario and Nicora, Giovanna and Parimbelli, Enea},
+ title = {SignalGrad-CAM: Beyond Image Explanation},
+ booktitle = {Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025), Istanbul, Turkey, July 9-11, 2025},
+ series = {CEUR Workshop Proceedings},
+ volume = {4017},
+ pages = {209--216},
+ url = {https://ceur-ws.org/Vol-4017/paper_27.pdf},
+ publisher = {CEUR-WS.org},
+ year = {2025}
+ }
+ ```
+
+ ```
+ @software{pe_sgradcam_2025_repo,
+ author = {Pe, Samuele},
+ title = {SignalGrad-CAM},
+ url = {https://github.com/bmi-labmedinfo/signal_grad_cam},
+ version = {1.0.1},
+ year = {2025}
+ }
+ ```
+
+ <p align="right"><a href="#top">Back To Top</a></p>
+
+ <!-- CONTACTS AND USEFUL LINKS -->
+ ## Contacts and Useful Links
+
+ * **Repository maintainer**: Samuele Pe [![Gmail][gmail-shield]][gmail-url] [![LinkedIn][linkedin-shield]][linkedin-url]
+
+ * **Project Link**: [https://github.com/bmi-labmedinfo/signal_grad_cam](https://github.com/bmi-labmedinfo/signal_grad_cam)
+
+ * **Package Link**: [https://pypi.org/project/signal-grad-cam/](https://pypi.org/project/signal-grad-cam/)
+
+ <p align="right"><a href="#top">Back To Top</a></p>
+
+ <!-- LICENSE -->
+ ## License
+
+ Distributed under the MIT License. See `LICENSE` for more information.
+
+
+ <p align="right"><a href="#top">Back To Top</a></p>
+
+ <!-- MARKDOWN LINKS -->
+
+ [contributors-shield]: https://img.shields.io/github/contributors/bmi-labmedinfo/signal_grad_cam.svg?style=for-the-badge
+ [contributors-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/graphs/contributors
+ [status-shield]: https://img.shields.io/badge/Status-pre--release-blue
+ [status-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/releases
+ [forks-shield]: https://img.shields.io/github/forks/bmi-labmedinfo/signal_grad_cam.svg?style=for-the-badge
+ [forks-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/network/members
+ [stars-shield]: https://img.shields.io/github/stars/bmi-labmedinfo/signal_grad_cam.svg?style=for-the-badge
+ [stars-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/stargazers
+ [issues-shield]: https://img.shields.io/github/issues/bmi-labmedinfo/signal_grad_cam.svg?style=for-the-badge
+ [issues-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/issues
+ [license-shield]: https://img.shields.io/github/license/bmi-labmedinfo/signal_grad_cam.svg?style=for-the-badge
+ [license-url]: https://github.com/bmi-labmedinfo/signal_grad_cam/LICENSE
+ [linkedin-shield]: https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white
+ [linkedin-url]: https://linkedin.com/in/samuele-pe-818bbb307
+ [gmail-shield]: https://img.shields.io/badge/Email-D14836?style=for-the-badge&logo=gmail&logoColor=white
+ [gmail-url]: mailto:samuele.pe01@universitadipavia.it
@@ -5,7 +5,7 @@ with open("README.md", "r") as f:
 
  setup(
      name="signal_grad_cam",
-     version="1.0.1",
+     version="2.0.1",
      description="SignalGrad-CAM aims at generalising Grad-CAM to one-dimensional applications, while enhancing usability"
                  " and efficiency.",
      keywords="XAI, class activation maps, CNN, time series",
@@ -28,7 +28,8 @@ setup(
          "opencv-python",
          "torch",
          "keras",
-         "tensorflow"
+         "tensorflow",
+         "imageio"
      ],
      include_package_data=True,
      zip_safe=False