signal-grad-cam 0.1.7__tar.gz → 1.0.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of signal-grad-cam might be problematic (see the registry page for details).

@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: signal_grad_cam
-Version: 0.1.7
+Version: 1.0.0
 Summary: SignalGrad-CAM aims at generalising Grad-CAM to one-dimensional applications, while enhancing usability and efficiency.
 Home-page: https://github.com/samuelepe11/signal_grad_cam
 Author: Samuele Pe
@@ -111,7 +111,7 @@ def preprocess_fn(signal):
 class_labels = ["Class 1", "Class 2", "Class 3"]

 # Define the CAM builder
-cam_builder = TorchCamBuilder(model=model, transform_fc=preprocess_fc, class_names=class_labels, time_axs=1)
+cam_builder = TorchCamBuilder(model=model, transform_fn=preprocess_fn, class_names=class_labels, time_axs=1)
 ```

 <p align="justify">Now, you can use the `cam_builder` object to generate class activation maps from a list of input data using the <i>`get_cams`</i> method. You can specify multiple algorithm names, target layers, or target classes as needed.
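The hunk above corrects the keyword names in the packaged README's constructor example (`transform_fc`/`preprocess_fc` become `transform_fn`/`preprocess_fn`). For context, a minimal stand-alone sketch of the corrected call is given below; the toy model, the preprocessing body, and the `from signal_grad_cam import TorchCamBuilder` import path are illustrative assumptions, while the keyword names come from the hunk.

```python
# Sketch only: the toy 1D-CNN and preprocess_fn are placeholders, not package code.
import torch
import torch.nn as nn
from signal_grad_cam import TorchCamBuilder  # assumed import path

model = nn.Sequential(nn.Conv1d(12, 8, kernel_size=3), nn.ReLU(),
                      nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 3))

def preprocess_fn(signal):
    # Placeholder preprocessing: cast the raw signal to a float32 tensor
    return torch.as_tensor(signal, dtype=torch.float32)

class_labels = ["Class 1", "Class 2", "Class 3"]
cam_builder = TorchCamBuilder(model=model, transform_fn=preprocess_fn,
                              class_names=class_labels, time_axs=1)
```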
@@ -133,7 +133,7 @@ target_classes = [0, 1]
 # Create CAMs
 cam_dict, predicted_probs_dict, score_ranges_dict = cam_builder.get_cam(data_list=data_list, data_labels=data_labels_list,
 target_classes=target_classes, explainer_types="Grad-CAM",
-target_layer="conv1d_layer_1", softmax_final=True,
+target_layers="conv1d_layer_1", softmax_final=True,
 data_sampling_freq=25, dt=1, axes_names=("Time (s)", "Channels"))

 # Visualize single channel importance
@@ -141,7 +141,7 @@ selected_channels_indices = [0, 2, 10]
 cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
 cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
 target_layers="target_layer_name", desired_channels=selected_channels_indices,
-grid_instructions=(1, len(selected_channels_indices), bar_ranges=score_ranges_dict,
+grid_instructions=(1, len(selected_channels_indices), bar_ranges_dict=score_ranges_dict,
 results_dir="path_to_your_result_directoory", data_sampling_freq=25, dt=1, line_width=0.5,
 axes_names=("Time (s)", "Amplitude (mV)"))

@@ -149,11 +149,11 @@ cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_
 cam_builder.overlapped_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
 cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
 target_layers="target_layer_name", fig_size=(20 * len(your_data_X), 20),
-grid_instructions=(len(your_data_X), 1), bar_ranges=score_ranges_dict, data_names=item_names
-results_dir="path_to_your_result_directoory", data_sampling_freq=25, dt=1)
+grid_instructions=(len(your_data_X), 1), bar_ranges_dict=score_ranges_dict, data_names=item_names
+results_dir_path="path_to_your_result_directoory", data_sampling_freq=25, dt=1)
 ```

-You can also check the python scripts [here](https://github.com/bmi-labmedinfo/signal_grad_cam/examples).
+You can also explore the Python scripts available in the examples directory of the repository [here](https://github.com/bmi-labmedinfo/signal_grad_cam/examples), which provide complete, ready-to-run demonstrations for both PyTorch and TensorFlow/Keras models. These examples include open-source models for image and signal classification using 1D- and 2D-CNN architectures, and they illustrate how to apply the recently added feature for creating and displaying "contrastive explanations" in each scenario.

 See the [open issues](https://github.com/bmi-labmedinfo/signal_grad_cam/issues) for a full list of proposed features (and known issues).

@@ -163,11 +163,25 @@ See the [open issues](https://github.com/bmi-labmedinfo/signal_grad_cam/issues)
 If you use the SignalGrad-CAM software for your projects, please cite it as:

 ```
-@software{Pe_SignalGrad_CAM_2025,
-author = {Pe, Samuele and Buonocore, Tommaso Mario and Giovanna, Nicora and Enea, Parimbelli},
+@inproceedings{pe_sgradcam_2025_paper,
+author = {Pe, Samuele and Buonocore, Tommaso Mario and Giovanna, Nicora and Enea, Parimbelli}},
+title = {SignalGrad-CAM: Beyond Image Explanation},
+booktitle = {Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025), Istanbul, Turkey, July 9-11, 2025},
+series = {CEUR Workshop Proceedings},
+volume = {4017},
+pages = {209--216},
+url = {https://ceur-ws.org/Vol-4017/paper_27.pdf},
+publisher = {CEUR-WS.org},
+year = {2025}
+}
+```
+
+```
+@software{pe_sgradcam_2025_repo,
+author = {Pe, Samuele},
 title = {{SignalGrad-CAM}},
 url = {https://github.com/bmi-labmedinfo/signal_grad_cam},
-version = {0.0.1},
+version = {1.0.0},
 year = {2025}
 }
 ```
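The display hunks earlier in this file's diff rename several keywords in the README examples (`target_layers`, `bar_ranges_dict`, `results_dir_path`), but the snippets are truncated mid-call in the diff. Below is a hedged end-to-end sketch of how the corrected calls fit together, reusing the placeholder `cam_builder` from the previous sketch; the data, labels, layer name, and output directory are assumptions, and the keyword names follow the method signatures shown later in this diff (note that the README text still passes `results_dir` to `single_channel_output_display`, while the signature in `cam_builder.py` uses `results_dir_path`).

```python
# Sketch under assumptions: data, labels and "conv1d_layer_1" are placeholders;
# keyword names follow the get_cam / *_output_display signatures in this diff.
import numpy as np

data_list = [np.random.randn(12, 250) for _ in range(4)]   # 4 items, 12 channels, 250 samples
data_labels_list = [0, 1, 0, 1]
target_classes = [0, 1]

cam_dict, predicted_probs_dict, score_ranges_dict = cam_builder.get_cam(
    data_list=data_list, data_labels=data_labels_list, target_classes=target_classes,
    explainer_types="Grad-CAM", target_layers="conv1d_layer_1", softmax_final=True,
    data_sampling_freq=25, dt=1, axes_names=("Time (s)", "Channels"))

cam_builder.single_channel_output_display(
    data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
    cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
    target_layers="conv1d_layer_1", desired_channels=[0, 2, 10],
    grid_instructions=(1, 3), bar_ranges_dict=score_ranges_dict,
    results_dir_path="results", data_sampling_freq=25, dt=1, line_width=0.5,
    axes_names=("Time (s)", "Amplitude (mV)"))

cam_builder.overlapped_output_display(
    data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
    cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
    target_layers="conv1d_layer_1", fig_size=(20 * len(data_list), 20),
    grid_instructions=(len(data_list), 1), bar_ranges_dict=score_ranges_dict,
    results_dir_path="results", data_sampling_freq=25, dt=1)
```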
@@ -89,7 +89,7 @@ def preprocess_fn(signal):
 class_labels = ["Class 1", "Class 2", "Class 3"]

 # Define the CAM builder
-cam_builder = TorchCamBuilder(model=model, transform_fc=preprocess_fc, class_names=class_labels, time_axs=1)
+cam_builder = TorchCamBuilder(model=model, transform_fn=preprocess_fn, class_names=class_labels, time_axs=1)
 ```

 <p align="justify">Now, you can use the `cam_builder` object to generate class activation maps from a list of input data using the <i>`get_cams`</i> method. You can specify multiple algorithm names, target layers, or target classes as needed.
@@ -111,7 +111,7 @@ target_classes = [0, 1]
 # Create CAMs
 cam_dict, predicted_probs_dict, score_ranges_dict = cam_builder.get_cam(data_list=data_list, data_labels=data_labels_list,
 target_classes=target_classes, explainer_types="Grad-CAM",
-target_layer="conv1d_layer_1", softmax_final=True,
+target_layers="conv1d_layer_1", softmax_final=True,
 data_sampling_freq=25, dt=1, axes_names=("Time (s)", "Channels"))

 # Visualize single channel importance
@@ -119,7 +119,7 @@ selected_channels_indices = [0, 2, 10]
 cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
 cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
 target_layers="target_layer_name", desired_channels=selected_channels_indices,
-grid_instructions=(1, len(selected_channels_indices), bar_ranges=score_ranges_dict,
+grid_instructions=(1, len(selected_channels_indices), bar_ranges_dict=score_ranges_dict,
 results_dir="path_to_your_result_directoory", data_sampling_freq=25, dt=1, line_width=0.5,
 axes_names=("Time (s)", "Amplitude (mV)"))

@@ -127,11 +127,11 @@ cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_
 cam_builder.overlapped_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
 cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
 target_layers="target_layer_name", fig_size=(20 * len(your_data_X), 20),
-grid_instructions=(len(your_data_X), 1), bar_ranges=score_ranges_dict, data_names=item_names
-results_dir="path_to_your_result_directoory", data_sampling_freq=25, dt=1)
+grid_instructions=(len(your_data_X), 1), bar_ranges_dict=score_ranges_dict, data_names=item_names
+results_dir_path="path_to_your_result_directoory", data_sampling_freq=25, dt=1)
 ```

-You can also check the python scripts [here](https://github.com/bmi-labmedinfo/signal_grad_cam/examples).
+You can also explore the Python scripts available in the examples directory of the repository [here](https://github.com/bmi-labmedinfo/signal_grad_cam/examples), which provide complete, ready-to-run demonstrations for both PyTorch and TensorFlow/Keras models. These examples include open-source models for image and signal classification using 1D- and 2D-CNN architectures, and they illustrate how to apply the recently added feature for creating and displaying "contrastive explanations" in each scenario.

 See the [open issues](https://github.com/bmi-labmedinfo/signal_grad_cam/issues) for a full list of proposed features (and known issues).

@@ -141,11 +141,25 @@ See the [open issues](https://github.com/bmi-labmedinfo/signal_grad_cam/issues)
 If you use the SignalGrad-CAM software for your projects, please cite it as:

 ```
-@software{Pe_SignalGrad_CAM_2025,
-author = {Pe, Samuele and Buonocore, Tommaso Mario and Giovanna, Nicora and Enea, Parimbelli},
+@inproceedings{pe_sgradcam_2025_paper,
+author = {Pe, Samuele and Buonocore, Tommaso Mario and Giovanna, Nicora and Enea, Parimbelli}},
+title = {SignalGrad-CAM: Beyond Image Explanation},
+booktitle = {Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025), Istanbul, Turkey, July 9-11, 2025},
+series = {CEUR Workshop Proceedings},
+volume = {4017},
+pages = {209--216},
+url = {https://ceur-ws.org/Vol-4017/paper_27.pdf},
+publisher = {CEUR-WS.org},
+year = {2025}
+}
+```
+
+```
+@software{pe_sgradcam_2025_repo,
+author = {Pe, Samuele},
 title = {{SignalGrad-CAM}},
 url = {https://github.com/bmi-labmedinfo/signal_grad_cam},
-version = {0.0.1},
+version = {1.0.0},
 year = {2025}
 }
 ```
@@ -5,7 +5,7 @@ with open("README.md", "r") as f:

 setup(
 name="signal_grad_cam",
-version="0.1.7",
+version="1.0.0",
 description="SignalGrad-CAM aims at generalising Grad-CAM to one-dimensional applications, while enhancing usability"
 " and efficiency.",
 keywords="XAI, class activation maps, CNN, time series",
@@ -111,10 +111,11 @@ class CamBuilder:

 def get_cam(self, data_list: List[np.ndarray], data_labels: List[int], target_classes: int | List[int],
 explainer_types: str | List[str], target_layers: str | List[str], softmax_final: bool,
-data_names: List[str] = None, data_sampling_freq: float = None, dt: float = 10,
-channel_names: List[str | float] = None, results_dir_path: str = None, aspect_factor: float = 100,
-data_shape_list: List[Tuple[int, int]] = None, extra_preprocess_inputs_list: List[List[Any]] = None,
-extra_inputs_list: List[Any] = None, time_names: List[str | float] = None,
+data_names: List[str] = None, contrastive_foil_classes: int | List[int] = None,
+data_sampling_freq: float = None, dt: float = 10, channel_names: List[str | float] = None,
+results_dir_path: str = None, aspect_factor: float = 100, data_shape_list: List[Tuple[int, int]] = None,
+extra_preprocess_inputs_list: List[List[Any]] = None, extra_inputs_list: List[Any] = None,
+time_names: List[str | float] = None,
 axes_names: Tuple[str | None, str | None] | List[str | None] = None, eps: float = 1e-6) \
 -> Tuple[Dict[str, List[np.ndarray]], Dict[str, np.ndarray], Dict[str, Tuple[np.ndarray, np.ndarray]]]:
 """
@@ -137,6 +138,9 @@ class CamBuilder:
 activation function.
 :param data_names: (optional, default is None) A list of strings where each string represents the name of an
 input item.
+:param contrastive_foil_classes: (optional, default is None) An integer or list of integers representing the
+comparative classes (foils) for the explanation in the context of Contrastive Explanations. If None, the
+explanation would follow the classical paradigm.
 :param data_sampling_freq: (optional, default is None) A numerical value representing the sampling frequency of
 signal inputs in samples per second.
 :param dt: (optional, default is 10) A numerical value representing the granularity of the time axis in seconds
@@ -176,8 +180,8 @@ class CamBuilder:
 data_names = ["item" + str(i) for i in range(len(data_list))]

 # Check input types
-target_classes, explainer_types, target_layers = self.__check_input_types(target_classes, explainer_types,
-target_layers)
+target_classes, explainer_types, target_layers, contrastive_foil_classes = self.__check_input_types(
+target_classes, explainer_types, target_layers, contrastive_foil_classes)
 for explainer_type in explainer_types:
 if explainer_type not in self.explainer_types:
 raise ValueError("'explainer_types' should be an explainer identifier or a list of explainer "
@@ -190,21 +194,28 @@ class CamBuilder:
 for explainer_type in explainer_types:
 for target_class in target_classes:
 for target_layer in target_layers:
-cam_list, output_probs, bar_ranges = self.__create_batched_cams(data_list, target_class,
-target_layer, explainer_type,
-softmax_final,
-data_shape_list=data_shape_list,
-extra_preprocess_inputs_list=
-extra_preprocess_inputs_list,
-extra_inputs_list=extra_inputs_list,
-eps=eps)
-item_key = explainer_type + "_" + target_layer + "_class" + str(target_class)
-cams_dict.update({item_key: cam_list})
-predicted_probs_dict.update({item_key: output_probs})
-bar_ranges_dict.update({item_key: bar_ranges})
-self.__display_output(data_labels, target_class, explainer_type, target_layer, cam_list, output_probs,
-results_dir_path, data_names, data_sampling_freq, dt, aspect_factor,
-bar_ranges, channel_names, time_names=time_names, axes_names=axes_names)
+for contrastive_foil_class in contrastive_foil_classes:
+cam_list, output_probs, bar_ranges = self.__create_batched_cams(data_list, target_class,
+target_layer, explainer_type,
+softmax_final,
+data_shape_list=data_shape_list,
+extra_preprocess_inputs_list=
+extra_preprocess_inputs_list,
+extra_inputs_list=
+extra_inputs_list,
+contrastive_foil_class=
+contrastive_foil_class,
+eps=eps)
+item_key = explainer_type + "_" + target_layer + "_class" + str(target_class)
+if contrastive_foil_class is not None:
+item_key += "_foil" + str(contrastive_foil_class)
+cams_dict.update({item_key: cam_list})
+predicted_probs_dict.update({item_key: output_probs})
+bar_ranges_dict.update({item_key: bar_ranges})
+self.__display_output(data_labels, target_class, explainer_type, target_layer, cam_list, output_probs,
+results_dir_path, data_names, data_sampling_freq, dt, aspect_factor,
+bar_ranges, channel_names, time_names=time_names, axes_names=axes_names,
+contrastive_foil_class=contrastive_foil_class)

 return cams_dict, predicted_probs_dict, bar_ranges_dict

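The hunks above add `contrastive_foil_classes` to `CamBuilder.get_cam` and, when a foil is given, append a `_foil<k>` suffix to the dictionary keys built in `item_key`. A hedged usage sketch, reusing the placeholder builder and data from the earlier sketches (the key format is taken from the `item_key` lines above):

```python
# Sketch only: "why class 0 rather than class 1?" for the placeholder builder/data above.
contrastive_cams, contrastive_probs, contrastive_ranges = cam_builder.get_cam(
    data_list=data_list, data_labels=data_labels_list,
    target_classes=0, contrastive_foil_classes=1,
    explainer_types="Grad-CAM", target_layers="conv1d_layer_1", softmax_final=True,
    data_sampling_freq=25, dt=1)

print(list(contrastive_cams.keys()))
# expected key format per item_key above: ['Grad-CAM_conv1d_layer_1_class0_foil1']
```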
@@ -213,7 +224,8 @@ class CamBuilder:
 predicted_probs_dict: Dict[str, np.ndarray], cams_dict: Dict[str, List[np.ndarray]],
 explainer_types: str | List[str], target_classes: int | List[int],
 target_layers: str | List[str], target_item_ids: List[int] = None,
-data_names: List[str] = None, grid_instructions: Tuple[int, int] = None,
+data_names: List[str] = None, contrastive_foil_classes: int | List[int] = None,
+grid_instructions: Tuple[int, int] = None,
 bar_ranges_dict: Dict[str, Tuple[np.ndarray, np.ndarray]] = None,
 results_dir_path: str = None, data_sampling_freq: float = None, dt: float = 10,
 channel_names: List[str | float] = None, time_names: List[str | float] = None,
@@ -242,11 +254,14 @@ class CamBuilder:
 among the items in the input data list.
 :param data_names: (optional, default is None) A list of strings where each string represents the name of an
 input item.
+:param contrastive_foil_classes: (optional, default is None) An integer or list of integers representing the
+comparative classes (foils) for the explanation in the context of Contrastive Explanations. If None, the
+explanation would follow the classical paradigm.
 :param grid_instructions: (optional, default is None) A tuple of integers defining the desired tabular layout
 for figure subplots. The expected format is number of columns (width) x number of rows (height).
 :param bar_ranges_dict: A dictionary storing a tuple of np.ndarrays. Each tuple contains two np.ndarrays
-corresponding to the minimum and maximum importance scores per CAM for each item in the input data list,
-based on a given setting (defined by algorithm, target layer, and target class).
+corresponding to the minimum and maximum importance scores per CAM for each item in the input data list,
+based on a given setting (defined by algorithm, target layer, and target class).
 :param results_dir_path: (optional, default is None) A string representing the relative path to the directory
 for storing results. If None, the output will be displayed in a figure.
 :param data_sampling_freq: (optional, default is None) A numerical value representing the sampling frequency of
@@ -263,8 +278,8 @@ class CamBuilder:
 """

 # Check input types
-target_classes, explainer_types, target_layers = self.__check_input_types(target_classes, explainer_types,
-target_layers)
+target_classes, explainer_types, target_layers, contrastive_foil_classes = self.__check_input_types(
+target_classes, explainer_types, target_layers, contrastive_foil_classes)
 if target_item_ids is None:
 target_item_ids = list(range(len(data_list)))

@@ -279,36 +294,40 @@ class CamBuilder:
 for explainer_type in explainer_types:
 for target_layer in target_layers:
 for target_class in target_classes:
-plt.figure(figsize=fig_size)
-for i in range(n_items):
-cam, item, batch_idx, item_key = self.__get_data_for_plots(data_list, i, target_item_ids,
-cams_dict, explainer_type,
-target_layer, target_class)
-
-plt.subplot(w, h, i + 1)
-plt.imshow(item)
-aspect = "auto" if cam.shape[0] / cam.shape[1] < 0.1 else None
-
-norm = self.__get_norm(cam)
-map = plt.imshow(cam, cmap="jet", aspect=aspect, norm=norm)
-self.__set_colorbar(bar_ranges_dict[item_key], i)
-map.set_alpha(0.3)
-
-self.__set_axes(cam, data_sampling_freq, dt, channel_names, time_names=time_names,
-axes_names=axes_names)
-data_name = data_names[batch_idx] if data_names is not None else "item" + str(batch_idx)
-plt.title(self.__get_cam_title(data_name, target_class, data_labels, batch_idx, item_key,
-predicted_probs_dict))
-
-# Store or show CAM
-self.__display_plot(results_dir_path, explainer_type, target_layer, target_class)
+for contrastive_foil_class in contrastive_foil_classes:
+plt.figure(figsize=fig_size)
+for i in range(n_items):
+cam, item, batch_idx, item_key = self.__get_data_for_plots(data_list, i, target_item_ids,
+cams_dict, explainer_type,
+target_layer, target_class,
+contrastive_foil_class)
+
+plt.subplot(w, h, i + 1)
+plt.imshow(item)
+aspect = "auto" if cam.shape[0] / cam.shape[1] < 0.1 else None
+
+norm = self.__get_norm(cam)
+map = plt.imshow(cam, cmap="jet", aspect=aspect, norm=norm)
+self.__set_colorbar(bar_ranges_dict[item_key], i)
+map.set_alpha(0.3)
+
+self.__set_axes(cam, data_sampling_freq, dt, channel_names, time_names=time_names,
+axes_names=axes_names)
+data_name = data_names[batch_idx] if data_names is not None else "item" + str(batch_idx)
+plt.title(self.__get_cam_title(data_name, target_class, data_labels, batch_idx, item_key,
+predicted_probs_dict, contrastive_foil_class))
+
+# Store or show CAM
+self.__display_plot(results_dir_path, explainer_type, target_layer, target_class,
+contrastive_foil_class)

 def single_channel_output_display(self, data_list: List[np.ndarray], data_labels: List[int],
 predicted_probs_dict: Dict[str, np.ndarray],
 cams_dict: Dict[str, List[np.ndarray]], explainer_types: str | List[str],
 target_classes: int | List[int], target_layers: str | List[str],
 target_item_ids: List[int] = None, desired_channels: List[int] = None,
-data_names: List[str] = None, grid_instructions: Tuple[int, int] = None,
+data_names: List[str] = None, contrastive_foil_classes: int | List[int] = None,
+grid_instructions: Tuple[int, int] = None,
 bar_ranges_dict: Dict[str, Tuple[np.ndarray, np.ndarray]] = None,
 results_dir_path: str = None, data_sampling_freq: float = None, dt: float = 10,
 channel_names: List[str | float] = None, time_names: List[str | float] = None,
@@ -340,6 +359,9 @@ class CamBuilder:
 to be displayed.
 :param data_names: (optional, default is None) A list of strings where each string represents the name of an
 input item.
+:param contrastive_foil_classes: (optional, default is None) An integer or list of integers representing the
+comparative classes (foils) for the explanation in the context of Contrastive Explanations. If None, the
+explanation would follow the classical paradigm.
 :param grid_instructions: (optional, default is None) A tuple of integers defining the desired tabular layout
 for figure subplots. The expected format is number of columns (width) x number of rows (height).
 :param bar_ranges_dict: A dictionary storing a tuple of np.ndarrays. Each tuple contains two np.ndarrays
@@ -365,8 +387,8 @@ class CamBuilder:
 """

 # Check input types
-target_classes, explainer_types, target_layers = self.__check_input_types(target_classes, explainer_types,
-target_layers)
+target_classes, explainer_types, target_layers, contrastive_foil_classes = self.__check_input_types(
+target_classes, explainer_types, target_layers, contrastive_foil_classes)
 if desired_channels is None:
 try:
 desired_channels = list(range(data_list[0].shape[1]))
@@ -387,42 +409,44 @@ class CamBuilder:
 for explainer_type in explainer_types:
 for target_layer in target_layers:
 for target_class in target_classes:
-for i in range(n_items):
-plt.figure(figsize=fig_size)
-cam, item, batch_idx, item_key = self.__get_data_for_plots(data_list, i, target_item_ids,
-cams_dict, explainer_type,
-target_layer, target_class)
-
-# Cross-CAM normalization
-minimum = np.min(cam)
-maximum = np.max(cam)
-
-data_name = data_names[batch_idx] if data_names is not None else "item" + str(batch_idx)
-desired_channels = desired_channels if desired_channels is not None else range(cam.shape[1])
-for j in range(len(desired_channels)):
-channel = desired_channels[j]
-plt.subplot(w, h, j + 1)
-try:
-cam_j = cam[channel, :]
-except IndexError:
-cam_j = cam[0, :]
-item_j = item[:, channel] if item.shape[0] == len(cam_j) else item[channel, :]
-plt.plot(item_j, color="black", linewidth=line_width)
-plt.scatter(np.arange(len(item_j)), item_j, c=cam_j, cmap="jet", marker=".",
-s=marker_width, norm=None, vmin=minimum, vmax=maximum)
-self.__set_colorbar(bar_ranges_dict[item_key], i)
-
-if channel_names is None:
-channel_names = ["Channel " + str(c) for c in desired_channels]
-self.__set_axes(cam, data_sampling_freq, dt, channel_names, time_names,
-axes_names=axes_names, only_x=True)
-plt.title(channel_names[j])
-plt.suptitle(self.__get_cam_title(data_name, target_class, data_labels, batch_idx, item_key,
-predicted_probs_dict))
-
-# Store or show CAM
-self.__display_plot(results_dir_path, explainer_type, target_layer, target_class, data_name,
-is_channel=True)
+for contrastive_foil_class in contrastive_foil_classes:
+for i in range(n_items):
+plt.figure(figsize=fig_size)
+cam, item, batch_idx, item_key = self.__get_data_for_plots(data_list, i, target_item_ids,
+cams_dict, explainer_type,
+target_layer, target_class,
+contrastive_foil_class)
+
+# Cross-CAM normalization
+minimum = np.min(cam)
+maximum = np.max(cam)
+
+data_name = data_names[batch_idx] if data_names is not None else "item" + str(batch_idx)
+desired_channels = desired_channels if desired_channels is not None else range(cam.shape[1])
+for j in range(len(desired_channels)):
+channel = desired_channels[j]
+plt.subplot(w, h, j + 1)
+try:
+cam_j = cam[channel, :]
+except IndexError:
+cam_j = cam[0, :]
+item_j = item[:, channel] if item.shape[0] == len(cam_j) else item[channel, :]
+plt.plot(item_j, color="black", linewidth=line_width)
+plt.scatter(np.arange(len(item_j)), item_j, c=cam_j, cmap="jet", marker=".",
+s=marker_width, norm=None, vmin=minimum, vmax=maximum)
+self.__set_colorbar(bar_ranges_dict[item_key], i)
+
+if channel_names is None:
+channel_names = ["Channel " + str(c) for c in desired_channels]
+self.__set_axes(cam, data_sampling_freq, dt, channel_names, time_names,
+axes_names=axes_names, only_x=True)
+plt.title(channel_names[j])
+plt.suptitle(self.__get_cam_title(data_name, target_class, data_labels, batch_idx, item_key,
+predicted_probs_dict, contrastive_foil_class))
+
+# Store or show CAM
+self.__display_plot(results_dir_path, explainer_type, target_layer, target_class,
+contrastive_foil_class, data_name, is_channel=True)

 def _get_layers_pool(self, show: bool = False, extend_search: bool = False) \
 -> Dict[str, torch.nn.Module | tf.keras.layers.Layer | Any]:
@@ -464,7 +488,8 @@ class CamBuilder:

 def _create_raw_batched_cams(self, data_list: List[np.ndarray | torch.Tensor | tf.Tensor], target_class: int,
 target_layer: str, explainer_type: str, softmax_final: bool,
-extra_inputs_list: List[Any] = None, eps: float = 1e-6) \
+extra_inputs_list: List[Any] = None, contrastive_foil_class: int = None,
+eps: float = 1e-6) \
 -> Tuple[List[np.ndarray], np.ndarray]:
 """
 Retrieves raw CAMs from an input data list based on the specified settings (defined by algorithm, target layer,
@@ -481,6 +506,9 @@ class CamBuilder:
 activation function.
 :param extra_inputs_list: (optional, defaults is None) A list of additional input objects required by the
 model's forward method.
+:param contrastive_foil_class: (optional, default is None) An integer representing the comparative class (foil)
+for the explanation in the context of Contrastive Explanations. If None, the explanation would follow the
+classical paradigm.
 :param eps: (optional, default is 1e-6) A float number used in probability clamping before logarithm application
 to avoid null or None results.

@@ -529,7 +557,7 @@ class CamBuilder:
 def __create_batched_cams(self, data_list: List[np.ndarray], target_class: int, target_layer: str,
 explainer_type: str, softmax_final: bool, data_shape_list: List[Tuple[int, int]] = None,
 extra_preprocess_inputs_list: List[List[Any]] = None, extra_inputs_list: List[Any] = None,
-eps: float = 1e-6) \
+contrastive_foil_class: int = None, eps: float = 1e-6) \
 -> Tuple[List[np.ndarray], np.ndarray, Tuple[np.ndarray, np.ndarray]]:
 """
 Prepares the input data list and retrieves CAMs based on the specified settings (defined by algorithm, target
@@ -552,6 +580,9 @@ class CamBuilder:
 represents the additional input objects required by the preprocessing method for the i-th input.
 :param extra_inputs_list: (optional, default is None) A list of additional input objects required by the model's
 forward method.
+:param contrastive_foil_class: (optional, default is None) An integer representing the comparative class (foil)
+for the explanation in the context of Contrastive Explanations. If None, the explanation would follow the
+classical paradigm.
 :param eps: (optional, default is 1e-6) A float number used in probability clamping before logarithm application
 to avoid null or None results.

@@ -589,7 +620,7 @@ class CamBuilder:

 cam_list, target_probs = self._create_raw_batched_cams(data_list, target_class, target_layer, explainer_type,
 softmax_final, extra_inputs_list=extra_inputs_list,
-eps=eps)
+contrastive_foil_class=contrastive_foil_class, eps=eps)
 self.activations = None
 self.gradients = None
 cams = np.stack(cam_list)
@@ -645,7 +676,7 @@ class CamBuilder:
 data_names: List[str], data_sampling_freq: float = None, dt: float = 10,
 aspect_factor: float = 100, bar_ranges: Tuple[np.ndarray, np.ndarray] = None,
 channel_names: List[str | float] = None, time_names: List[str | float] = None,
-axes_names: Tuple[str | None, str | None] = None) -> None:
+axes_names: Tuple[str | None, str | None] = None, contrastive_foil_class: int = None) -> None:
 """
 Create plots displaying the obtained CAMs, set their axes, and show them as multiple figures or as ".png" files.

@@ -679,6 +710,9 @@ class CamBuilder:
 :param time_names: (optional, default is None) A list of strings representing tick names for the time axis.
 :param axes_names: (optional, default is None) A tuple of strings representing names for X and Y axes,
 respectively.
+:param contrastive_foil_class: (optional, default is None) An integer representing the comparative class (foil)
+for the explanation in the context of Contrastive Explanations. If None, the explanation would follow the
+classical paradigm.
 """

 if not os.path.exists(results_dir_path):
@@ -711,19 +745,26 @@ class CamBuilder:
 self.__set_colorbar(bar_ranges, i)

 # Set title
-plt.title("CAM for class '" + str(self.class_names[target_class]) + "' (confidence = " +
-str(np.round(predicted_probs[i] * 100, 2)) + "%) - true label " +
-str(self.class_names[data_labels[i]]))
+if contrastive_foil_class is None:
+plt.title("CAM for class '" + self.class_names[target_class] + "' (confidence = " +
+str(np.round(predicted_probs[i] * 100, 2)) + "%) - true label " +
+self.class_names[data_labels[i]])
+else:
+plt.title("Why '" + self.class_names[target_class] + "' (confidence = " +
+str(np.round(predicted_probs[i][0] * 100, 2)) + "%), rather than '" +
+self.class_names[contrastive_foil_class] + "'(confidence = " +
+str(np.round(predicted_probs[i][1] * 100, 2)) + "%)?")

 # Set axis
 self.__set_axes(map, data_sampling_freq, dt, channel_names, time_names=time_names, axes_names=axes_names)

 # Store or show CAM
-self.__display_plot(results_dir_path, explainer_type, target_layer, target_class, data_name)
+self.__display_plot(results_dir_path, explainer_type, target_layer, target_class, contrastive_foil_class,
+data_name)

 def __get_data_for_plots(self, data_list: List[np.ndarray], i: int, target_item_ids: List[int],
 cams_dict: Dict[str, List[np.ndarray]], explainer_type: str, target_layer: str,
-target_class: int) -> Tuple[np.ndarray, np.ndarray, int, str]:
+target_class: int, contrastive_foil_class: int) -> Tuple[np.ndarray, np.ndarray, int, str]:
 """
 Prepares input data and CAMs to be plotted, identifying the string key to retrieve CAMs, probabilities and
 ranges from the corresponding dictionaries.
@@ -740,6 +781,9 @@ class CamBuilder:
 identify either PyTorch named modules, TensorFlow/Keras layers, or it should be a class dictionary key,
 used to retrieve the layer from the class attributes.
 :param target_class: (mandatory) An integer representing the target class for the explanation.
+:param contrastive_foil_class: (mandatory) An integer representing the comparative classes (foils) for the
+explanation in the context of Contrastive Explanations. If None, the explanation would follow the classical
+paradigm.

 :return:
 - cam: The CAM for the given setting (defined by algorithm, target layer, and target class), corresponding
@@ -752,6 +796,8 @@ class CamBuilder:
 batch_idx = target_item_ids[i]
 item = data_list[batch_idx]
 item_key = explainer_type + "_" + target_layer + "_class" + str(target_class)
+if contrastive_foil_class is not None:
+item_key += "_foil" + str(contrastive_foil_class)
 cam = cams_dict[item_key][batch_idx]

 item_dims = item.shape
@@ -823,7 +869,7 @@ class CamBuilder:
 plt.ylabel(axes_names[1])

 def __get_cam_title(self, item_name: str, target_class: int, data_labels: List[int], batch_idx: int, item_key: str,
-predicted_probs: Dict[str, np.ndarray]) -> str:
+predicted_probs: Dict[str, np.ndarray], contrastive_foil_class: int) -> str:
 """
 Builds the CAM title for a given item and target class.

@@ -836,18 +882,28 @@ class CamBuilder:
 and target class).
 :param predicted_probs: (mandatory) A np.ndarray, representing the inferred class probabilities for each item in
 the input list.
+:param contrastive_foil_class: (mandatory) An integer representing the comparative class (foil) for the
+explanation in the context of Contrastive Explanations. If None, the explanation would follow the classical
+paradigm.

 :return:
 - title: A string representing the title of the CAM for a given item and target class.
 """
+if contrastive_foil_class is None:
+title = ("'" + item_name + "': CAM for class '" + self.class_names[target_class] + "' (confidence = " +
+str(np.round(predicted_probs[item_key][batch_idx] * 100, 2)) + "%) - true class " +
+self.class_names[data_labels[batch_idx]])
+else:
+title = ("'" + item_name + "' (true class '" + self.class_names[data_labels[batch_idx]] + "'): Why '" +
+self.class_names[target_class] + "' (confidence = " +
+str(np.round(predicted_probs[item_key][batch_idx][0] * 100, 2)) + "%), rather than '" +
+self.class_names[contrastive_foil_class] + "' (confidence = " +
+str(np.round(predicted_probs[item_key][batch_idx][1] * 100, 2)) + "%)?")

-title = ("'" + item_name + "': CAM for class '" + self.class_names[target_class] + "' (confidence = " +
-str(np.round(predicted_probs[item_key][batch_idx] * 100, 2)) + "%) - true class " +
-self.class_names[data_labels[batch_idx]])
 return title

 def __display_plot(self, results_dir_path: str, explainer_type: str, target_layer: str, target_class: int,
-item_name: str = None, is_channel: bool = False) -> None:
+contrastive_foil_class: int, item_name: str = None, is_channel: bool = False) -> None:
 """
 Show one CAM plot as a figure or as a ".png" file.

@@ -859,6 +915,9 @@ class CamBuilder:
 identify either PyTorch named modules, TensorFlow/Keras layers, or it should be a class dictionary key,
 used to retrieve the layer from the class attributes.
 :param target_class: (mandatory) An integer representing the target class for the explanation.
+:param contrastive_foil_class: (mandatory) An integer representing the comparative class (foil) for the
+explanation in the context of Contrastive Explanations. If None, the explanation would follow the classical
+paradigm.
 :param item_name: (optional, default is False) A string representing the name of an input item.
 :param is_channel: (optional, default is False) A boolean flag indicating whether the figure represents graphs
 of multiple input channels, to discriminate it from other display modalities.
@@ -882,13 +941,18 @@ class CamBuilder:
 if data_name not in os.listdir(results_dir_path):
 os.mkdir(filepath)
 filename = (name_addon + explainer_type + "_" + re.sub(r"\W", "_", target_layer) + "_class" +
-str(target_class) + ".png")
+str(target_class))
+if contrastive_foil_class is not None:
+filename += "_foil" + str(contrastive_foil_class)
+filename += ".png"

 # Communicate outcome
 descr_addon1 = "for item '" + item_name + "' " if item_name is not None else ""
-self.__print_justify("Storing " + descr_addon + "output display " + descr_addon1 + "(class " +
-self.class_names[target_class] + ", layer " + target_layer + ", algorithm " + explainer_type +
-") as '" + filename + "'...")
+tmp_txt = ("Storing " + descr_addon + "output display " + descr_addon1 + "(class " +
+self.class_names[target_class] + ", layer " + target_layer + ", algorithm " + explainer_type)
+if contrastive_foil_class is not None:
+tmp_txt += ", foil class " + self.class_names[contrastive_foil_class]
+self.__print_justify(tmp_txt + ") as '" + filename + "'...")

 plt.savefig(os.path.join(filepath, filename), format="png", bbox_inches="tight", pad_inches=0,
 dpi=500)
@@ -986,7 +1050,8 @@ class CamBuilder:

 @staticmethod
 def __check_input_types(target_classes: int | List[int], explainer_types: str | List[str],
-target_layers: str | List[str]) -> Tuple[List[int], List[str], List[str]]:
+target_layers: str | List[str], contrastive_foil_classes: int | List[int]) \
+-> Tuple[List[int], List[str], List[str], List[int]]:
 """
 Checks whether the setting specifics (target classes, explainer algorithms, and target layers) are provided
 as lists of values. If not, they are transformed into a list.
@@ -999,6 +1064,9 @@ class CamBuilder:
 :param target_layers: (mandatory) A string or a list of strings representing the target layers for the
 explanations. These strings should identify either PyTorch named modules, TensorFlow/Keras layers, or they
 should be class dictionary keys, used to retrieve each layer from the class attributes.
+:param contrastive_foil_classes: (mandatory) An integer or list of integers representing the comparative classes
+(foils) for the explanation in the context of Contrastive Explanations. If None, the explanation would
+follow the classical paradigm.

 :return:
 - target_classes: A list of integers representing the target classes for the explanation.
@@ -1007,6 +1075,9 @@ class CamBuilder:
 - target_layers: A list of strings representing the target layers for the explanations. These strings should
 identify either PyTorch named modules, TensorFlow/Keras layers, or they should be class dictionary keys,
 used to retrieve each layer from the class attributes.
+- contrastive_foil_classes: A list of intergers representing the comparative classes (foils) for the
+explanation in the context of Contrastive Explanations. If None, the explanation would follow the classical
+paradigm.
 """

 if not isinstance(target_classes, list):
@@ -1015,8 +1086,10 @@ class CamBuilder:
 explainer_types = [explainer_types]
 if not isinstance(target_layers, list):
 target_layers = [target_layers]
+if not isinstance(contrastive_foil_classes, list):
+contrastive_foil_classes = [contrastive_foil_classes]

-return target_classes, explainer_types, target_layers
+return target_classes, explainer_types, target_layers, contrastive_foil_classes

 @staticmethod
 def __set_grid(n_items: int, grid_instructions: Tuple[int, int]) -> Tuple[int, int]:
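`__check_input_types` now also wraps the foils: every scalar argument becomes a one-element list so the nested loops in the display methods can iterate uniformly, and a `None` foil becomes `[None]`, i.e. a single non-contrastive pass. A tiny stand-alone illustration of that behaviour (the `normalise` helper below is illustrative, not package code):

```python
# Illustrative only: mirrors the scalar-to-list normalisation added in the hunk above.
def normalise(target_classes, explainer_types, target_layers, contrastive_foil_classes):
    wrap = lambda value: value if isinstance(value, list) else [value]
    return (wrap(target_classes), wrap(explainer_types),
            wrap(target_layers), wrap(contrastive_foil_classes))

print(normalise(0, "Grad-CAM", "conv1d_layer_1", None))
# ([0], ['Grad-CAM'], ['conv1d_layer_1'], [None]) -> a single classical (non-contrastive) pass
```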
@@ -125,7 +125,8 @@ class TorchCamBuilder(CamBuilder):

 def _create_raw_batched_cams(self, data_list: List[np.ndarray | torch.Tensor], target_class: int,
 target_layer: nn.Module, explainer_type: str, softmax_final: bool,
-extra_inputs_list: List[Any] = None, eps: float = 1e-6) \
+extra_inputs_list: List[Any] = None, contrastive_foil_class: int = None,
+eps: float = 1e-6) \
 -> Tuple[List[np.ndarray], np.ndarray]:
 """
 Retrieves raw CAMs from an input data list based on the specified settings (defined by algorithm, target layer,
@@ -142,6 +143,9 @@ class TorchCamBuilder(CamBuilder):
 activation function.
 :param extra_inputs_list: (optional, defaults is None) A list of additional input objects required by the
 model's forward method.
+:param contrastive_foil_class: (optional, default is None) An integer representing the comparative class (foil)
+for the explanation in the context of Contrastive Explanations. If None, the explanation would follow the
+classical paradigm.
 :param eps: (optional, default is 1e-6) A float number used in probability clamping before logarithm application
 to avoid null or None results.

@@ -215,12 +219,18 @@ class TorchCamBuilder(CamBuilder):
 target_scores = torch.cat([-outputs, outputs], dim=1)
 target_probs = torch.cat([1 - p, p], dim=1)

-target_probs = target_probs[:, target_class].cpu().detach().numpy()
+class_idx = target_class if contrastive_foil_class is None else [target_class, contrastive_foil_class]
+target_probs = target_probs[:, class_idx].cpu().detach().numpy()

 cam_list = []
 for i in range(len(data_list)):
 self.model.zero_grad()
-target_score = target_scores[i, target_class]
+if contrastive_foil_class is None:
+target_score = target_scores[i, target_class]
+else:
+contrastive_foil = torch.autograd.Variable(torch.from_numpy(np.asarray([contrastive_foil_class]
+* target_scores.shape[0])))
+target_score = nn.CrossEntropyLoss()(target_scores[i].unsqueeze(0), contrastive_foil)
 target_score.backward(retain_graph=True)

 if explainer_type == "HiResCAM":
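In the PyTorch builder, the contrastive branch above backpropagates a cross-entropy computed against the foil class instead of the plain class score. A toy, self-contained sketch of that gradient computation (the logits are invented; only the use of `nn.CrossEntropyLoss` with the foil as target reflects the hunk):

```python
# Toy sketch of the contrastive score shown above; the logits are invented.
import torch
import torch.nn as nn

logits = torch.tensor([[2.0, 0.5, -1.0]], requires_grad=True)  # one item, three classes
foil = torch.tensor([1])                                        # foil class index
contrastive_score = nn.CrossEntropyLoss()(logits, foil)
contrastive_score.backward()
print(logits.grad)  # gradient backpropagated in place of the plain class-score gradient
```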
@@ -150,7 +150,8 @@ class TfCamBuilder(CamBuilder):

 def _create_raw_batched_cams(self, data_list: List[np.ndarray | tf.Tensor], target_class: int,
 target_layer: tf.keras.layers.Layer, explainer_type: str, softmax_final: bool,
-extra_inputs_list: List[Any] = None, eps: float = 1e-6) \
+extra_inputs_list: List[Any] = None, contrastive_foil_class: int = None,
+eps: float = 1e-6) \
 -> Tuple[List[np.ndarray], np.ndarray]:
 """
 Retrieves raw CAMs from an input data list based on the specified settings (defined by algorithm, target layer,
@@ -167,6 +168,9 @@ class TfCamBuilder(CamBuilder):
 activation function.
 :param extra_inputs_list: (optional, defaults is None) A list of additional input objects required by the
 model's forward method.
+:param contrastive_foil_class: (optional, default is None) An integer representing the comparative class (foil)
+for the explanation in the context of Contrastive Explanations. If None, the explanation would follow the
+classical paradigm.
 :param eps: (optional, default is 1e-6) A float number used in probability clamping before logarithm application
 to avoid null or None results.

@@ -230,8 +234,15 @@ class TfCamBuilder(CamBuilder):
 target_scores = tf.concat([-outputs, outputs], axis=1)
 target_probs = tf.concat([1 - p, p], axis=1)

-target_scores = target_scores[:, target_class]
-target_probs = target_probs[:, target_class]
+if contrastive_foil_class is not None:
+contrastive_foil = tf.constant([contrastive_foil_class] * target_scores.shape[0], dtype=tf.int32)
+target_scores = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)(contrastive_foil,
+target_scores)
+target_probs = tf.gather(target_probs, [target_class, contrastive_foil_class], axis=1)
+else:
+target_scores = target_scores[:, target_class]
+target_probs = target_probs[:, target_class]
+
 self.gradients = tape.gradient(target_scores, self.activations)

 cam_list = []
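The TensorFlow branch mirrors the PyTorch logic: when a foil is given, the quantity differentiated under the `GradientTape` is a sparse categorical cross-entropy against the foil class, and the probabilities of both the target and the foil are kept for the plot titles. A toy sketch of that gradient computation (made-up logits, not package code):

```python
# Toy sketch of the TensorFlow contrastive score shown above; the logits are invented.
import tensorflow as tf

logits = tf.Variable([[2.0, 0.5, -1.0]])   # one item, three classes
foil = tf.constant([1], dtype=tf.int32)    # foil class index
with tf.GradientTape() as tape:
    contrastive_score = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)(foil, logits)
grads = tape.gradient(contrastive_score, logits)
print(grads)  # gradient taken in place of the plain class-score gradient
```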
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: signal-grad-cam
-Version: 0.1.7
+Version: 1.0.0
 Summary: SignalGrad-CAM aims at generalising Grad-CAM to one-dimensional applications, while enhancing usability and efficiency.
 Home-page: https://github.com/samuelepe11/signal_grad_cam
 Author: Samuele Pe
@@ -111,7 +111,7 @@ def preprocess_fn(signal):
 class_labels = ["Class 1", "Class 2", "Class 3"]

 # Define the CAM builder
-cam_builder = TorchCamBuilder(model=model, transform_fc=preprocess_fc, class_names=class_labels, time_axs=1)
+cam_builder = TorchCamBuilder(model=model, transform_fn=preprocess_fn, class_names=class_labels, time_axs=1)
 ```

 <p align="justify">Now, you can use the `cam_builder` object to generate class activation maps from a list of input data using the <i>`get_cams`</i> method. You can specify multiple algorithm names, target layers, or target classes as needed.
@@ -133,7 +133,7 @@ target_classes = [0, 1]
 # Create CAMs
 cam_dict, predicted_probs_dict, score_ranges_dict = cam_builder.get_cam(data_list=data_list, data_labels=data_labels_list,
 target_classes=target_classes, explainer_types="Grad-CAM",
-target_layer="conv1d_layer_1", softmax_final=True,
+target_layers="conv1d_layer_1", softmax_final=True,
 data_sampling_freq=25, dt=1, axes_names=("Time (s)", "Channels"))

 # Visualize single channel importance
@@ -141,7 +141,7 @@ selected_channels_indices = [0, 2, 10]
 cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
 cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
 target_layers="target_layer_name", desired_channels=selected_channels_indices,
-grid_instructions=(1, len(selected_channels_indices), bar_ranges=score_ranges_dict,
+grid_instructions=(1, len(selected_channels_indices), bar_ranges_dict=score_ranges_dict,
 results_dir="path_to_your_result_directoory", data_sampling_freq=25, dt=1, line_width=0.5,
 axes_names=("Time (s)", "Amplitude (mV)"))

@@ -149,11 +149,11 @@ cam_builder.single_channel_output_display(data_list=data_list, data_labels=data_
 cam_builder.overlapped_output_display(data_list=data_list, data_labels=data_labels_list, predicted_probs_dict=predicted_probs_dict,
 cams_dict=cam_dict, explainer_types="Grad-CAM", target_classes=target_classes,
 target_layers="target_layer_name", fig_size=(20 * len(your_data_X), 20),
-grid_instructions=(len(your_data_X), 1), bar_ranges=score_ranges_dict, data_names=item_names
-results_dir="path_to_your_result_directoory", data_sampling_freq=25, dt=1)
+grid_instructions=(len(your_data_X), 1), bar_ranges_dict=score_ranges_dict, data_names=item_names
+results_dir_path="path_to_your_result_directoory", data_sampling_freq=25, dt=1)
 ```

-You can also check the python scripts [here](https://github.com/bmi-labmedinfo/signal_grad_cam/examples).
+You can also explore the Python scripts available in the examples directory of the repository [here](https://github.com/bmi-labmedinfo/signal_grad_cam/examples), which provide complete, ready-to-run demonstrations for both PyTorch and TensorFlow/Keras models. These examples include open-source models for image and signal classification using 1D- and 2D-CNN architectures, and they illustrate how to apply the recently added feature for creating and displaying "contrastive explanations" in each scenario.

 See the [open issues](https://github.com/bmi-labmedinfo/signal_grad_cam/issues) for a full list of proposed features (and known issues).

@@ -163,11 +163,25 @@ See the [open issues](https://github.com/bmi-labmedinfo/signal_grad_cam/issues)
 If you use the SignalGrad-CAM software for your projects, please cite it as:

 ```
-@software{Pe_SignalGrad_CAM_2025,
-author = {Pe, Samuele and Buonocore, Tommaso Mario and Giovanna, Nicora and Enea, Parimbelli},
+@inproceedings{pe_sgradcam_2025_paper,
+author = {Pe, Samuele and Buonocore, Tommaso Mario and Giovanna, Nicora and Enea, Parimbelli}},
+title = {SignalGrad-CAM: Beyond Image Explanation},
+booktitle = {Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025), Istanbul, Turkey, July 9-11, 2025},
+series = {CEUR Workshop Proceedings},
+volume = {4017},
+pages = {209--216},
+url = {https://ceur-ws.org/Vol-4017/paper_27.pdf},
+publisher = {CEUR-WS.org},
+year = {2025}
+}
+```
+
+```
+@software{pe_sgradcam_2025_repo,
+author = {Pe, Samuele},
 title = {{SignalGrad-CAM}},
 url = {https://github.com/bmi-labmedinfo/signal_grad_cam},
-version = {0.0.1},
+version = {1.0.0},
 year = {2025}
 }
 ```
File without changes