nettracer3d 0.5.3__tar.gz → 0.5.4__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (27)
  1. {nettracer3d-0.5.3/src/nettracer3d.egg-info → nettracer3d-0.5.4}/PKG-INFO +7 -6
  2. nettracer3d-0.5.4/README.md +17 -0
  3. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/pyproject.toml +1 -1
  4. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d/community_extractor.py +42 -0
  5. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d/nettracer.py +23 -0
  6. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d/nettracer_gui.py +320 -68
  7. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d/segmenter.py +1 -1
  8. {nettracer3d-0.5.3 → nettracer3d-0.5.4/src/nettracer3d.egg-info}/PKG-INFO +7 -6
  9. nettracer3d-0.5.3/README.md +0 -16
  10. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/LICENSE +0 -0
  11. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/setup.cfg +0 -0
  12. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d/__init__.py +0 -0
  13. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d/hub_getter.py +0 -0
  14. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d/modularity.py +0 -0
  15. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d/morphology.py +0 -0
  16. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d/network_analysis.py +0 -0
  17. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d/network_draw.py +0 -0
  18. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d/node_draw.py +0 -0
  19. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d/proximity.py +0 -0
  20. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d/run.py +0 -0
  21. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d/simple_network.py +0 -0
  22. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d/smart_dilate.py +0 -0
  23. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d.egg-info/SOURCES.txt +0 -0
  24. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d.egg-info/dependency_links.txt +0 -0
  25. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d.egg-info/entry_points.txt +0 -0
  26. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d.egg-info/requires.txt +0 -0
  27. {nettracer3d-0.5.3 → nettracer3d-0.5.4}/src/nettracer3d.egg-info/top_level.txt +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.2
  Name: nettracer3d
- Version: 0.5.3
+ Version: 0.5.4
  Summary: Scripts for intializing and analyzing networks from segmentations of three dimensional images.
  Author-email: Liam McLaughlin <mclaughlinliam99@gmail.com>
  Project-URL: User_Tutorial, https://www.youtube.com/watch?v=cRatn5VTWDY
@@ -44,9 +44,10 @@ NetTracer3D is free to use/fork for academic/nonprofit use so long as citation i
 
  NetTracer3D was developed by Liam McLaughlin while working under Dr. Sanjay Jain at Washington University School of Medicine.
 
- -- Version 0.5.3 updates --
+ -- Version 0.5.4 updates --
 
- 1. Improved calculate volumes method. Previous method used np.argwhere() to count voxels of labeled objects in parallel which was quite strenuous in large arrays with many objects. New method uses np.bincount() which uses optimized numpy C libraries to do the same.
- 2. scipy.ndimage.find_objects() method was replaced as the method to find bounding boxes for objects when searching for object neighborhoods for the morphological proximity network and the edge < > node interaction quantification. This new version should be substantially faster in big arrays with many labels. (Depending on how well this improves performance, I may reimplement the secondary network search algorithm, as a side-option, which uses the same parallel-search within subarray strategies, as opposed to the primary network search algorithm that uses distance transforms).
- 3. Image viewer window can now load in .nii format images, as well as .jpeg, .jpg, and .png. The nibabel library was added to the dependencies to enable .nii loading, although this is currently all it is used for (and the gui will still run without nibabel).
- 4. Fixed bug regarding deleting edge objects.
+ 1. Added a new function to the GUI under Image -> Overlays -> Color Nodes (or Edges). It generates an RGB array corresponding to the node/edge labels, where each node or edge (depending on which array is selected) is randomly assigned a unique RGB color in an overlay channel. This can be used, for example, to color-code labeled branches for easy identification of which branch is which.
+ 
+ 2. Improved the general functionality of the highlight overlay (used when selecting nodes/edges). Previously, selecting a node/edge had the program create an equal-sized overlay array, find all objects corresponding to the selection, fill them into the new highlight overlay, then display that image. This was understandably quite slow in big arrays, where the system wasted a lot of time searching the entire array every time something was selected. The new version retains this behavior for arrays below 125 million voxels, since search time is manageable at that size. For larger arrays, it instead draws the highlight for the selected objects into the current slice only, rendering a new slice whenever the user scrolls through the stack (although the entire highlight overlay is still initialized as a placeholder). Functions that require the entire highlight overlay (such as masking) are updated correspondingly to draw it in full before executing. This will likely remain the behavior moving forward; to disable it, open nettracer_gui.py and set self.mini_thresh to some comically large value. In my testing the new highlight overlay works effectively the same but faster, although it is possible a bug slipped through, which I will fix if informed about it (or if I find it myself).
+ 
+ 3. For the machine learning segmenter, the system now attempts to segment the image by chunking the array into the largest chunks that can be divided across all CPU cores. Previously the system split the array into 64^3-voxel chunks and passed those to the CPU cores until everything was processed. I am not sure which version is more efficient, so this is somewhat of a test; in theory the new behavior could be faster because it asks Python to interpret less code.
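Update 3 describes dividing the array into the largest chunks that spread across all CPU cores rather than fixed 64^3 blocks. A minimal sketch of that chunking idea (the helper name and the choice of axis 0 are illustrative, not the segmenter's actual code):

```python
import multiprocessing as mp
import numpy as np

def split_for_cores(array, n_workers=None):
    """Split a stack along axis 0 into roughly equal chunks, one per
    CPU core, instead of many small fixed-size blocks."""
    n_workers = n_workers or mp.cpu_count()
    # np.array_split tolerates sizes that do not divide evenly
    return [c for c in np.array_split(array, n_workers, axis=0) if c.size]

chunks = split_for_cores(np.zeros((100, 50, 50)), n_workers=4)
```

Each chunk is then handed to a worker; fewer, larger chunks mean less per-task Python overhead, which is the speedup the changelog speculates about.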
@@ -0,0 +1,17 @@
+ NetTracer3D is a Python package developed for both 2D and 3D analysis of microscopic images in the .tif file format. It supports generation of 3D networks showing the relationships between objects (or nodes) in three-dimensional space, based either on their own proximity or on connectivity via connecting objects such as nerves or blood vessels. In addition to these functionalities are several advanced 3D data processing algorithms, such as labeling of branched structures or abstraction of branched structures into networks. Note that NetTracer3D uses segmented data, which can be segmented in other software such as ImageJ and imported into NetTracer3D, although it offers its own segmentation via intensity and volumetric thresholding, or random forest machine learning segmentation. NetTracer3D currently has a fully functional GUI. To use the GUI, after installing the nettracer3d package via pip, enter the command 'nettracer3d' in your command prompt:
+ 
+ 
+ This GUI is built on the PyQt6 package and therefore may not function in Docker containers or virtual environments that cannot support PyQt6 displays. More advanced documentation is coming down the line, but for now please see: https://www.youtube.com/watch?v=cRatn5VTWDY
+ for a video tutorial on using the GUI.
+ 
+ NetTracer3D is free to use/fork for academic/nonprofit use so long as citation is provided, and is available for commercial use at a fee (see license file for information).
+ 
+ NetTracer3D was developed by Liam McLaughlin while working under Dr. Sanjay Jain at Washington University School of Medicine.
+ 
+ -- Version 0.5.4 updates --
+ 
+ 1. Added a new function to the GUI under Image -> Overlays -> Color Nodes (or Edges). It generates an RGB array corresponding to the node/edge labels, where each node or edge (depending on which array is selected) is randomly assigned a unique RGB color in an overlay channel. This can be used, for example, to color-code labeled branches for easy identification of which branch is which.
+ 
+ 2. Improved the general functionality of the highlight overlay (used when selecting nodes/edges). Previously, selecting a node/edge had the program create an equal-sized overlay array, find all objects corresponding to the selection, fill them into the new highlight overlay, then display that image. This was understandably quite slow in big arrays, where the system wasted a lot of time searching the entire array every time something was selected. The new version retains this behavior for arrays below 125 million voxels, since search time is manageable at that size. For larger arrays, it instead draws the highlight for the selected objects into the current slice only, rendering a new slice whenever the user scrolls through the stack (although the entire highlight overlay is still initialized as a placeholder). Functions that require the entire highlight overlay (such as masking) are updated correspondingly to draw it in full before executing. This will likely remain the behavior moving forward; to disable it, open nettracer_gui.py and set self.mini_thresh to some comically large value. In my testing the new highlight overlay works effectively the same but faster, although it is possible a bug slipped through, which I will fix if informed about it (or if I find it myself).
+ 
+ 3. For the machine learning segmenter, the system now attempts to segment the image by chunking the array into the largest chunks that can be divided across all CPU cores. Previously the system split the array into 64^3-voxel chunks and passed those to the CPU cores until everything was processed. I am not sure which version is more efficient, so this is somewhat of a test; in theory the new behavior could be faster because it asks Python to interpret less code.
@@ -1,6 +1,6 @@
  [project]
  name = "nettracer3d"
- version = "0.5.3"
+ version = "0.5.4"
  authors = [
    { name="Liam McLaughlin", email="mclaughlinliam99@gmail.com" },
  ]
@@ -9,6 +9,7 @@ from scipy import ndimage
  from scipy.ndimage import zoom
  from networkx.algorithms import community
  from community import community_louvain
+ import random
  from . import node_draw
 
 
@@ -781,6 +782,47 @@ def generate_distinct_colors(n_colors: int) -> List[Tuple[int, int, int]]:
      colors.append(rgb)
      return colors
 
+ def assign_node_colors(node_list: List[int], labeled_array: np.ndarray) -> Tuple[np.ndarray, Dict[int, str]]:
+     """
+     Assign distinct colors to nodes and create an RGBA image.
+ 
+     Args:
+         node_list: List of node IDs
+         labeled_array: 3D numpy array with labels corresponding to node IDs
+ 
+     Returns:
+         Tuple of (RGBA-coded numpy array (H, W, D, 4), dictionary mapping nodes to color names)
+     """
+ 
+     # Sort node IDs (descending)
+     sorted_nodes = sorted(node_list, reverse=True)
+ 
+     # Generate distinct colors
+     colors = generate_distinct_colors(len(node_list))
+     random.shuffle(colors)  # Shuffled so adjacent structures are likely to stand out
+ 
+     # Convert RGB colors to RGBA by adding an alpha channel
+     colors_rgba = [(r, g, b, 255) for r, g, b in colors]  # Full opacity for colored regions
+ 
+     # Create mapping from node to color
+     node_to_color = {node: colors_rgba[i] for i, node in enumerate(sorted_nodes)}
+ 
+     # Create RGBA array (initialize with transparent background)
+     rgba_array = np.zeros((*labeled_array.shape, 4), dtype=np.uint8)
+ 
+     # Assign colors to each voxel based on its label
+     for label in np.unique(labeled_array):
+         if label in node_to_color:  # Skip background (usually label 0)
+             mask = labeled_array == label
+             for i in range(4):  # RGBA channels
+                 rgba_array[mask, i] = node_to_color[label][i]
+ 
+     # Extract the RGB portion of node_to_color for color naming
+     node_to_color_rgb = {k: tuple(v[:3]) for k, v in node_to_color.items()}
+     node_to_color_names = convert_node_colors_to_names(node_to_color_rgb)
+ 
+     return rgba_array, node_to_color_names
+ 
  def assign_community_colors(community_dict: Dict[int, int], labeled_array: np.ndarray) -> Tuple[np.ndarray, Dict[int, str]]:
      """
      Assign distinct colors to communities and create an RGBA image.
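The new assign_node_colors function loops over np.unique labels and masks the whole array once per label. As a hedged alternative sketch (not the package's implementation): a lookup table of shape (max_label + 1, 4) indexed by the label array colors every voxel in one vectorized step:

```python
import numpy as np

def colorize_labels(labeled, seed=None):
    """Map every nonzero label to a random RGBA color through a
    lookup table; fancy indexing colors all voxels at once."""
    rng = np.random.default_rng(seed)
    n = int(labeled.max())
    lut = np.zeros((n + 1, 4), dtype=np.uint8)   # row 0 stays transparent
    lut[1:, :3] = rng.integers(0, 256, size=(n, 3), dtype=np.uint8)
    lut[1:, 3] = 255                             # opaque labels
    return lut[labeled]                          # shape (*labeled.shape, 4)
```

Unlike generate_distinct_colors, purely random rows can collide for very large label counts, so this is a sketch of the indexing trick rather than a drop-in replacement.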
@@ -4091,6 +4091,29 @@ class Network_3D:
 
 
          return image, output
+ 
+     def node_to_color(self, down_factor = None, mode = 0):
+ 
+         if mode == 0:
+             array = self._nodes
+         elif mode == 1:
+             array = self._edges
+ 
+         items = list(np.unique(array))
+         if 0 in items:
+             del items[0]
+ 
+         if down_factor is not None:
+             original_shape = array.shape
+             array = downsample(array, down_factor)
+ 
+         array, output = community_extractor.assign_node_colors(items, array)
+ 
+         if down_factor is not None:
+             array = upsample_with_padding(array, down_factor, original_shape)
+ 
+         return array, output
 
 
 
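node_to_color optionally downsamples before coloring and scales the RGBA result back to the original shape. A self-contained numpy-only sketch of that pattern (the names and the fixed red stand-in color are illustrative; the real method delegates to downsample, assign_node_colors, and upsample_with_padding):

```python
import numpy as np

def colorize_downsampled(labels, down_factor):
    """Downsample -> colorize -> upsample, mirroring node_to_color's
    down_factor path. Assumes each dimension divides by down_factor;
    the single red stand-in replaces per-label color assignment."""
    small = labels[::down_factor, ::down_factor, ::down_factor]
    rgba = np.zeros((*small.shape, 4), dtype=np.uint8)
    rgba[small > 0] = (255, 0, 0, 255)
    # nearest-neighbor upsample of the three spatial axes only,
    # leaving the trailing RGBA axis untouched
    for axis in range(3):
        rgba = np.repeat(rgba, down_factor, axis=axis)
    return rgba
```

Color assignment then runs over the small array, which is why the down_factor option helps on large volumes.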
@@ -397,6 +397,8 @@ class ImageViewerWindow(QMainWindow):
          # Initialize highlight overlay
          self.highlight_overlay = None
          self.highlight_bounds = None  # Store bounds for positioning
+         self.mini_overlay = False  # True while the overlay is being drawn one slice at a time
+         self.mini_thresh = (500*500*500)  # Voxel count above which mini overlays are used
 
      def start_left_scroll(self):
          """Start scrolling left when left arrow is pressed."""
@@ -443,6 +445,8 @@ class ImageViewerWindow(QMainWindow):
              edge_indices (list): List of edge indices to highlight
          """
 
+         self.mini_overlay = False  # This method renders the entire overlay, so the mini-overlay flag must be reset.
+ 
          def process_chunk(chunk_data, indices_to_check):
              """Process a single chunk of the array to create highlight mask"""
              mask = np.isin(chunk_data, indices_to_check)
@@ -463,6 +467,9 @@ class ImageViewerWindow(QMainWindow):
          if overlay1_indices is not None:
              if 0 in overlay1_indices:
                  overlay1_indices.remove(0)
+         if overlay2_indices is not None:
+             if 0 in overlay2_indices:
+                 overlay2_indices.remove(0)
 
          if node_indices is None:
              node_indices = []
@@ -628,6 +635,110 @@ class ImageViewerWindow(QMainWindow):
          # Update display
          self.update_display(preserve_zoom=(current_xlim, current_ylim), called = True)
 
+     def create_mini_overlay(self, node_indices = None, edge_indices = None):
+         """
+         Create a binary overlay highlighting specific nodes and/or edges using parallel processing, one slice at a time for efficiency.
+ 
+         Args:
+             node_indices (list): List of node indices to highlight
+             edge_indices (list): List of edge indices to highlight
+         """
+ 
+         def process_chunk(chunk_data, indices_to_check):
+             """Process a single chunk of the array to create highlight mask"""
+             mask = np.isin(chunk_data, indices_to_check)
+             return mask * 255
+ 
+         if node_indices is not None:
+             if 0 in node_indices:
+                 node_indices.remove(0)
+         if edge_indices is not None:
+             if 0 in edge_indices:
+                 edge_indices.remove(0)
+ 
+         if node_indices is None:
+             node_indices = []
+         if edge_indices is None:
+             edge_indices = []
+ 
+         current_xlim = self.ax.get_xlim() if hasattr(self, 'ax') and self.ax.get_xlim() != (0, 1) else None
+         current_ylim = self.ax.get_ylim() if hasattr(self, 'ax') and self.ax.get_ylim() != (0, 1) else None
+ 
+         if not node_indices and not edge_indices:
+             self.highlight_overlay = None
+             self.update_display(preserve_zoom=(current_xlim, current_ylim))
+             return
+ 
+         # Get the shape of the full array from any existing channel
+         for channel in self.channel_data:
+             if channel is not None:
+                 full_shape = channel.shape
+                 break
+         else:
+             return  # No valid channels to get shape from
+ 
+         # Initialize full-size overlay
+         if self.highlight_overlay is None:
+             self.highlight_overlay = np.zeros(full_shape, dtype=np.uint8)
+ 
+         # Get number of CPU cores
+         num_cores = mp.cpu_count()
+ 
+         # Calculate chunk size along y-axis
+         chunk_size = full_shape[1] // num_cores
+         if chunk_size < 1:
+             chunk_size = 1
+ 
+         def process_channel(channel_data, indices, array_shape):
+             if channel_data is None or not indices:
+                 return None
+ 
+             # Create chunks
+             chunks = []
+             for i in range(0, array_shape[1], chunk_size):
+                 end = min(i + chunk_size, array_shape[1])
+                 chunks.append(channel_data[i:end, :])
+ 
+             # Process chunks in parallel using ThreadPoolExecutor
+             process_func = partial(process_chunk, indices_to_check=indices)
+ 
+             with ThreadPoolExecutor(max_workers=num_cores) as executor:
+                 chunk_results = list(executor.map(process_func, chunks))
+ 
+             # Reassemble the chunks
+             return np.concatenate(chunk_results, axis=0)
+ 
+         # Process nodes and edges concurrently using thread pools
+         with ThreadPoolExecutor(max_workers=num_cores) as executor:
+             try:
+                 slice_node = self.channel_data[0][self.current_slice, :, :]  # The only major difference from the full overlay: only the current slice is examined
+                 future_nodes = executor.submit(process_channel, slice_node, node_indices, full_shape)
+                 node_overlay = future_nodes.result()
+             except:
+                 node_overlay = None
+             try:
+                 slice_edge = self.channel_data[1][self.current_slice, :, :]
+                 future_edges = executor.submit(process_channel, slice_edge, edge_indices, full_shape)
+                 edge_overlay = future_edges.result()
+             except:
+                 edge_overlay = None
+ 
+         # Combine results
+         self.highlight_overlay[self.current_slice, :, :] = np.zeros_like(self.highlight_overlay[self.current_slice, :, :])
+         if node_overlay is not None:
+             self.highlight_overlay[self.current_slice, :, :] = np.maximum(self.highlight_overlay[self.current_slice, :, :], node_overlay)
+         if edge_overlay is not None:
+             self.highlight_overlay[self.current_slice, :, :] = np.maximum(self.highlight_overlay[self.current_slice, :, :], edge_overlay)
+ 
+         # Update display
+         self.update_display(preserve_zoom=(current_xlim, current_ylim))
 
 
 
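The core of create_mini_overlay is that np.isin runs against one 2D slice instead of the whole stack. Stripped of the GUI plumbing, the per-slice highlight reduces to (function name illustrative):

```python
import numpy as np

def highlight_slice(labels_2d, selected):
    """Binary highlight for a single slice: np.isin touches H*W voxels
    rather than the full Z*H*W stack."""
    return np.isin(labels_2d, list(selected)).astype(np.uint8) * 255
```

Re-running this on scroll costs one slice per frame, which is why the mini overlay stays responsive on stacks above the 125-million-voxel threshold.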
@@ -890,14 +1001,23 @@ class ImageViewerWindow(QMainWindow):
          edge_indices = filtered_df.iloc[:, 2].unique().tolist()
          self.clicked_values['edges'] = edge_indices
 
-         self.create_highlight_overlay(
-             node_indices=self.clicked_values['nodes'],
-             edge_indices=self.clicked_values['edges']
-         )
+         if self.channel_data[1].shape[0] * self.channel_data[1].shape[1] * self.channel_data[1].shape[2] > self.mini_thresh:
+             self.mini_overlay = True
+             self.create_mini_overlay(node_indices = self.clicked_values['nodes'], edge_indices = self.clicked_values['edges'])
+         else:
+             self.create_highlight_overlay(
+                 node_indices=self.clicked_values['nodes'],
+                 edge_indices=self.clicked_values['edges']
+             )
      else:
-         self.create_highlight_overlay(
-             node_indices=self.clicked_values['nodes']
-         )
+         if self.channel_data[0].shape[0] * self.channel_data[0].shape[1] * self.channel_data[0].shape[2] > self.mini_thresh:
+             self.mini_overlay = True
+             self.create_mini_overlay(node_indices = self.clicked_values['nodes'], edge_indices = self.clicked_values['edges'])
+         else:
+             self.create_highlight_overlay(
+                 node_indices=self.clicked_values['nodes'],
+                 edge_indices = self.clicked_values['edges']
+             )
 
  except Exception as e:
@@ -972,14 +1092,23 @@ class ImageViewerWindow(QMainWindow):
      if edges:
          edge_indices = filtered_df.iloc[:, 2].unique().tolist()
          self.clicked_values['edges'] = edge_indices
-         self.create_highlight_overlay(
-             node_indices=self.clicked_values['nodes'],
-             edge_indices=edge_indices
-         )
+         if self.channel_data[1].shape[0] * self.channel_data[1].shape[1] * self.channel_data[1].shape[2] > self.mini_thresh:
+             self.mini_overlay = True
+             self.create_mini_overlay(node_indices = self.clicked_values['nodes'], edge_indices = self.clicked_values['edges'])
+         else:
+             self.create_highlight_overlay(
+                 node_indices=self.clicked_values['nodes'],
+                 edge_indices=edge_indices
+             )
      else:
-         self.create_highlight_overlay(
-             node_indices = self.clicked_values['nodes']
-         )
+         if self.channel_data[0].shape[0] * self.channel_data[0].shape[1] * self.channel_data[0].shape[2] > self.mini_thresh:
+             self.mini_overlay = True
+             self.create_mini_overlay(node_indices = self.clicked_values['nodes'], edge_indices = self.clicked_values['edges'])
+         else:
+             self.create_highlight_overlay(
+                 node_indices = self.clicked_values['nodes'],
+                 edge_indices = self.clicked_values['edges']
+             )
 
  except Exception as e:
@@ -1043,15 +1172,24 @@ class ImageViewerWindow(QMainWindow):
      if edges:
          edge_indices = filtered_df.iloc[:, 2].unique().tolist()
          self.clicked_values['edges'] = edge_indices
-         self.create_highlight_overlay(
-             node_indices=nodes,
-             edge_indices=edge_indices
-         )
+         if self.channel_data[1].shape[0] * self.channel_data[1].shape[1] * self.channel_data[1].shape[2] > self.mini_thresh:
+             self.mini_overlay = True
+             self.create_mini_overlay(node_indices = nodes, edge_indices = edge_indices)
+         else:
+             self.create_highlight_overlay(
+                 node_indices=nodes,
+                 edge_indices=edge_indices
+             )
          self.clicked_values['nodes'] = nodes
      else:
-         self.create_highlight_overlay(
-             node_indices = nodes
-         )
+         if self.channel_data[0].shape[0] * self.channel_data[0].shape[1] * self.channel_data[0].shape[2] > self.mini_thresh:
+             self.mini_overlay = True
+             self.create_mini_overlay(node_indices = nodes, edge_indices = self.clicked_values['edges'])
+         else:
+             self.create_highlight_overlay(
+                 node_indices = nodes,
+                 edge_indices = self.clicked_values['edges']
+             )
          self.clicked_values['nodes'] = nodes
 
  except Exception as e:
@@ -1095,9 +1233,15 @@ class ImageViewerWindow(QMainWindow):
 
      print(f"Found {len(filtered_df)} direct connections between nodes of ID {sort} and their neighbors (of any ID)")
 
-     self.create_highlight_overlay(
-         node_indices= nodes
-     )
+     if self.channel_data[0].shape[0] * self.channel_data[0].shape[1] * self.channel_data[0].shape[2] > self.mini_thresh:
+         self.mini_overlay = True
+         self.create_mini_overlay(node_indices = nodes, edge_indices = self.clicked_values['edges'])
+     else:
+         self.create_highlight_overlay(
+             node_indices = nodes,
+             edge_indices = self.clicked_values['edges']
+         )
+     self.clicked_values['nodes'] = nodes
 
  except Exception as e:
      print(f"Error showing identities: {e}")
@@ -1393,8 +1537,6 @@ class ImageViewerWindow(QMainWindow):
 
          pairs = list(combinations(nodes, 2))
 
-         print(pairs)
- 
 
          for i in range(len(my_network.network_lists[0]) - 1, -1, -1):
              print((my_network.network_lists[0][i], my_network.network_lists[1][i]))
@@ -1891,7 +2033,11 @@ class ImageViewerWindow(QMainWindow):
          self.clicked_values['nodes'].extend(selected_values)
          # Remove duplicates while preserving order
          self.clicked_values['nodes'] = list(dict.fromkeys(self.clicked_values['nodes']))
-         self.create_highlight_overlay(node_indices=self.clicked_values['nodes'])
+         if self.channel_data[0].shape[0] * self.channel_data[0].shape[1] * self.channel_data[0].shape[2] > self.mini_thresh:
+             self.mini_overlay = True
+             self.create_mini_overlay(node_indices = self.clicked_values['nodes'], edge_indices = self.clicked_values['edges'])
+         else:
+             self.create_highlight_overlay(node_indices=self.clicked_values['nodes'])
 
          # Try to highlight the last selected value in tables
          if self.clicked_values['nodes']:
@@ -1903,7 +2049,11 @@ class ImageViewerWindow(QMainWindow):
          self.clicked_values['edges'].extend(selected_values)
          # Remove duplicates while preserving order
          self.clicked_values['edges'] = list(dict.fromkeys(self.clicked_values['edges']))
-         self.create_highlight_overlay(edge_indices=self.clicked_values['edges'])
+         if self.channel_data[1].shape[0] * self.channel_data[1].shape[1] * self.channel_data[1].shape[2] > self.mini_thresh:
+             self.mini_overlay = True
+             self.create_mini_overlay(node_indices = self.clicked_values['nodes'], edge_indices = self.clicked_values['edges'])
+         else:
+             self.create_highlight_overlay(edge_indices=self.clicked_values['edges'])
 
          # Try to highlight the last selected value in tables
          if self.clicked_values['edges']:
@@ -2144,9 +2294,17 @@ class ImageViewerWindow(QMainWindow):
 
      # Highlight the clicked element in the image using the stored lists
      if self.active_channel == 0 and (starting_vals['nodes']) != (self.clicked_values['nodes']):
-         self.create_highlight_overlay(node_indices=self.clicked_values['nodes'], edge_indices=self.clicked_values['edges'])
+         if self.channel_data[0].shape[0] * self.channel_data[0].shape[1] * self.channel_data[0].shape[2] > self.mini_thresh:
+             self.mini_overlay = True
+             self.create_mini_overlay(node_indices = self.clicked_values['nodes'], edge_indices = self.clicked_values['edges'])
+         else:
+             self.create_highlight_overlay(node_indices=self.clicked_values['nodes'], edge_indices=self.clicked_values['edges'])
      elif self.active_channel == 1 and starting_vals['edges'] != self.clicked_values['edges']:
-         self.create_highlight_overlay(node_indices=self.clicked_values['nodes'], edge_indices=self.clicked_values['edges'])
+         if self.channel_data[1].shape[0] * self.channel_data[1].shape[1] * self.channel_data[1].shape[2] > self.mini_thresh:
+             self.mini_overlay = True
+             self.create_mini_overlay(node_indices = self.clicked_values['nodes'], edge_indices = self.clicked_values['edges'])
+         else:
+             self.create_highlight_overlay(node_indices=self.clicked_values['nodes'], edge_indices=self.clicked_values['edges'])
 
  except IndexError:
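The same size check against self.mini_thresh is repeated before every highlight call in the hunks above. The dispatch can be read as the following condensation (a hypothetical helper, not present in the package; shape[0] * shape[1] * shape[2] is simply the voxel count, i.e. array.size for a 3D array):

```python
def select_overlay_mode(shape, mini_thresh=500 * 500 * 500):
    """Full overlay for small stacks, per-slice 'mini' overlay above
    the 125-million-voxel threshold used by the GUI."""
    z, y, x = shape[:3]
    return "mini" if z * y * x > mini_thresh else "full"
```

Factoring the check out this way would also give a single place to override the threshold, instead of editing self.mini_thresh as the changelog suggests.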
@@ -2298,6 +2456,8 @@ class ImageViewerWindow(QMainWindow):
          netoverlay_action.triggered.connect(self.show_netoverlay_dialog)
          idoverlay_action = overlay_menu.addAction("Create ID Overlay")
          idoverlay_action.triggered.connect(self.show_idoverlay_dialog)
+         coloroverlay_action = overlay_menu.addAction("Color Nodes (or Edges)")
+         coloroverlay_action.triggered.connect(self.show_coloroverlay_dialog)
          searchoverlay_action = overlay_menu.addAction("Show Search Regions")
          searchoverlay_action.triggered.connect(self.show_search_dialog)
          shuffle_action = overlay_menu.addAction("Shuffle")
@@ -2578,6 +2738,11 @@ class ImageViewerWindow(QMainWindow):
          dialog = IdOverlayDialog(self)
          dialog.exec()
 
+     def show_coloroverlay_dialog(self):
+         """Show the color overlay dialog"""
+         dialog = ColorOverlayDialog(self)
+         dialog.exec()
+ 
      def show_search_dialog(self):
          """Show the search dialog"""
          dialog = SearchOverlayDialog(self)
@@ -3001,14 +3166,20 @@ class ImageViewerWindow(QMainWindow):
          else:
              self.channel_data[channel_index] = channel_data
 
-         if len(self.channel_data[channel_index].shape) == 3:  # potentially 2D RGB
-             if self.channel_data[channel_index].shape[-1] in (3, 4):  # last dim is 3 or 4
-                 if self.confirm_rgb_dialog():
-                     # User confirmed it's 2D RGB, expand to 4D
-                     self.channel_data[channel_index] = np.expand_dims(self.channel_data[channel_index], axis=0)
- 
-         if len(self.channel_data[channel_index].shape) == 4 and (channel_index == 0 or channel_index == 1):
-             self.channel_data[channel_index] = self.reduce_rgb_dimension(self.channel_data[channel_index])
+         try:
+             if len(self.channel_data[channel_index].shape) == 3:  # potentially 2D RGB
+                 if self.channel_data[channel_index].shape[-1] in (3, 4):  # last dim is 3 or 4
+                     if self.confirm_rgb_dialog():
+                         # User confirmed it's 2D RGB, expand to 4D
+                         self.channel_data[channel_index] = np.expand_dims(self.channel_data[channel_index], axis=0)
+         except:
+             pass
+ 
+         try:
+             if len(self.channel_data[channel_index].shape) == 4 and (channel_index == 0 or channel_index == 1):
+                 self.channel_data[channel_index] = self.reduce_rgb_dimension(self.channel_data[channel_index])
+         except:
+             pass
 
          reset_resize = False
@@ -3050,27 +3221,30 @@ class ImageViewerWindow(QMainWindow):
          self.active_channel_combo.setEnabled(True)
 
          # Update slider range if this is the first channel loaded
-         if len(self.channel_data[channel_index].shape) == 3 or len(self.channel_data[channel_index].shape) == 4:
-             if not self.slice_slider.isEnabled():
-                 self.slice_slider.setEnabled(True)
-                 self.slice_slider.setMinimum(0)
-                 self.slice_slider.setMaximum(self.channel_data[channel_index].shape[0] - 1)
-                 if self.slice_slider.value() < self.channel_data[channel_index].shape[0] - 1:
-                     self.current_slice = self.slice_slider.value()
+         try:
+             if len(self.channel_data[channel_index].shape) == 3 or len(self.channel_data[channel_index].shape) == 4:
+                 if not self.slice_slider.isEnabled():
+                     self.slice_slider.setEnabled(True)
+                     self.slice_slider.setMinimum(0)
+                     self.slice_slider.setMaximum(self.channel_data[channel_index].shape[0] - 1)
+                     if self.slice_slider.value() < self.channel_data[channel_index].shape[0] - 1:
+                         self.current_slice = self.slice_slider.value()
+                     else:
+                         self.slice_slider.setValue(0)
+                         self.current_slice = 0
                  else:
-                     self.slice_slider.setValue(0)
-                     self.current_slice = 0
+                     self.slice_slider.setEnabled(True)
+                     self.slice_slider.setMinimum(0)
+                     self.slice_slider.setMaximum(self.channel_data[channel_index].shape[0] - 1)
+                     if self.slice_slider.value() < self.channel_data[channel_index].shape[0] - 1:
+                         self.current_slice = self.slice_slider.value()
+                     else:
+                         self.current_slice = 0
+                         self.slice_slider.setValue(0)
              else:
-                 self.slice_slider.setEnabled(True)
-                 self.slice_slider.setMinimum(0)
-                 self.slice_slider.setMaximum(self.channel_data[channel_index].shape[0] - 1)
-                 if self.slice_slider.value() < self.channel_data[channel_index].shape[0] - 1:
-                     self.current_slice = self.slice_slider.value()
-                 else:
-                     self.current_slice = 0
-                     self.slice_slider.setValue(0)
-         else:
-             self.slice_slider.setEnabled(False)
+                 self.slice_slider.setEnabled(False)
+         except:
+             pass
 
 
@@ -3083,13 +3257,16 @@ class ImageViewerWindow(QMainWindow):
  self.min_max[channel_index][1] = np.max(self.channel_data[channel_index])
  self.volume_dict[channel_index] = None #reset volumes
 
- if assign_shape: #keep original shape tracked to undo resampling.
- if self.original_shape is None:
- self.original_shape = self.channel_data[channel_index].shape
- elif self.original_shape[0] < self.channel_data[channel_index].shape[0] or self.original_shape[1] < self.channel_data[channel_index].shape[1] or self.original_shape[2] < self.channel_data[channel_index].shape[2]:
- self.original_shape = self.channel_data[channel_index].shape
- if len(self.original_shape) == 4:
- self.original_shape = (self.original_shape[0], self.original_shape[1], self.original_shape[2])
+ try:
+ if assign_shape: #keep original shape tracked to undo resampling.
+ if self.original_shape is None:
+ self.original_shape = self.channel_data[channel_index].shape
+ elif self.original_shape[0] < self.channel_data[channel_index].shape[0] or self.original_shape[1] < self.channel_data[channel_index].shape[1] or self.original_shape[2] < self.channel_data[channel_index].shape[2]:
+ self.original_shape = self.channel_data[channel_index].shape
+ if len(self.original_shape) == 4:
+ self.original_shape = (self.original_shape[0], self.original_shape[1], self.original_shape[2])
+ except:
+ pass
 
  self.update_display(reset_resize = reset_resize)
 
@@ -3262,6 +3439,8 @@ class ImageViewerWindow(QMainWindow):
  elif ch_index == 3:
  my_network.save_id_overlay(filename=filename)
  elif ch_index == 4:
+ if self.mini_overlay == True:
+ self.create_highlight_overlay(node_indices = self.clicked_values['nodes'], edge_indices = self.clicked_values['edges'])
  if filename == None:
  filename = "Highlighted_Element.tif"
  tifffile.imwrite(f"(unknown)", self.highlight_overlay)
@@ -3306,6 +3485,8 @@ class ImageViewerWindow(QMainWindow):
  if self.pending_slice is not None:
  slice_value, view_settings = self.pending_slice
  self.current_slice = slice_value
+ if self.mini_overlay == True: #If we are rendering the highlight overlay for selected values one at a time.
+ self.create_mini_overlay(node_indices = self.clicked_values['nodes'], edge_indices = self.clicked_values['edges'])
  self.update_display(preserve_zoom=view_settings)
  self.pending_slice = None
 
@@ -3968,7 +4149,13 @@ class CustomTableView(QTableView):
  # Navigate to the Z-slice
  self.parent.slice_slider.setValue(int(centroid[0]))
  print(f"Found node {value} at Z-slice {centroid[0]}")
- self.parent.create_highlight_overlay(node_indices=[value])
+ if self.parent.channel_data[0].shape[0] * self.parent.channel_data[0].shape[1] * self.parent.channel_data[0].shape[2] > self.parent.mini_thresh:
+ self.parent.mini_overlay = True
+ self.parent.create_mini_overlay(node_indices = [value])
+ else:
+ self.parent.create_highlight_overlay(node_indices=[value])
+ self.parent.clicked_values['nodes'] = []
+ self.parent.clicked_values['edges'] = []
  self.parent.clicked_values['nodes'].append(value)
 
  # Highlight the value in both tables if it exists
@@ -3992,7 +4179,13 @@ class CustomTableView(QTableView):
  # Navigate to the Z-slice
  self.parent.slice_slider.setValue(int(centroid[0]))
  print(f"Found edge {value} at Z-slice {centroid[0]}")
- self.parent.create_highlight_overlay(edge_indices=[value])
+ if self.parent.channel_data[1].shape[0] * self.parent.channel_data[1].shape[1] * self.parent.channel_data[1].shape[2] > self.parent.mini_thresh:
+ self.parent.mini_overlay = True
+ self.parent.create_mini_overlay(edge_indices = [value])
+ else:
+ self.parent.create_highlight_overlay(edge_indices=[value])
+ self.parent.clicked_values['nodes'] = []
+ self.parent.clicked_values['edges'] = []
  self.parent.clicked_values['edges'].append(value)
 
  # Highlight the value in both tables if it exists
@@ -4020,12 +4213,24 @@ class CustomTableView(QTableView):
  self.parent.slice_slider.setValue(int(centroid1[0]))
  print(f"Found node pair {value[0]} and {value[1]} at Z-slices {centroid1[0]} and {centroid2[0]}, respectively")
  try:
- self.parent.create_highlight_overlay(node_indices=[int(value[0]), int(value[1])], edge_indices = int(value[2]))
+ if self.parent.channel_data[0].shape[0] * self.parent.channel_data[0].shape[1] * self.parent.channel_data[0].shape[2] > self.parent.mini_thresh:
+ self.parent.mini_overlay = True
+ self.parent.create_mini_overlay(node_indices=[int(value[0]), int(value[1])], edge_indices = int(value[2]))
+ else:
+ self.parent.create_highlight_overlay(node_indices=[int(value[0]), int(value[1])], edge_indices = int(value[2]))
+ self.parent.clicked_values['nodes'] = []
+ self.parent.clicked_values['edges'] = []
  self.parent.clicked_values['edges'].append(value[2])
  self.parent.clicked_values['nodes'].append(value[0])
  self.parent.clicked_values['nodes'].append(value[1])
  except:
- self.parent.create_highlight_overlay(node_indices=[int(value[0]), int(value[1])])
+ if self.parent.channel_data[0].shape[0] * self.parent.channel_data[0].shape[1] * self.parent.channel_data[0].shape[2] > self.parent.mini_thresh:
+ self.parent.mini_overlay = True
+ self.parent.create_mini_overlay(node_indices=[int(value[0]), int(value[1])])
+ else:
+ self.parent.create_highlight_overlay(node_indices=[int(value[0]), int(value[1])])
+ self.parent.clicked_values['nodes'] = []
+ self.parent.clicked_values['edges'] = []
  self.parent.clicked_values['nodes'].append(value[0])
  self.parent.clicked_values['nodes'].append(value[1])
 
@@ -4571,6 +4776,8 @@ class Show3dDialog(QDialog):
  arrays_4d.append(channel)
 
  if self.parent().highlight_overlay is not None:
+ if self.parent().mini_overlay == True:
+ self.parent().create_highlight_overlay(node_indices = self.parent().clicked_values['nodes'], edge_indices = self.parent().clicked_values['edges'])
  arrays_3d.append(self.parent().highlight_overlay)
  colors.append(color_template[4])
 
@@ -4705,6 +4912,45 @@ class IdOverlayDialog(QDialog):
 
  self.accept()
 
+ class ColorOverlayDialog(QDialog):
+
+ def __init__(self, parent=None):
+
+ super().__init__(parent)
+ self.setWindowTitle("Generate Node (or Edge) -> Color Overlay?")
+ self.setModal(True)
+
+ layout = QFormLayout(self)
+
+ self.down_factor = QLineEdit("")
+ layout.addRow("down_factor (for speeding up overlay generation - optional):", self.down_factor)
+
+ # Add Run button
+ run_button = QPushButton("Generate (Will go to Overlay 2)")
+ run_button.clicked.connect(self.coloroverlay)
+ layout.addWidget(run_button)
+
+ def coloroverlay(self):
+
+ down_factor = float(self.down_factor.text()) if self.down_factor.text().strip() else None
+
+ if self.parent().active_channel == 0:
+ mode = 0
+ self.sort = 'Node'
+ else:
+ mode = 1
+ self.sort = 'Edge'
+
+
+ result, legend = my_network.node_to_color(down_factor = down_factor, mode = mode)
+
+ self.parent().format_for_upperright_table(legend, f'{self.sort} Id', f'Encoding Val: {self.sort}', 'Legend')
+
+
+ self.parent().load_channel(3, channel_data = result, data = True)
+
+ self.accept()
+
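The `node_to_color` call above maps each label to a random unique RGB color. A minimal standalone sketch of that idea, assuming a labeled integer array (`labels_to_random_rgb` is a hypothetical helper, not NetTracer3D's actual implementation):

```python
import numpy as np

def labels_to_random_rgb(labels, seed=0):
    """Map each nonzero label to a random (almost surely distinct) RGB color.

    Returns the RGB overlay plus a legend dict {label: (r, g, b)}.
    """
    rng = np.random.default_rng(seed)
    ids = np.unique(labels)
    ids = ids[ids != 0]  # background (0) stays black
    # Lookup table indexed by label value; row 0 stays (0, 0, 0)
    lut = np.zeros((int(labels.max()) + 1, 3), dtype=np.uint8)
    lut[ids] = rng.integers(1, 256, size=(len(ids), 3), dtype=np.uint8)
    legend = {int(i): tuple(int(c) for c in lut[i]) for i in ids}
    return lut[labels], legend  # fancy indexing broadcasts the LUT over the array

labels = np.array([[0, 1, 1], [2, 2, 0]])
rgb, legend = labels_to_random_rgb(labels)
# rgb.shape == (2, 3, 3); background pixels are (0, 0, 0)
```

A lookup table keeps this a single vectorized indexing pass, which is why the dialog's optional `down_factor` (downsampling before coloring) is mainly useful for very large label arrays.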
 
  class ShuffleDialog(QDialog):
 
@@ -4745,11 +4991,15 @@ class ShuffleDialog(QDialog):
  accepted_target = self.target_selector.currentIndex()
 
  if accepted_mode == 4:
+ if self.parent().mini_overlay == True:
+ self.parent().create_highlight_overlay(node_indices = self.parent().clicked_values['nodes'], edge_indices = self.parent().clicked_values['edges'])
  active_data = self.parent().highlight_overlay
  else:
  active_data = self.parent().channel_data[accepted_mode]
 
  if accepted_target == 4:
+ if self.parent().mini_overlay == True:
+ self.parent().create_highlight_overlay(node_indices = self.parent().clicked_values['nodes'], edge_indices = self.parent().clicked_values['edges'])
  target_data = self.parent().highlight_overlay
  else:
  target_data = self.parent().channel_data[accepted_target]
@@ -6851,6 +7101,8 @@ class MaskDialog(QDialog):
  output_target = self.output_selector.currentIndex()
 
  if accepted_mode == 4:
+ if self.parent().mini_overlay == True:
+ self.parent().create_highlight_overlay(node_indices = self.parent().clicked_values['nodes'], edge_indices = self.parent().clicked_values['edges'])
  active_data = self.parent().highlight_overlay
  else:
  active_data = self.parent().channel_data[accepted_mode]
@@ -401,7 +401,7 @@ class InteractiveSegmenter:
 
  return foreground, background
 
- def segment_volume(self, chunk_size=64, gpu=False):
+ def segment_volume(self, chunk_size=None, gpu=False):
  """Segment volume using parallel processing of chunks with vectorized chunk creation"""
  #Change the above chunk size to None to have it auto-compute largest chunks (not sure which is faster, 64 seems reasonable in test cases)
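The new `chunk_size=None` default makes `segment_volume` auto-compute the chunk size. One plausible way to pick "the largest chunks that can be divided across all CPU cores" is sketched below; `auto_chunk_edge` is a hypothetical helper, and the real computation in `segmenter.py` may differ:

```python
import multiprocessing as mp

def auto_chunk_edge(shape, n_workers=None):
    """Pick the largest cubic chunk edge such that the volume still splits
    into at least one chunk per worker (hypothetical helper)."""
    if n_workers is None:
        n_workers = mp.cpu_count()
    z, y, x = shape
    edge = min(z, y, x)  # start from the biggest cube that fits
    while edge > 1:
        # ceil-division chunk counts along each axis
        n_chunks = -(-z // edge) * -(-y // edge) * -(-x // edge)
        if n_chunks >= n_workers:
            return edge
        edge //= 2  # halve until every core gets work
    return 1
```

For example, a 128^3 volume on 8 workers yields an edge of 64 (eight 64^3 chunks), whereas the old behavior would have produced eight 64^3 chunks here too but far more on larger volumes, each carrying per-chunk Python overhead.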
 
@@ -1,6 +1,6 @@
  Metadata-Version: 2.2
  Name: nettracer3d
- Version: 0.5.3
+ Version: 0.5.4
  Summary: Scripts for intializing and analyzing networks from segmentations of three dimensional images.
  Author-email: Liam McLaughlin <mclaughlinliam99@gmail.com>
  Project-URL: User_Tutorial, https://www.youtube.com/watch?v=cRatn5VTWDY
@@ -44,9 +44,10 @@ NetTracer3D is free to use/fork for academic/nonprofit use so long as citation i
 
  NetTracer3D was developed by Liam McLaughlin while working under Dr. Sanjay Jain at Washington University School of Medicine.
 
- -- Version 0.5.3 updates --
+ -- Version 0.5.4 updates --
 
- 1. Improved calculate volumes method. Previous method used np.argwhere() to count voxels of labeled objects in parallel which was quite strenuous in large arrays with many objects. New method uses np.bincount() which uses optimized numpy C libraries to do the same.
- 2. scipy.ndimage.find_objects() method was replaced as the method to find bounding boxes for objects when searching for object neighborhoods for the morphological proximity network and the edge < > node interaction quantification. This new version should be substantially faster in big arrays with many labels. (Depending on how well this improves performance, I may reimplement the secondary network search algorithm, as a side-option, which uses the same parallel-search within subarray strategies, as opposed to the primary network search algorithm that uses distance transforms).
- 3. Image viewer window can now load in .nii format images, as well as .jpeg, .jpg, and .png. The nibabel library was added to the dependencies to enable .nii loading, although this is currently all it is used for (and the gui will still run without nibabel).
- 4. Fixed bug regarding deleting edge objects.
+ 1. Added a new function to the GUI under image -> overlays -> color nodes/edges. It generates an RGB array from the node/edge labels in which each node/edge (depending on which array is selected) is randomly assigned a unique RGB color in an overlay channel. This can be used, for example, to color-code labeled branches for easy identification of which branch is which.
+
+ 2. Improved the general functionality of the highlight overlay (used for selecting nodes/edges). Previously, selecting a node/edge had the program create an equally sized array as an overlay, find all objects corresponding to the selected ones, fill those into the new highlight overlay, and then display that image. This was understandably quite slow in big arrays, since the system wasted time searching the entire array every time something was selected. The new version retains this behavior for arrays below 125 million voxels, where the search time is manageable. For larger arrays, it instead draws the highlight for the selected objects into the current slice only, rendering a new slice whenever the user scrolls through the stack (although the entire highlight overlay is still initialized as a placeholder). Functions that require the entire highlight overlay (such as masking) are correspondingly updated to draw the full overlay before executing, since up until that point the system has been drawing slices one at a time. This will likely remain the behavior moving forward, although it can be disabled by opening nettracer_gui.py and setting self.mini_thresh to some comically large value. In my testing, the new highlight overlay works the same as before but faster, although it is possible a bug slipped through, which I will fix if informed about it (or if I find it myself).
+
+ 3. For the machine learning segmenter, the chunking strategy was changed: the system now attempts to segment the image by splitting the array into the largest possible chunks that can be divided across all CPU cores. Previously, the system split the array into 64^3-voxel chunks and passed those to the CPU cores until everything was processed. I am not sure which version is more efficient or faster, so this is somewhat of a test; in theory, the new behavior could be faster because it asks Python to interpret fewer chunks.
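The size cutoff described in update 2 (full highlight overlay below 125 million voxels, per-slice rendering above) reduces to a simple dispatch on voxel count. A minimal sketch, assuming a 125-million-voxel threshold; `render_full` and `render_slice` are hypothetical stand-ins for the GUI's overlay routines:

```python
import numpy as np

MINI_THRESH = 125_000_000  # ~500^3 voxels, per the note above (assumed value)

def highlight_selected(volume_shape, render_full, render_slice, thresh=MINI_THRESH):
    """Choose full-volume vs. per-slice highlight rendering by voxel count."""
    n_voxels = int(np.prod(volume_shape))
    if n_voxels > thresh:
        return render_slice()   # large volume: draw only the current slice, redraw on scroll
    return render_full()        # small volume: search and fill the whole overlay at once

# A 1000^3 volume (1e9 voxels) dispatches to the cheap per-slice path:
mode = highlight_selected((1000, 1000, 1000), lambda: "full", lambda: "slice")
# → "slice"
```

Operations that need the complete overlay (masking, saving, 3D display) would then call the full renderer on demand, which matches the behavior described above.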
@@ -1,16 +0,0 @@
- NetTracer3D is a python package developed for both 2D and 3D analysis of microscopic images in the .tif file format. It supports generation of 3D networks showing the relationships between objects (or nodes) in three dimensional space, either based on their own proximity or connectivity via connecting objects such as nerves or blood vessels. In addition to these functionalities are several advanced 3D data processing algorithms, such as labeling of branched structures or abstraction of branched structures into networks. Note that nettracer3d uses segmented data, which can be segmented from other softwares such as ImageJ and imported into NetTracer3D, although it does offer its own segmentation via intensity and volumetric thresholding, or random forest machine learning segmentation. NetTracer3D currently has a fully functional GUI. To use the GUI, after installing the nettracer3d package via pip, enter the command 'nettracer3d' in your command prompt:
-
-
- This gui is built from the PyQt6 package and therefore may not function on dockers or virtual envs that are unable to support PyQt6 displays. More advanced documentation is coming down the line, but for now please see: https://www.youtube.com/watch?v=cRatn5VTWDY
- for a video tutorial on using the GUI.
-
- NetTracer3D is free to use/fork for academic/nonprofit use so long as citation is provided, and is available for commercial use at a fee (see license file for information).
-
- NetTracer3D was developed by Liam McLaughlin while working under Dr. Sanjay Jain at Washington University School of Medicine.
-
- -- Version 0.5.3 updates --
-
- 1. Improved calculate volumes method. Previous method used np.argwhere() to count voxels of labeled objects in parallel which was quite strenuous in large arrays with many objects. New method uses np.bincount() which uses optimized numpy C libraries to do the same.
- 2. scipy.ndimage.find_objects() method was replaced as the method to find bounding boxes for objects when searching for object neighborhoods for the morphological proximity network and the edge < > node interaction quantification. This new version should be substantially faster in big arrays with many labels. (Depending on how well this improves performance, I may reimplement the secondary network search algorithm, as a side-option, which uses the same parallel-search within subarray strategies, as opposed to the primary network search algorithm that uses distance transforms).
- 3. Image viewer window can now load in .nii format images, as well as .jpeg, .jpg, and .png. The nibabel library was added to the dependencies to enable .nii loading, although this is currently all it is used for (and the gui will still run without nibabel).
- 4. Fixed bug regarding deleting edge objects.