nettracer3d 0.5.2__tar.gz → 0.5.4__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (27)
  1. {nettracer3d-0.5.2/src/nettracer3d.egg-info → nettracer3d-0.5.4}/PKG-INFO +10 -1
  2. nettracer3d-0.5.4/README.md +17 -0
  3. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/pyproject.toml +3 -2
  4. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d/community_extractor.py +42 -0
  5. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d/morphology.py +63 -109
  6. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d/nettracer.py +70 -49
  7. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d/nettracer_gui.py +378 -81
  8. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d/proximity.py +45 -47
  9. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d/segmenter.py +1 -1
  10. {nettracer3d-0.5.2 → nettracer3d-0.5.4/src/nettracer3d.egg-info}/PKG-INFO +10 -1
  11. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d.egg-info/requires.txt +1 -0
  12. nettracer3d-0.5.2/README.md +0 -9
  13. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/LICENSE +0 -0
  14. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/setup.cfg +0 -0
  15. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d/__init__.py +0 -0
  16. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d/hub_getter.py +0 -0
  17. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d/modularity.py +0 -0
  18. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d/network_analysis.py +0 -0
  19. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d/network_draw.py +0 -0
  20. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d/node_draw.py +0 -0
  21. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d/run.py +0 -0
  22. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d/simple_network.py +0 -0
  23. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d/smart_dilate.py +0 -0
  24. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d.egg-info/SOURCES.txt +0 -0
  25. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d.egg-info/dependency_links.txt +0 -0
  26. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d.egg-info/entry_points.txt +0 -0
  27. {nettracer3d-0.5.2 → nettracer3d-0.5.4}/src/nettracer3d.egg-info/top_level.txt +0 -0
@@ -1,6 +1,6 @@
 Metadata-Version: 2.2
 Name: nettracer3d
-Version: 0.5.2
+Version: 0.5.4
 Summary: Scripts for intializing and analyzing networks from segmentations of three dimensional images.
 Author-email: Liam McLaughlin <mclaughlinliam99@gmail.com>
 Project-URL: User_Tutorial, https://www.youtube.com/watch?v=cRatn5VTWDY
@@ -26,6 +26,7 @@ Requires-Dist: tifffile==2023.7.18
 Requires-Dist: qtrangeslider==0.1.5
 Requires-Dist: PyQt6==6.8.0
 Requires-Dist: scikit-learn==1.6.1
+Requires-Dist: nibabel==5.2.0
 Provides-Extra: cuda11
 Requires-Dist: cupy-cuda11x; extra == "cuda11"
 Provides-Extra: cuda12
@@ -42,3 +43,11 @@ for a video tutorial on using the GUI.
 NetTracer3D is free to use/fork for academic/nonprofit use so long as citation is provided, and is available for commercial use at a fee (see license file for information).
 
 NetTracer3D was developed by Liam McLaughlin while working under Dr. Sanjay Jain at Washington University School of Medicine.
+
+-- Version 0.5.4 updates --
+
+1. Added a new function to the GUI in image -> overlays -> color nodes/edges. It generates an RGB array corresponding to the node/edge labels, where each node/edge (depending on which array is selected) is randomly assigned a unique RGB color in an overlay channel. This can be used, for example, to color-code labeled branches for easy identification of which branch is which.
+
+2. Improved the general highlight-overlay functionality (for selecting nodes/edges). Previously, selecting a node/edge made the program create an overlay array of equal size, search for all objects corresponding to the selection, fill those into the new highlight overlay, and then display that image. This was understandably quite slow for big arrays, since the system wasted a lot of time searching the entire array every time something was selected. The new version retains this behavior for arrays below 125 million voxels, where the search time is manageable. For larger arrays, it instead draws the highlight for the selected objects into the current slice only, rendering a new slice whenever the user scrolls through the stack (although the entire highlight overlay is still initialized as a placeholder). Functions that require the entire highlight overlay (such as masking) are correspondingly updated to draw the full overlay before executing (when the system has until then been drawing one slice at a time). This will likely remain the behavior moving forward, although to disable it one can open nettracer_gui.py and set self.mini_thresh to some comically large value. In my testing the new highlight overlay works effectively the same but faster, although it is possible a bug slipped through, which I will fix if informed about it (or if I find it myself).
+
+3. For the machine learning segmenter, changed the system to segment the image by chunking the array into the largest possible chunks that can be divided across all CPU cores. Previously the system split the array into 64^3-voxel chunks and passed those to the CPU cores until everything was processed. I am not sure which version is more efficient/faster, so this is somewhat of a test. In theory the new behavior could be faster because it asks the Python interpreter to handle fewer chunks.
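The per-core chunking described in item 3 can be sketched as follows. This is a minimal illustration under assumptions: `segment_chunk` is a stand-in for the real per-chunk random forest prediction, and the thread-based executor and helper names are hypothetical, not the package's actual segmenter code:

```python
import multiprocessing as mp
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def segment_chunk(chunk):
    # Stand-in for the real per-chunk work (e.g. random forest prediction).
    return chunk > chunk.mean()

def segment_in_chunks(array):
    """Split along z into (up to) one chunk per CPU core and process in parallel."""
    n_chunks = min(mp.cpu_count(), array.shape[0])  # avoid empty chunks on small stacks
    chunks = np.array_split(array, n_chunks, axis=0)
    with ThreadPoolExecutor(max_workers=n_chunks) as ex:
        results = list(ex.map(segment_chunk, chunks))
    # ex.map preserves chunk order, so concatenation restores the original layout.
    return np.concatenate(results, axis=0)
```

The trade-off versus fixed 64^3 blocks is fewer, larger tasks: less scheduling and interpreter overhead, at the cost of coarser load balancing.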
@@ -0,0 +1,17 @@
+NetTracer3D is a Python package developed for both 2D and 3D analysis of microscopic images in the .tif file format. It supports generation of 3D networks showing the relationships between objects (or nodes) in three-dimensional space, based either on their own proximity or on connectivity via connecting objects such as nerves or blood vessels. In addition to these functionalities are several advanced 3D data processing algorithms, such as labeling of branched structures or abstraction of branched structures into networks. Note that NetTracer3D uses segmented data, which can be segmented in other software such as ImageJ and imported into NetTracer3D, although it does offer its own segmentation via intensity and volumetric thresholding, or random forest machine learning segmentation. NetTracer3D currently has a fully functional GUI. To use the GUI, after installing the nettracer3d package via pip, enter the command 'nettracer3d' in your command prompt:
+
+
+This GUI is built on the PyQt6 package and therefore may not function in dockers or virtual envs that are unable to support PyQt6 displays. More advanced documentation is coming down the line, but for now please see: https://www.youtube.com/watch?v=cRatn5VTWDY
+for a video tutorial on using the GUI.
+
+NetTracer3D is free to use/fork for academic/nonprofit use so long as citation is provided, and is available for commercial use at a fee (see license file for information).
+
+NetTracer3D was developed by Liam McLaughlin while working under Dr. Sanjay Jain at Washington University School of Medicine.
+
+-- Version 0.5.4 updates --
+
+1. Added a new function to the GUI in image -> overlays -> color nodes/edges. It generates an RGB array corresponding to the node/edge labels, where each node/edge (depending on which array is selected) is randomly assigned a unique RGB color in an overlay channel. This can be used, for example, to color-code labeled branches for easy identification of which branch is which.
+
+2. Improved the general highlight-overlay functionality (for selecting nodes/edges). Previously, selecting a node/edge made the program create an overlay array of equal size, search for all objects corresponding to the selection, fill those into the new highlight overlay, and then display that image. This was understandably quite slow for big arrays, since the system wasted a lot of time searching the entire array every time something was selected. The new version retains this behavior for arrays below 125 million voxels, where the search time is manageable. For larger arrays, it instead draws the highlight for the selected objects into the current slice only, rendering a new slice whenever the user scrolls through the stack (although the entire highlight overlay is still initialized as a placeholder). Functions that require the entire highlight overlay (such as masking) are correspondingly updated to draw the full overlay before executing (when the system has until then been drawing one slice at a time). This will likely remain the behavior moving forward, although to disable it one can open nettracer_gui.py and set self.mini_thresh to some comically large value. In my testing the new highlight overlay works effectively the same but faster, although it is possible a bug slipped through, which I will fix if informed about it (or if I find it myself).
+
+3. For the machine learning segmenter, changed the system to segment the image by chunking the array into the largest possible chunks that can be divided across all CPU cores. Previously the system split the array into 64^3-voxel chunks and passed those to the CPU cores until everything was processed. I am not sure which version is more efficient/faster, so this is somewhat of a test. In theory the new behavior could be faster because it asks the Python interpreter to handle fewer chunks.
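The random label coloring of item 1 can be illustrated with a minimal lookup-table sketch. `color_labels` is a hypothetical helper for illustration only; the package's actual implementation lives in community_extractor and produces an RGBA overlay:

```python
import numpy as np

def color_labels(labeled, seed=None):
    """Map each nonzero label in a labeled array to a random RGB color.

    Returns an RGB overlay of shape labeled.shape + (3,); label 0 stays black.
    """
    rng = np.random.default_rng(seed)
    labels = np.unique(labeled)
    labels = labels[labels != 0]  # keep 0 as the (uncolored) background
    # One random color per label, stored in a lookup table indexed by label value.
    lut = np.zeros((int(labeled.max()) + 1, 3), dtype=np.uint8)
    lut[labels] = rng.integers(50, 256, size=(len(labels), 3))  # avoid near-black
    return lut[labeled]  # fancy indexing paints every voxel in one vectorized pass
```

The lookup-table indexing colors the whole array in one pass, instead of building a boolean mask per label.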
@@ -1,6 +1,6 @@
 [project]
 name = "nettracer3d"
-version = "0.5.2"
+version = "0.5.4"
 authors = [
   { name="Liam McLaughlin", email="mclaughlinliam99@gmail.com" },
 ]
@@ -20,7 +20,8 @@ dependencies = [
     "tifffile == 2023.7.18",
     "qtrangeslider == 0.1.5",
     "PyQt6 == 6.8.0",
-    "scikit-learn == 1.6.1"
+    "scikit-learn == 1.6.1",
+    "nibabel == 5.2.0"
 ]
 
 readme = "README.md"
@@ -9,6 +9,7 @@ from scipy import ndimage
 from scipy.ndimage import zoom
 from networkx.algorithms import community
 from community import community_louvain
+import random
 from . import node_draw
 
 
@@ -781,6 +782,47 @@ def generate_distinct_colors(n_colors: int) -> List[Tuple[int, int, int]]:
         colors.append(rgb)
     return colors
 
+def assign_node_colors(node_list: List[int], labeled_array: np.ndarray) -> Tuple[np.ndarray, Dict[int, str]]:
+    """
+    Assign distinct colors to nodes and create an RGBA image.
+
+    Args:
+        node_list: List of node IDs
+        labeled_array: 3D numpy array with labels corresponding to node IDs
+
+    Returns:
+        Tuple of (RGBA-coded numpy array (H, W, D, 4), dictionary mapping nodes to color names)
+    """
+
+    # Sort node IDs (descending)
+    sorted_nodes = sorted(node_list, reverse=True)
+
+    # Generate distinct colors
+    colors = generate_distinct_colors(len(node_list))
+    random.shuffle(colors)  # Shuffled so adjacent structures are likely to stand out
+
+    # Convert RGB colors to RGBA by adding an alpha channel
+    colors_rgba = [(r, g, b, 255) for r, g, b in colors]  # Full opacity for colored regions
+
+    # Create mapping from node to color
+    node_to_color = {node: colors_rgba[i] for i, node in enumerate(sorted_nodes)}
+
+    # Create RGBA array (initialized with transparent background)
+    rgba_array = np.zeros((*labeled_array.shape, 4), dtype=np.uint8)
+
+    # Assign colors to each voxel based on its label
+    for label in np.unique(labeled_array):
+        if label in node_to_color:  # Skip background (usually label 0)
+            mask = labeled_array == label
+            for i in range(4):  # RGBA channels
+                rgba_array[mask, i] = node_to_color[label][i]
+
+    # Convert the RGBA colors back to RGB for color naming
+    node_to_color_rgb = {k: tuple(v[:3]) for k, v in node_to_color.items()}
+    node_to_color_names = convert_node_colors_to_names(node_to_color_rgb)
+
+    return rgba_array, node_to_color_names
+
 def assign_community_colors(community_dict: Dict[int, int], labeled_array: np.ndarray) -> Tuple[np.ndarray, Dict[int, str]]:
     """
     Assign distinct colors to communities and create an RGBA image.
@@ -7,46 +7,37 @@ from concurrent.futures import ThreadPoolExecutor, as_completed
 import tifffile
 from functools import partial
 import pandas as pd
+from scipy import ndimage
 
-def get_reslice_indices(args):
-    """Internal method used for the secondary algorithm that finds dimensions for subarrays around nodes"""
-
-    indices, dilate_xy, dilate_z, array_shape = args
-    try:
-        max_indices = np.amax(indices, axis = 0) #Get the max/min of each index.
-    except ValueError: #Return Nones if this error is encountered
+def get_reslice_indices(slice_obj, dilate_xy, dilate_z, array_shape):
+    """Convert slice object to padded indices accounting for dilation and boundaries"""
+    if slice_obj is None:
         return None, None, None
-    min_indices = np.amin(indices, axis = 0)
-
-    z_max, y_max, x_max = max_indices[0], max_indices[1], max_indices[2]
-
-    z_min, y_min, x_min = min_indices[0], min_indices[1], min_indices[2]
-
-    y_max = y_max + ((dilate_xy-1)/2) + 1 #Establish dimensions of intended subarray, expanding the max/min indices to include
-    y_min = y_min - ((dilate_xy-1)/2) - 1 #the future dilation space (by adding/subtracting half the dilation kernel for each axis)
-    x_max = x_max + ((dilate_xy-1)/2) + 1 #an additional index is added in each direction to make sure nothing is discluded.
+
+    z_slice, y_slice, x_slice = slice_obj
+
+    # Extract min/max from slices
+    z_min, z_max = z_slice.start, z_slice.stop - 1
+    y_min, y_max = y_slice.start, y_slice.stop - 1
+    x_min, x_max = x_slice.start, x_slice.stop - 1
+
+    # Add dilation padding
+    y_max = y_max + ((dilate_xy-1)/2) + 1
+    y_min = y_min - ((dilate_xy-1)/2) - 1
+    x_max = x_max + ((dilate_xy-1)/2) + 1
     x_min = x_min - ((dilate_xy-1)/2) - 1
     z_max = z_max + ((dilate_z-1)/2) + 1
     z_min = z_min - ((dilate_z-1)/2) - 1
 
-    if y_max > (array_shape[1] - 1): #Some if statements to make sure the subarray will not cause an indexerror
-        y_max = (array_shape[1] - 1)
-    if x_max > (array_shape[2] - 1):
-        x_max = (array_shape[2] - 1)
-    if z_max > (array_shape[0] - 1):
-        z_max = (array_shape[0] - 1)
-    if y_min < 0:
-        y_min = 0
-    if x_min < 0:
-        x_min = 0
-    if z_min < 0:
-        z_min = 0
-
-    y_vals = [y_min, y_max] #Return the subarray dimensions as lists
-    x_vals = [x_min, x_max]
-    z_vals = [z_min, z_max]
-
-    return z_vals, y_vals, x_vals
+    # Boundary checks
+    y_max = min(y_max, array_shape[1] - 1)
+    x_max = min(x_max, array_shape[2] - 1)
+    z_max = min(z_max, array_shape[0] - 1)
+    y_min = max(y_min, 0)
+    x_min = max(x_min, 0)
+    z_min = max(z_min, 0)
+
+    return [z_min, z_max], [y_min, y_max], [x_min, x_max]
 
 def reslice_3d_array(args):
     """Internal method used for the secondary algorithm to reslice subarrays around nodes."""
@@ -97,39 +88,46 @@ def _get_node_edge_dict(label_array, edge_array, label, dilate_xy, dilate_z, cor
     return args
 
 def process_label(args):
-    """Internal method used for the secondary algorithm to process a particular node."""
-    nodes, edges, label, dilate_xy, dilate_z, array_shape = args
+    """Modified to use pre-computed bounding boxes instead of argwhere"""
+    nodes, edges, label, dilate_xy, dilate_z, array_shape, bounding_boxes = args
    print(f"Processing node {label}")
-    indices = np.argwhere(nodes == label)
-    if len(indices) == 0:
+
+    # Get the pre-computed bounding box for this label
+    slice_obj = bounding_boxes[label-1]  # -1 because label numbers start at 1
+    if slice_obj is None:
         return None, None, None
-    z_vals, y_vals, x_vals = get_reslice_indices((indices, dilate_xy, dilate_z, array_shape))
-    if z_vals is None: #If get_reslice_indices ran into a ValueError, nothing is returned.
+
+    z_vals, y_vals, x_vals = get_reslice_indices(slice_obj, dilate_xy, dilate_z, array_shape)
+    if z_vals is None:
         return None, None, None
+
     sub_nodes = reslice_3d_array((nodes, z_vals, y_vals, x_vals))
     sub_edges = reslice_3d_array((edges, z_vals, y_vals, x_vals))
     return label, sub_nodes, sub_edges
 
 
-def create_node_dictionary(nodes, edges, num_nodes, dilate_xy, dilate_z, cores = 0):
-    """Internal method used for the secondary algorithm to process nodes in parallel."""
-    # Initialize the dictionary to be returned
-    node_dict = {}
 
+def create_node_dictionary(nodes, edges, num_nodes, dilate_xy, dilate_z, cores=0):
+    """Modified to pre-compute all bounding boxes using find_objects"""
+    node_dict = {}
     array_shape = nodes.shape
-
+
+    # Get all bounding boxes at once
+    bounding_boxes = ndimage.find_objects(nodes)
+
     # Use ThreadPoolExecutor for parallel execution
     with ThreadPoolExecutor(max_workers=mp.cpu_count()) as executor:
-        # First parallel section to process labels
-        # List of arguments for each parallel task
-        args_list = [(nodes, edges, i, dilate_xy, dilate_z, array_shape) for i in range(1, num_nodes + 1)]
+        # Create args list with bounding_boxes included
+        args_list = [(nodes, edges, i, dilate_xy, dilate_z, array_shape, bounding_boxes)
+                     for i in range(1, num_nodes + 1)]
 
         # Execute parallel tasks to process labels
         results = executor.map(process_label, args_list)
 
-        # Second parallel section to create dictionary entries
+        # Process results in parallel
         for label, sub_nodes, sub_edges in results:
-            executor.submit(create_dict_entry, node_dict, label, sub_nodes, sub_edges, dilate_xy, dilate_z, cores)
+            executor.submit(create_dict_entry, node_dict, label, sub_nodes, sub_edges,
+                            dilate_xy, dilate_z, cores)
 
     return node_dict
 
@@ -193,10 +191,10 @@ def quantify_edge_node(nodes, edges, search = 0, xy_scale = 1, z_scale = 1, core
     return edge_quants
 
 
+
 def calculate_voxel_volumes(array, xy_scale=1, z_scale=1):
     """
-    Calculate voxel volumes for each uniquely labelled object in a 3D numpy array
-    using parallel processing.
+    Calculate voxel volumes for each uniquely labelled object in a 3D numpy array.
 
     Args:
         array: 3D numpy array where different objects are marked with different integer labels
@@ -207,69 +205,25 @@ def calculate_voxel_volumes(array, xy_scale=1, z_scale=1):
         Dictionary mapping object labels to their voxel volumes
     """
 
-    def process_volume_chunk(chunk_data, labels, xy_scale, z_scale):
-        """
-        Calculate volumes for a chunk of the array.
-
-        Args:
-            chunk_data: 3D numpy array chunk
-            labels: Array of unique labels to process
-            xy_scale: Scale factor for x and y dimensions
-            z_scale: Scale factor for z dimension
-
-        Returns:
-            Dictionary of label: volume pairs for this chunk
-        """
-        chunk_volumes = {}
-        for label in labels:
-            volume = np.count_nonzero(chunk_data == label) * (xy_scale**2) * z_scale
-            if volume > 0:  # Only include if object exists in this chunk
-                chunk_volumes[label] = volume
-        return chunk_volumes
-
-    # Get unique labels (excluding 0 which typically represents background)
     labels = np.unique(array)
     if len(labels) == 2:
         array, _ = nettracer.label_objects(array)
-        labels = np.unique(array)
-    labels = labels[labels != 0]  # Remove background label if present
-
-    if len(labels) == 0:
-        return {}
-
-    # Get number of CPU cores
-    num_cores = mp.cpu_count()
-
-    # Calculate chunk size along y-axis
-    chunk_size = array.shape[1] // num_cores
-    if chunk_size < 1:
-        chunk_size = 1
-
-    # Create chunks along y-axis
-    chunks = []
-    for i in range(0, array.shape[1], chunk_size):
-        end = min(i + chunk_size, array.shape[1])
-        chunks.append(array[:, i:end, :])
+
+    del labels
 
-    # Process chunks in parallel
-    process_func = partial(process_volume_chunk,
-                           labels=labels,
-                           xy_scale=xy_scale,
-                           z_scale=z_scale)
+    # Get volumes using bincount
+    if 0 in array:
+        volumes = np.bincount(array.ravel())[1:]
+    else:
+        volumes = np.bincount(array.ravel())
+
 
-    volumes = {}
-    with ThreadPoolExecutor(max_workers=num_cores) as executor:
-        chunk_results = list(executor.map(process_func, chunks))
-
-        # Combine results from all chunks
-        for chunk_volumes in chunk_results:
-            for label, volume in chunk_volumes.items():
-                if label in volumes:
-                    volumes[label] += volume
-                else:
-                    volumes[label] = volume
+    # Apply scaling
+    volumes = volumes * (xy_scale**2) * z_scale
 
-    return volumes
+    # Create dictionary with label:volume pairs
+    return {label: volume for label, volume in enumerate(volumes, start=1) if volume > 0}
+
 
 
 def search_neighbor_ids(nodes, targets, id_dict, neighborhood_dict, totals, search, xy_scale, z_scale, root):
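The bincount rewrite above counts every label's voxels in one vectorized pass instead of one masked scan per label. A minimal standalone equivalent (hypothetical helper name; this variant keeps index 0 in the count array so label indices never shift, and filters background in the final dict):

```python
import numpy as np

def label_volumes(array, xy_scale=1.0, z_scale=1.0):
    """Voxel counts per nonzero label, scaled to physical volume."""
    counts = np.bincount(array.ravel())       # counts[k] = number of voxels with label k
    volumes = counts * (xy_scale ** 2) * z_scale
    # Skip label 0 (background) and labels that do not occur.
    return {label: v for label, v in enumerate(volumes) if label != 0 and v > 0}
```

This runs in O(N) over the array regardless of how many labels exist, whereas the per-label approach was O(N * n_labels).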
@@ -5,6 +5,7 @@ from scipy import ndimage
 from skimage import measure
 import cv2
 import concurrent.futures
+from concurrent.futures import ThreadPoolExecutor, as_completed
 from scipy.ndimage import zoom
 import multiprocessing as mp
 import os
@@ -23,7 +24,6 @@ except:
 from . import node_draw
 from . import network_draw
 from skimage import morphology as mpg
-from concurrent.futures import ThreadPoolExecutor, as_completed
 from . import smart_dilate
 from . import modularity
 from . import simple_network
@@ -37,45 +37,35 @@ from . import proximity
 #These next several methods relate to searching with 3D objects by dilating each one in a subarray around their neighborhood although I don't explicitly use this anywhere... can call them deprecated although I may want to use them later again so I have them still written out here.
 
 
-def get_reslice_indices(args):
-    """Internal method used for the secondary algorithm that finds dimensions for subarrays around nodes"""
-
-    indices, dilate_xy, dilate_z, array_shape = args
-    try:
-        max_indices = np.amax(indices, axis = 0) #Get the max/min of each index.
-    except ValueError: #Return Nones if this error is encountered
+def get_reslice_indices(slice_obj, dilate_xy, dilate_z, array_shape):
+    """Convert slice object to padded indices accounting for dilation and boundaries"""
+    if slice_obj is None:
         return None, None, None
-    min_indices = np.amin(indices, axis = 0)
-
-    z_max, y_max, x_max = max_indices[0], max_indices[1], max_indices[2]
-
-    z_min, y_min, x_min = min_indices[0], min_indices[1], min_indices[2]
-
-    y_max = y_max + ((dilate_xy-1)/2) + 1 #Establish dimensions of intended subarray, expanding the max/min indices to include
-    y_min = y_min - ((dilate_xy-1)/2) - 1 #the future dilation space (by adding/subtracting half the dilation kernel for each axis)
-    x_max = x_max + ((dilate_xy-1)/2) + 1 #an additional index is added in each direction to make sure nothing is discluded.
+
+    z_slice, y_slice, x_slice = slice_obj
+
+    # Extract min/max from slices
+    z_min, z_max = z_slice.start, z_slice.stop - 1
+    y_min, y_max = y_slice.start, y_slice.stop - 1
+    x_min, x_max = x_slice.start, x_slice.stop - 1
+
+    # Add dilation padding
+    y_max = y_max + ((dilate_xy-1)/2) + 1
+    y_min = y_min - ((dilate_xy-1)/2) - 1
+    x_max = x_max + ((dilate_xy-1)/2) + 1
     x_min = x_min - ((dilate_xy-1)/2) - 1
     z_max = z_max + ((dilate_z-1)/2) + 1
     z_min = z_min - ((dilate_z-1)/2) - 1
 
-    if y_max > (array_shape[1] - 1): #Some if statements to make sure the subarray will not cause an indexerror
-        y_max = (array_shape[1] - 1)
-    if x_max > (array_shape[2] - 1):
-        x_max = (array_shape[2] - 1)
-    if z_max > (array_shape[0] - 1):
-        z_max = (array_shape[0] - 1)
-    if y_min < 0:
-        y_min = 0
-    if x_min < 0:
-        x_min = 0
-    if z_min < 0:
-        z_min = 0
-
-    y_vals = [y_min, y_max] #Return the subarray dimensions as lists
-    x_vals = [x_min, x_max]
-    z_vals = [z_min, z_max]
-
-    return z_vals, y_vals, x_vals
+    # Boundary checks
+    y_max = min(y_max, array_shape[1] - 1)
+    x_max = min(x_max, array_shape[2] - 1)
+    z_max = min(z_max, array_shape[0] - 1)
+    y_min = max(y_min, 0)
+    x_min = max(x_min, 0)
+    z_min = max(z_min, 0)
+
+    return [z_min, z_max], [y_min, y_max], [x_min, x_max]
 
 def reslice_3d_array(args):
     """Internal method used for the secondary algorithm to reslice subarrays around nodes."""
@@ -110,37 +100,45 @@ def _get_node_edge_dict(label_array, edge_array, label, dilate_xy, dilate_z):
     return edge_array
 
 def process_label(args):
-    """Internal method used for the secondary algorithm to process a particular node."""
-    nodes, edges, label, dilate_xy, dilate_z, array_shape = args
+    """Modified to use pre-computed bounding boxes instead of argwhere"""
+    nodes, edges, label, dilate_xy, dilate_z, array_shape, bounding_boxes = args
     print(f"Processing node {label}")
-    indices = np.argwhere(nodes == label)
-    z_vals, y_vals, x_vals = get_reslice_indices((indices, dilate_xy, dilate_z, array_shape))
-    if z_vals is None: #If get_reslice_indices ran into a ValueError, nothing is returned.
+
+    # Get the pre-computed bounding box for this label
+    slice_obj = bounding_boxes[label-1]  # -1 because label numbers start at 1
+    if slice_obj is None:
         return None, None, None
+
+    z_vals, y_vals, x_vals = get_reslice_indices(slice_obj, dilate_xy, dilate_z, array_shape)
+    if z_vals is None:
+        return None, None, None
+
     sub_nodes = reslice_3d_array((nodes, z_vals, y_vals, x_vals))
     sub_edges = reslice_3d_array((edges, z_vals, y_vals, x_vals))
     return label, sub_nodes, sub_edges
 
 
 def create_node_dictionary(nodes, edges, num_nodes, dilate_xy, dilate_z):
-    """Internal method used for the secondary algorithm to process nodes in parallel."""
-    # Initialize the dictionary to be returned
+    """Modified to pre-compute all bounding boxes using find_objects"""
     node_dict = {}
-
     array_shape = nodes.shape
-
+
+    # Get all bounding boxes at once
+    bounding_boxes = ndimage.find_objects(nodes)
+
     # Use ThreadPoolExecutor for parallel execution
     with ThreadPoolExecutor(max_workers=mp.cpu_count()) as executor:
-        # First parallel section to process labels
-        # List of arguments for each parallel task
-        args_list = [(nodes, edges, i, dilate_xy, dilate_z, array_shape) for i in range(1, num_nodes + 1)]
+        # Create args list with bounding_boxes included
+        args_list = [(nodes, edges, i, dilate_xy, dilate_z, array_shape, bounding_boxes)
+                     for i in range(1, num_nodes + 1)]
 
         # Execute parallel tasks to process labels
         results = executor.map(process_label, args_list)
 
-        # Second parallel section to create dictionary entries
+        # Process results in parallel
         for label, sub_nodes, sub_edges in results:
-            executor.submit(create_dict_entry, node_dict, label, sub_nodes, sub_edges, dilate_xy, dilate_z)
+            executor.submit(create_dict_entry, node_dict, label, sub_nodes, sub_edges,
+                            dilate_xy, dilate_z)
 
     return node_dict
 
@@ -4093,6 +4091,29 @@ class Network_3D:
 
 
         return image, output
+
+    def node_to_color(self, down_factor = None, mode = 0):
+
+        if mode == 0:
+            array = self._nodes
+        elif mode == 1:
+            array = self._edges
+
+        items = list(np.unique(array))
+        if 0 in items:
+            del items[0]
+
+
+        if down_factor is not None:
+            original_shape = array.shape
+            array = downsample(array, down_factor)
+
+        array, output = community_extractor.assign_node_colors(items, array)
+
+        if down_factor is not None:
+            array = upsample_with_padding(array, down_factor, original_shape)
+
+        return array, output
 
 