nettracer3d 0.5.2__tar.gz → 0.5.3__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {nettracer3d-0.5.2/src/nettracer3d.egg-info → nettracer3d-0.5.3}/PKG-INFO +9 -1
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/README.md +8 -1
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/pyproject.toml +3 -2
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/morphology.py +63 -109
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/nettracer.py +47 -49
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/nettracer_gui.py +58 -13
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/proximity.py +45 -47
- {nettracer3d-0.5.2 → nettracer3d-0.5.3/src/nettracer3d.egg-info}/PKG-INFO +9 -1
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d.egg-info/requires.txt +1 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/LICENSE +0 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/setup.cfg +0 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/__init__.py +0 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/community_extractor.py +0 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/hub_getter.py +0 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/modularity.py +0 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/network_analysis.py +0 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/network_draw.py +0 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/node_draw.py +0 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/run.py +0 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/segmenter.py +0 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/simple_network.py +0 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/smart_dilate.py +0 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d.egg-info/SOURCES.txt +0 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d.egg-info/dependency_links.txt +0 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d.egg-info/entry_points.txt +0 -0
- {nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d.egg-info/top_level.txt +0 -0
{nettracer3d-0.5.2/src/nettracer3d.egg-info → nettracer3d-0.5.3}/PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.2
 Name: nettracer3d
-Version: 0.5.2
+Version: 0.5.3
 Summary: Scripts for intializing and analyzing networks from segmentations of three dimensional images.
 Author-email: Liam McLaughlin <mclaughlinliam99@gmail.com>
 Project-URL: User_Tutorial, https://www.youtube.com/watch?v=cRatn5VTWDY
@@ -26,6 +26,7 @@ Requires-Dist: tifffile==2023.7.18
 Requires-Dist: qtrangeslider==0.1.5
 Requires-Dist: PyQt6==6.8.0
 Requires-Dist: scikit-learn==1.6.1
+Requires-Dist: nibabel==5.2.0
 Provides-Extra: cuda11
 Requires-Dist: cupy-cuda11x; extra == "cuda11"
 Provides-Extra: cuda12
@@ -42,3 +43,10 @@ for a video tutorial on using the GUI.
 NetTracer3D is free to use/fork for academic/nonprofit use so long as citation is provided, and is available for commercial use at a fee (see license file for information).
 
 NetTracer3D was developed by Liam McLaughlin while working under Dr. Sanjay Jain at Washington University School of Medicine.
+
+-- Version 0.5.3 updates --
+
+1. Improved calculate volumes method. Previous method used np.argwhere() to count voxels of labeled objects in parallel which was quite strenuous in large arrays with many objects. New method uses np.bincount() which uses optimized numpy C libraries to do the same.
+2. scipy.ndimage.find_objects() method was replaced as the method to find bounding boxes for objects when searching for object neighborhoods for the morphological proximity network and the edge < > node interaction quantification. This new version should be substantially faster in big arrays with many labels. (Depending on how well this improves performance, I may reimplement the secondary network search algorithm, as a side-option, which uses the same parallel-search within subarray strategies, as opposed to the primary network search algorithm that uses distance transforms).
+3. Image viewer window can now load in .nii format images, as well as .jpeg, .jpg, and .png. The nibabel library was added to the dependencies to enable .nii loading, although this is currently all it is used for (and the gui will still run without nibabel).
+4. Fixed bug regarding deleting edge objects.
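The bincount strategy described in update 1 can be sketched in isolation. This is a simplified illustration, not the package's exact code (`voxel_volumes` is a hypothetical name, and the real method also relabels binary inputs before counting):

```python
import numpy as np

def voxel_volumes(labels, xy_scale=1.0, z_scale=1.0):
    """Map each nonzero label to its physical volume with one bincount pass."""
    counts = np.bincount(labels.ravel())       # counts[i] = number of voxels with label i
    volumes = counts * (xy_scale ** 2) * z_scale
    # Skip index 0 (background) and drop labels that do not occur.
    return {label: vol for label, vol in enumerate(volumes) if label > 0 and vol > 0}

arr = np.zeros((2, 4, 4), dtype=int)
arr[0, :2, :2] = 1         # 4 voxels of label 1
arr[1, :, :] = 2           # 16 voxels of label 2
print(voxel_volumes(arr))  # {1: 4.0, 2: 16.0}
```

A single C-level pass over the raveled array replaces one `array == label` scan per object, which is where the speedup in large, densely labeled arrays comes from.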
{nettracer3d-0.5.2 → nettracer3d-0.5.3}/README.md

@@ -6,4 +6,11 @@ for a video tutorial on using the GUI.
 
 NetTracer3D is free to use/fork for academic/nonprofit use so long as citation is provided, and is available for commercial use at a fee (see license file for information).
 
-NetTracer3D was developed by Liam McLaughlin while working under Dr. Sanjay Jain at Washington University School of Medicine.
+NetTracer3D was developed by Liam McLaughlin while working under Dr. Sanjay Jain at Washington University School of Medicine.
+
+-- Version 0.5.3 updates --
+
+1. Improved calculate volumes method. Previous method used np.argwhere() to count voxels of labeled objects in parallel which was quite strenuous in large arrays with many objects. New method uses np.bincount() which uses optimized numpy C libraries to do the same.
+2. scipy.ndimage.find_objects() method was replaced as the method to find bounding boxes for objects when searching for object neighborhoods for the morphological proximity network and the edge < > node interaction quantification. This new version should be substantially faster in big arrays with many labels. (Depending on how well this improves performance, I may reimplement the secondary network search algorithm, as a side-option, which uses the same parallel-search within subarray strategies, as opposed to the primary network search algorithm that uses distance transforms).
+3. Image viewer window can now load in .nii format images, as well as .jpeg, .jpg, and .png. The nibabel library was added to the dependencies to enable .nii loading, although this is currently all it is used for (and the gui will still run without nibabel).
+4. Fixed bug regarding deleting edge objects.
{nettracer3d-0.5.2 → nettracer3d-0.5.3}/pyproject.toml

@@ -1,6 +1,6 @@
 [project]
 name = "nettracer3d"
-version = "0.5.2"
+version = "0.5.3"
 authors = [
   { name="Liam McLaughlin", email="mclaughlinliam99@gmail.com" },
 ]
@@ -20,7 +20,8 @@ dependencies = [
     "tifffile == 2023.7.18",
     "qtrangeslider == 0.1.5",
     "PyQt6 == 6.8.0",
-    "scikit-learn == 1.6.1"
+    "scikit-learn == 1.6.1",
+    "nibabel == 5.2.0"
 ]
 
 readme = "README.md"
{nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/morphology.py

@@ -7,46 +7,37 @@ from concurrent.futures import ThreadPoolExecutor, as_completed
 import tifffile
 from functools import partial
 import pandas as pd
+from scipy import ndimage
 
-def get_reslice_indices(args):
-    """..."""
-    indices, dilate_xy, dilate_z, array_shape = args
-    try:
-        max_indices = np.amax(indices, axis = 0) #Get the max/min of each index.
-    except ValueError: #Return Nones if this error is encountered
+def get_reslice_indices(slice_obj, dilate_xy, dilate_z, array_shape):
+    """Convert slice object to padded indices accounting for dilation and boundaries"""
+    if slice_obj is None:
         return None, None, None
-    [removed lines truncated in diff view]
+
+    z_slice, y_slice, x_slice = slice_obj
+
+    # Extract min/max from slices
+    z_min, z_max = z_slice.start, z_slice.stop - 1
+    y_min, y_max = y_slice.start, y_slice.stop - 1
+    x_min, x_max = x_slice.start, x_slice.stop - 1
+
+    # Add dilation padding
+    y_max = y_max + ((dilate_xy-1)/2) + 1
+    y_min = y_min - ((dilate_xy-1)/2) - 1
+    x_max = x_max + ((dilate_xy-1)/2) + 1
     x_min = x_min - ((dilate_xy-1)/2) - 1
     z_max = z_max + ((dilate_z-1)/2) + 1
     z_min = z_min - ((dilate_z-1)/2) - 1
 
-    [removed lines truncated in diff view]
-        x_min = 0
-    if z_min < 0:
-        z_min = 0
-
-    y_vals = [y_min, y_max] #Return the subarray dimensions as lists
-    x_vals = [x_min, x_max]
-    z_vals = [z_min, z_max]
-
-    return z_vals, y_vals, x_vals
+    # Boundary checks
+    y_max = min(y_max, array_shape[1] - 1)
+    x_max = min(x_max, array_shape[2] - 1)
+    z_max = min(z_max, array_shape[0] - 1)
+    y_min = max(y_min, 0)
+    x_min = max(x_min, 0)
+    z_min = max(z_min, 0)
+
+    return [z_min, z_max], [y_min, y_max], [x_min, x_max]
 
 def reslice_3d_array(args):
     """Internal method used for the secondary algorithm to reslice subarrays around nodes."""
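The rewritten get_reslice_indices turns a find_objects slice triple into padded, boundary-clamped index ranges. A standalone sketch of that arithmetic follows (hypothetical helper name; integer floor division is used here where the diff keeps float division):

```python
def padded_bounds(slice_obj, dilate_xy, dilate_z, shape):
    """Pad a (z, y, x) bounding-box slice triple by the dilation radius, clamped to the array."""
    z_sl, y_sl, x_sl = slice_obj
    pad_xy = (dilate_xy - 1) // 2 + 1   # planar pad, mirroring ((dilate_xy-1)/2)+1
    pad_z = (dilate_z - 1) // 2 + 1     # axial pad, mirroring ((dilate_z-1)/2)+1
    z = (max(z_sl.start - pad_z, 0), min(z_sl.stop - 1 + pad_z, shape[0] - 1))
    y = (max(y_sl.start - pad_xy, 0), min(y_sl.stop - 1 + pad_xy, shape[1] - 1))
    x = (max(x_sl.start - pad_xy, 0), min(x_sl.stop - 1 + pad_xy, shape[2] - 1))
    return z, y, x

# A 1-voxel object at (2, 2, 2) in a 5x5x5 array, padded for a 3-voxel kernel:
box = (slice(2, 3), slice(2, 3), slice(2, 3))   # shape of an ndimage.find_objects entry
print(padded_bounds(box, 3, 3, (5, 5, 5)))      # ((0, 4), (0, 4), (0, 4))
```

The padding guarantees a dilated object still fits inside its cropped subarray, while the min/max clamps keep the indices legal at the volume edges.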
@@ -97,39 +88,46 @@ def _get_node_edge_dict(label_array, edge_array, label, dilate_xy, dilate_z, cor
     return args
 
 def process_label(args):
-    """..."""
-    nodes, edges, label, dilate_xy, dilate_z, array_shape = args
+    """Modified to use pre-computed bounding boxes instead of argwhere"""
+    nodes, edges, label, dilate_xy, dilate_z, array_shape, bounding_boxes = args
     print(f"Processing node {label}")
-    [removed lines truncated in diff view]
+
+    # Get the pre-computed bounding box for this label
+    slice_obj = bounding_boxes[label-1] # -1 because label numbers start at 1
+    if slice_obj is None:
         return None, None, None
-    [removed lines truncated in diff view]
+
+    z_vals, y_vals, x_vals = get_reslice_indices(slice_obj, dilate_xy, dilate_z, array_shape)
+    if z_vals is None:
         return None, None, None
+
     sub_nodes = reslice_3d_array((nodes, z_vals, y_vals, x_vals))
     sub_edges = reslice_3d_array((edges, z_vals, y_vals, x_vals))
     return label, sub_nodes, sub_edges
 
 
-def create_node_dictionary(nodes, edges, num_nodes, dilate_xy, dilate_z, cores = 0):
-    """Internal method used for the secondary algorithm to process nodes in parallel."""
-    # Initialize the dictionary to be returned
-    node_dict = {}
+def create_node_dictionary(nodes, edges, num_nodes, dilate_xy, dilate_z, cores=0):
+    """Modified to pre-compute all bounding boxes using find_objects"""
+    node_dict = {}
     array_shape = nodes.shape
-    [removed lines truncated in diff view]
+
+    # Get all bounding boxes at once
+    bounding_boxes = ndimage.find_objects(nodes)
+
     # Use ThreadPoolExecutor for parallel execution
     with ThreadPoolExecutor(max_workers=mp.cpu_count()) as executor:
-        # [comment truncated in diff view]
+        # Create args list with bounding_boxes included
+        args_list = [(nodes, edges, i, dilate_xy, dilate_z, array_shape, bounding_boxes)
+                    for i in range(1, num_nodes + 1)]
 
         # Execute parallel tasks to process labels
         results = executor.map(process_label, args_list)
 
-        # [comment truncated in diff view]
+        # Process results in parallel
         for label, sub_nodes, sub_edges in results:
-            executor.submit(create_dict_entry, node_dict, label, sub_nodes, sub_edges, [truncated]
+            executor.submit(create_dict_entry, node_dict, label, sub_nodes, sub_edges,
+                            dilate_xy, dilate_z, cores)
 
     return node_dict
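Update 2's core idea, a single scipy.ndimage.find_objects pass instead of a per-label np.argwhere scan, can be shown in miniature (illustrative toy array):

```python
import numpy as np
from scipy import ndimage

labels = np.zeros((4, 6, 6), dtype=int)
labels[0, 0:2, 0:2] = 1        # label 1: one z-slice, 2x2 in-plane
labels[2:4, 3:6, 3:6] = 2      # label 2: a 2x3x3 block

# One call returns a bounding-box slice triple per label (index 0 -> label 1),
# computed in a single pass over the array.
boxes = ndimage.find_objects(labels)
print(boxes[0])                # (slice(0, 1, None), slice(0, 2, None), slice(0, 2, None))

# Indexing with the slice triple crops the label's neighborhood subarray directly.
print(labels[boxes[1]].shape)  # (2, 3, 3)
```

Each slice triple can then be fed to the padding helper above, so the per-label cost drops from a full-array equality scan to a constant-time lookup.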
@@ -193,10 +191,10 @@ def quantify_edge_node(nodes, edges, search = 0, xy_scale = 1, z_scale = 1, core
     return edge_quants
 
+
 def calculate_voxel_volumes(array, xy_scale=1, z_scale=1):
     """
-    Calculate voxel volumes for each uniquely labelled object in a 3D numpy array
-    using parallel processing.
+    Calculate voxel volumes for each uniquely labelled object in a 3D numpy array.
 
     Args:
         array: 3D numpy array where different objects are marked with different integer labels
@@ -207,69 +205,25 @@ def calculate_voxel_volumes(array, xy_scale=1, z_scale=1):
         Dictionary mapping object labels to their voxel volumes
     """
 
-    def process_volume_chunk(chunk_data, labels, xy_scale, z_scale):
-        """
-        Calculate volumes for a chunk of the array.
-
-        Args:
-            chunk_data: 3D numpy array chunk
-            labels: Array of unique labels to process
-            xy_scale: Scale factor for x and y dimensions
-            z_scale: Scale factor for z dimension
-
-        Returns:
-            Dictionary of label: volume pairs for this chunk
-        """
-        chunk_volumes = {}
-        for label in labels:
-            volume = np.count_nonzero(chunk_data == label) * (xy_scale**2) * z_scale
-            if volume > 0: # Only include if object exists in this chunk
-                chunk_volumes[label] = volume
-        return chunk_volumes
-
-    # Get unique labels (excluding 0 which typically represents background)
     labels = np.unique(array)
     if len(labels) == 2:
         array, _ = nettracer.label_objects(array)
-
-    if len(labels) == 0:
-        return {}
-
-    # Get number of CPU cores
-    num_cores = mp.cpu_count()
-
-    # Calculate chunk size along y-axis
-    chunk_size = array.shape[1] // num_cores
-    if chunk_size < 1:
-        chunk_size = 1
-
-    # Create chunks along y-axis
-    chunks = []
-    for i in range(0, array.shape[1], chunk_size):
-        end = min(i + chunk_size, array.shape[1])
-        chunks.append(array[:, i:end, :])
+
+    del labels
 
-    # [removed lines truncated in diff view]
-        chunk_results = list(executor.map(process_func, chunks))
-
-        # Combine results from all chunks
-        for chunk_volumes in chunk_results:
-            for label, volume in chunk_volumes.items():
-                if label in volumes:
-                    volumes[label] += volume
-                else:
-                    volumes[label] = volume
+    # Get volumes using bincount
+    if 0 in array:
+        volumes = np.bincount(array.ravel())[1:]
+    else:
+        volumes = np.bincount(array.ravel())
 
+    # Apply scaling
+    volumes = volumes * (xy_scale**2) * z_scale
 
+    # Create dictionary with label:volume pairs
+    return {label: volume for label, volume in enumerate(volumes, start=1) if volume > 0}
 
 
 def search_neighbor_ids(nodes, targets, id_dict, neighborhood_dict, totals, search, xy_scale, z_scale, root):
{nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/nettracer.py

@@ -5,6 +5,7 @@ from scipy import ndimage
 from skimage import measure
 import cv2
 import concurrent.futures
+from concurrent.futures import ThreadPoolExecutor, as_completed
 from scipy.ndimage import zoom
 import multiprocessing as mp
 import os
@@ -23,7 +24,6 @@ except:
 from . import node_draw
 from . import network_draw
 from skimage import morphology as mpg
-from concurrent.futures import ThreadPoolExecutor, as_completed
 from . import smart_dilate
 from . import modularity
 from . import simple_network
@@ -37,45 +37,35 @@ from . import proximity
 #These next several methods relate to searching with 3D objects by dilating each one in a subarray around their neighborhood although I don't explicitly use this anywhere... can call them deprecated although I may want to use them later again so I have them still written out here.
 
 
-def get_reslice_indices(args):
-    """..."""
-    indices, dilate_xy, dilate_z, array_shape = args
-    try:
-        max_indices = np.amax(indices, axis = 0) #Get the max/min of each index.
-    except ValueError: #Return Nones if this error is encountered
+def get_reslice_indices(slice_obj, dilate_xy, dilate_z, array_shape):
+    """Convert slice object to padded indices accounting for dilation and boundaries"""
+    if slice_obj is None:
         return None, None, None
-    [removed lines truncated in diff view]
+
+    z_slice, y_slice, x_slice = slice_obj
+
+    # Extract min/max from slices
+    z_min, z_max = z_slice.start, z_slice.stop - 1
+    y_min, y_max = y_slice.start, y_slice.stop - 1
+    x_min, x_max = x_slice.start, x_slice.stop - 1
+
+    # Add dilation padding
+    y_max = y_max + ((dilate_xy-1)/2) + 1
+    y_min = y_min - ((dilate_xy-1)/2) - 1
+    x_max = x_max + ((dilate_xy-1)/2) + 1
     x_min = x_min - ((dilate_xy-1)/2) - 1
     z_max = z_max + ((dilate_z-1)/2) + 1
     z_min = z_min - ((dilate_z-1)/2) - 1
 
-    [removed lines truncated in diff view]
-        x_min = 0
-    if z_min < 0:
-        z_min = 0
-
-    y_vals = [y_min, y_max] #Return the subarray dimensions as lists
-    x_vals = [x_min, x_max]
-    z_vals = [z_min, z_max]
-
-    return z_vals, y_vals, x_vals
+    # Boundary checks
+    y_max = min(y_max, array_shape[1] - 1)
+    x_max = min(x_max, array_shape[2] - 1)
+    z_max = min(z_max, array_shape[0] - 1)
+    y_min = max(y_min, 0)
+    x_min = max(x_min, 0)
+    z_min = max(z_min, 0)
+
+    return [z_min, z_max], [y_min, y_max], [x_min, x_max]
 
 def reslice_3d_array(args):
     """Internal method used for the secondary algorithm to reslice subarrays around nodes."""
@@ -110,37 +100,45 @@ def _get_node_edge_dict(label_array, edge_array, label, dilate_xy, dilate_z):
     return edge_array
 
 def process_label(args):
-    """..."""
-    nodes, edges, label, dilate_xy, dilate_z, array_shape = args
+    """Modified to use pre-computed bounding boxes instead of argwhere"""
+    nodes, edges, label, dilate_xy, dilate_z, array_shape, bounding_boxes = args
     print(f"Processing node {label}")
-    [removed lines truncated in diff view]
+
+    # Get the pre-computed bounding box for this label
+    slice_obj = bounding_boxes[label-1] # -1 because label numbers start at 1
+    if slice_obj is None:
         return None, None, None
+
+    z_vals, y_vals, x_vals = get_reslice_indices(slice_obj, dilate_xy, dilate_z, array_shape)
+    if z_vals is None:
+        return None, None, None
+
     sub_nodes = reslice_3d_array((nodes, z_vals, y_vals, x_vals))
     sub_edges = reslice_3d_array((edges, z_vals, y_vals, x_vals))
     return label, sub_nodes, sub_edges
 
 
 def create_node_dictionary(nodes, edges, num_nodes, dilate_xy, dilate_z):
-    """..."""
-    # Initialize the dictionary to be returned
+    """Modified to pre-compute all bounding boxes using find_objects"""
     node_dict = {}
     array_shape = nodes.shape
-    [removed lines truncated in diff view]
+
+    # Get all bounding boxes at once
+    bounding_boxes = ndimage.find_objects(nodes)
+
     # Use ThreadPoolExecutor for parallel execution
     with ThreadPoolExecutor(max_workers=mp.cpu_count()) as executor:
-        # [comment truncated in diff view]
+        # Create args list with bounding_boxes included
+        args_list = [(nodes, edges, i, dilate_xy, dilate_z, array_shape, bounding_boxes)
+                    for i in range(1, num_nodes + 1)]
 
         # Execute parallel tasks to process labels
         results = executor.map(process_label, args_list)
 
-        # [comment truncated in diff view]
+        # Process results in parallel
         for label, sub_nodes, sub_edges in results:
-            executor.submit(create_dict_entry, node_dict, label, sub_nodes, sub_edges, [truncated]
+            executor.submit(create_dict_entry, node_dict, label, sub_nodes, sub_edges,
+                            dilate_xy, dilate_z)
 
     return node_dict
{nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/nettracer_gui.py

@@ -1286,7 +1286,7 @@ class ImageViewerWindow(QMainWindow):
             self.load_channel(1, my_network.edges, True)
             self.highlight_overlay = None
             self.update_display()
-            print("Network is not updated automatically, please recompute if [truncated]
+            print("Network is not updated automatically, please recompute if necessary. Identities are not automatically updated.")
             self.show_centroid_dialog()
 
         except Exception as e:
@@ -1315,7 +1315,7 @@ class ImageViewerWindow(QMainWindow):
 
 
         if len(self.clicked_values['edges']) > 0:
-            self.create_highlight_overlay([truncated]
+            self.create_highlight_overlay(edge_indices = self.clicked_values['edges'])
             mask = self.highlight_overlay == 0
             my_network.edges = my_network.edges * mask
             self.load_channel(1, my_network.edges, True)
@@ -2954,14 +2954,46 @@ class ImageViewerWindow(QMainWindow):
 
         try:
             if not data: # For solo loading
-                import tifffile
                 filename, _ = QFileDialog.getOpenFileName(
                     self,
                     f"Load Channel {channel_index + 1}",
                     "",
-                    "[truncated]
+                    "Image Files (*.tif *.tiff *.nii *.jpg *.jpeg *.png)"
                 )
-                [removed lines truncated in diff view]
+
+                if not filename:
+                    return
+
+                file_extension = filename.lower().split('.')[-1]
+
+                try:
+                    if file_extension in ['tif', 'tiff']:
+                        import tifffile
+                        self.channel_data[channel_index] = tifffile.imread(filename)
+
+                    elif file_extension == 'nii':
+                        import nibabel as nib
+                        nii_img = nib.load(filename)
+                        # Get data and transpose to match TIFF orientation
+                        # If X needs to become Z, we move axis 2 (X) to position 0 (Z)
+                        data = nii_img.get_fdata()
+                        self.channel_data[channel_index] = np.transpose(data, (2, 1, 0))
+
+                    elif file_extension in ['jpg', 'jpeg', 'png']:
+                        from PIL import Image
+
+                        with Image.open(filename) as img:
+                            # Convert directly to numpy array, keeping color if present
+                            self.channel_data[channel_index] = np.array(img)
+
+                        # Debug info to check shape
+                        print(f"Loaded image shape: {self.channel_data[channel_index].shape}")
+
+                except ImportError as e:
+                    QMessageBox.critical(self, "Error", f"Required library not installed: {str(e)}")
+                except Exception as e:
+                    QMessageBox.critical(self, "Error", f"Error loading image: {str(e)}")
+
 
             if len(self.channel_data[channel_index].shape) == 2: # handle 2d data
                 self.channel_data[channel_index] = np.expand_dims(self.channel_data[channel_index], axis=0)
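Two small array manipulations from the new loader can be shown with plain numpy (nibabel itself omitted here): the (X, Y, Z) to (Z, Y, X) transpose applied to NIfTI data, and the promotion of 2-D images to one-slice stacks.

```python
import numpy as np

# NIfTI volumes index as (X, Y, Z); TIFF stacks are read as (Z, Y, X).
xyz = np.arange(2 * 3 * 4).reshape(2, 3, 4)   # X=2, Y=3, Z=4
zyx = np.transpose(xyz, (2, 1, 0))
print(zyx.shape)                              # (4, 3, 2)
assert zyx[3, 1, 0] == xyz[0, 1, 3]           # same voxel, axes reordered

# 2-D images become a single-slice 3-D stack, as in the loader's expand_dims step.
img = np.zeros((3, 4))
stack = np.expand_dims(img, axis=0)
print(stack.shape)                            # (1, 3, 4)
```

np.transpose returns a view, so the axis reorder itself costs no copy; only downstream operations that need contiguous memory pay for one.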
@@ -2983,10 +3015,13 @@ class ImageViewerWindow(QMainWindow):
         for i in range(4): #Try to ensure users don't load in different sized arrays
             if self.channel_data[i] is None or i == channel_index or data:
                 if self.highlight_overlay is not None: #Make sure highlight overlay is always the same shape as new images
-                    [removed lines truncated in diff view]
+                    try:
+                        if self.channel_data[i].shape[:3] != self.highlight_overlay.shape:
+                            self.resizing = True
+                            reset_resize = True
+                            self.highlight_overlay = None
+                    except:
+                        pass
                 continue
             else:
                 old_shape = self.channel_data[i].shape[:3] #Ask user to resize images that are shaped differently
@@ -3061,6 +3096,8 @@ class ImageViewerWindow(QMainWindow):
 
 
         except Exception as e:
+            import traceback
+            print(traceback.format_exc())
             if not data:
                 from PyQt6.QtWidgets import QMessageBox
                 QMessageBox.critical(
@@ -6916,11 +6953,16 @@ class WatershedDialog(QDialog):
         self.directory.setPlaceholderText("Leave empty for None")
         layout.addRow("Output Directory:", self.directory)
 
-        [removed lines truncated in diff view]
+        try:
 
+            active_shape = self.parent().channel_data[self.parent().active_channel].shape[0]
+
+            if active_shape == 1:
+                self.default = 0.2
+            else:
+                self.default = 0.05
+
+        except:
             self.default = 0.05
@@ -6949,6 +6991,9 @@ class WatershedDialog(QDialog):
         self.predownsample2.setPlaceholderText("Leave empty for None")
         layout.addRow("Smart Label GPU Downsample:", self.predownsample2)
 
+        layout.addRow("Note:", QLabel(f"If the optimal proportion watershed output is still labeling spatially seperated objects with the same label, try right placing the result in nodes or edges\nthen right click the image and choose 'select all', followed by right clicking and 'selection' -> 'split non-touching labels'."))
+
+
         # Add Run button
         run_button = QPushButton("Run Watershed")
         run_button.clicked.connect(self.run_watershed)
{nettracer3d-0.5.2 → nettracer3d-0.5.3}/src/nettracer3d/proximity.py

@@ -3,6 +3,7 @@ from . import nettracer
 import multiprocessing as mp
 from concurrent.futures import ThreadPoolExecutor, as_completed
 from scipy.spatial import KDTree
+from scipy import ndimage
 import concurrent.futures
 import multiprocessing as mp
 import pandas as pd
@@ -11,41 +12,35 @@ from typing import Dict, Union, Tuple, List, Optional
 
 # Related to morphological border searching:
 
-def get_reslice_indices(args):
-    """..."""
-    indices, dilate_xy, dilate_z, array_shape = args
-    try:
-        max_indices = np.amax(indices, axis = 0) #Get the max/min of each index.
-    except ValueError: #Return Nones if this error is encountered
+def get_reslice_indices(slice_obj, dilate_xy, dilate_z, array_shape):
+    """Convert slice object to padded indices accounting for dilation and boundaries"""
+    if slice_obj is None:
         return None, None, None
-    [removed lines truncated in diff view]
+
+    z_slice, y_slice, x_slice = slice_obj
+
+    # Extract min/max from slices
+    z_min, z_max = z_slice.start, z_slice.stop - 1
+    y_min, y_max = y_slice.start, y_slice.stop - 1
+    x_min, x_max = x_slice.start, x_slice.stop - 1
+
+    # Add dilation padding
+    y_max = y_max + ((dilate_xy-1)/2) + 1
+    y_min = y_min - ((dilate_xy-1)/2) - 1
+    x_max = x_max + ((dilate_xy-1)/2) + 1
     x_min = x_min - ((dilate_xy-1)/2) - 1
     z_max = z_max + ((dilate_z-1)/2) + 1
     z_min = z_min - ((dilate_z-1)/2) - 1
 
-    [removed lines truncated in diff view]
-        x_min = 0
-    if z_min < 0:
-        z_min = 0
-
-    y_vals = [y_min, y_max] #Return the subarray dimensions as lists
+    # Boundary checks
+    y_max = min(y_max, array_shape[1] - 1)
+    x_max = min(x_max, array_shape[2] - 1)
+    z_max = min(z_max, array_shape[0] - 1)
+    y_min = max(y_min, 0)
+    x_min = max(x_min, 0)
+    z_min = max(z_min, 0)
+
+    y_vals = [y_min, y_max]
     x_vals = [x_min, x_max]
     z_vals = [z_min, z_max]
@@ -85,40 +80,43 @@ def _get_node_node_dict(label_array, label, dilate_xy, dilate_z):
     return label_array
 
 def process_label(args):
-    """..."""
-    nodes, label, dilate_xy, dilate_z, array_shape = args
+    """Modified to use pre-computed bounding boxes instead of argwhere"""
+    nodes, label, dilate_xy, dilate_z, array_shape, bounding_boxes = args
     print(f"Processing node {label}")
-    [removed lines truncated in diff view]
+
+    # Get the pre-computed bounding box for this label
+    slice_obj = bounding_boxes[label-1] # -1 because label numbers start at 1
+    if slice_obj is None:
         return None, None
-    [removed lines truncated in diff view]
+
+    z_vals, y_vals, x_vals = get_reslice_indices(slice_obj, dilate_xy, dilate_z, array_shape)
+    if z_vals is None:
         return None, None
+
     sub_nodes = reslice_3d_array((nodes, z_vals, y_vals, x_vals))
     return label, sub_nodes
 
 
-def create_node_dictionary(nodes, num_nodes, dilate_xy, dilate_z, targets [truncated]
-    """..."""
-    # Initialize the dictionary to be returned
+def create_node_dictionary(nodes, num_nodes, dilate_xy, dilate_z, targets=None):
+    """Modified to pre-compute all bounding boxes using find_objects"""
     node_dict = {}
     array_shape = nodes.shape
-    [removed lines truncated in diff view]
+
+    # Get all bounding boxes at once
+    bounding_boxes = ndimage.find_objects(nodes)
+
     # Use ThreadPoolExecutor for parallel execution
     with ThreadPoolExecutor(max_workers=mp.cpu_count()) as executor:
-        # [comment truncated in diff view]
+        # Create args list with bounding_boxes included
+        args_list = [(nodes, i, dilate_xy, dilate_z, array_shape, bounding_boxes)
+                    for i in range(1, num_nodes + 1)]
 
         if targets is not None:
            args_list = [tup for tup in args_list if tup[1] in targets]
 
        results = executor.map(process_label, args_list)
 
-        # Second parallel section to create dictionary entries
+        # Process results in parallel
        for label, sub_nodes in results:
            executor.submit(create_dict_entry, node_dict, label, sub_nodes, dilate_xy, dilate_z)
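All three create_node_dictionary variants in this release share one skeleton: build per-label argument tuples (optionally filtered to a targets subset, as proximity.py does), map a worker over them with a thread pool, and collect the results into a dictionary. Reduced to a runnable toy (results collected directly rather than via executor.submit, and stand-in data instead of arrays):

```python
from concurrent.futures import ThreadPoolExecutor

def process_label(args):
    label, data = args          # stand-in for the real per-label tuple
    return label, data * 2      # stand-in for cropping the label's subarray

args_list = [(i, i * 10) for i in range(1, 6)]
targets = {2, 4}                # optional restriction, as in proximity.py
args_list = [tup for tup in args_list if tup[0] in targets]

node_dict = {}
with ThreadPoolExecutor(max_workers=2) as executor:
    # executor.map yields results in submission order, so collection is deterministic.
    for label, sub in executor.map(process_label, args_list):
        node_dict[label] = sub

print(node_dict)   # {2: 40, 4: 80}
```

Filtering the argument list before mapping means untargeted labels never reach a worker at all, rather than being skipped inside one.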
{nettracer3d-0.5.2 → nettracer3d-0.5.3/src/nettracer3d.egg-info}/PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.2
 Name: nettracer3d
-Version: 0.5.2
+Version: 0.5.3
 Summary: Scripts for intializing and analyzing networks from segmentations of three dimensional images.
 Author-email: Liam McLaughlin <mclaughlinliam99@gmail.com>
 Project-URL: User_Tutorial, https://www.youtube.com/watch?v=cRatn5VTWDY
@@ -26,6 +26,7 @@ Requires-Dist: tifffile==2023.7.18
 Requires-Dist: qtrangeslider==0.1.5
 Requires-Dist: PyQt6==6.8.0
 Requires-Dist: scikit-learn==1.6.1
+Requires-Dist: nibabel==5.2.0
 Provides-Extra: cuda11
 Requires-Dist: cupy-cuda11x; extra == "cuda11"
 Provides-Extra: cuda12
@@ -42,3 +43,10 @@ for a video tutorial on using the GUI.
 NetTracer3D is free to use/fork for academic/nonprofit use so long as citation is provided, and is available for commercial use at a fee (see license file for information).
 
 NetTracer3D was developed by Liam McLaughlin while working under Dr. Sanjay Jain at Washington University School of Medicine.
+
+-- Version 0.5.3 updates --
+
+1. Improved calculate volumes method. Previous method used np.argwhere() to count voxels of labeled objects in parallel which was quite strenuous in large arrays with many objects. New method uses np.bincount() which uses optimized numpy C libraries to do the same.
+2. scipy.ndimage.find_objects() method was replaced as the method to find bounding boxes for objects when searching for object neighborhoods for the morphological proximity network and the edge < > node interaction quantification. This new version should be substantially faster in big arrays with many labels. (Depending on how well this improves performance, I may reimplement the secondary network search algorithm, as a side-option, which uses the same parallel-search within subarray strategies, as opposed to the primary network search algorithm that uses distance transforms).
+3. Image viewer window can now load in .nii format images, as well as .jpeg, .jpg, and .png. The nibabel library was added to the dependencies to enable .nii loading, although this is currently all it is used for (and the gui will still run without nibabel).
+4. Fixed bug regarding deleting edge objects.