nettracer3d 0.3.1__tar.gz → 0.3.3__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {nettracer3d-0.3.1/src/nettracer3d.egg-info → nettracer3d-0.3.3}/PKG-INFO +4 -1
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/README.md +3 -0
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/pyproject.toml +1 -1
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d/community_extractor.py +15 -11
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d/nettracer.py +26 -4
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d/nettracer_gui.py +59 -22
- {nettracer3d-0.3.1 → nettracer3d-0.3.3/src/nettracer3d.egg-info}/PKG-INFO +4 -1
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/LICENSE +0 -0
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/setup.cfg +0 -0
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d/__init__.py +0 -0
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d/hub_getter.py +0 -0
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d/modularity.py +0 -0
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d/morphology.py +0 -0
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d/network_analysis.py +0 -0
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d/network_draw.py +0 -0
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d/node_draw.py +0 -0
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d/proximity.py +0 -0
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d/simple_network.py +0 -0
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d/smart_dilate.py +0 -0
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d.egg-info/SOURCES.txt +0 -0
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d.egg-info/dependency_links.txt +0 -0
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d.egg-info/requires.txt +0 -0
- {nettracer3d-0.3.1 → nettracer3d-0.3.3}/src/nettracer3d.egg-info/top_level.txt +0 -0
PKG-INFO:

@@ -1,6 +1,6 @@
 Metadata-Version: 2.2
 Name: nettracer3d
-Version: 0.3.1
+Version: 0.3.3
 Summary: Scripts for intializing and analyzing networks from segmentations of three dimensional images.
 Author-email: Liam McLaughlin <boom2449@gmail.com>
 Project-URL: User_Manual, https://drive.google.com/drive/folders/1fTkz3n4LN9_VxKRKC8lVQSlrz_wq0bVn?usp=drive_link
@@ -34,8 +34,11 @@ Requires-Dist: cupy; extra == "cupy"
 NetTracer3D is a python package developed for both 2D and 3D analysis of microscopic images in the .tif file format. It supports generation of 3D networks showing the relationships between objects (or nodes) in three dimensional space, either based on their own proximity or connectivity via connecting objects such as nerves or blood vessels. In addition to these functionalities are several advanced 3D data processing algorithms, such as labeling of branched structures or abstraction of branched structures into networks. Note that nettracer3d uses segmented data, which can be segmented from other softwares such as ImageJ and imported into NetTracer3D, although it does offer its own segmentation via intensity or volumetric thresholding. NetTracer3D currently has a fully functional GUI. To use the GUI, after installing the nettracer3d package via pip, run a python script in your env with the following commands:
 
 #Start
+
 from nettracer3d import nettracer_gui
+
 nettracer_gui.run_gui()
+
 #End
 
 This gui is built from the PyQt6 package and therefore may not function on dockers or virtual envs that are unable to support PyQt6 displays. More advanced documentation (especially for the GUI) is coming down the line, but for now please see: https://drive.google.com/drive/folders/1fTkz3n4LN9_VxKRKC8lVQSlrz_wq0bVn?usp=drive_link
README.md:

@@ -1,8 +1,11 @@
 NetTracer3D is a python package developed for both 2D and 3D analysis of microscopic images in the .tif file format. It supports generation of 3D networks showing the relationships between objects (or nodes) in three dimensional space, either based on their own proximity or connectivity via connecting objects such as nerves or blood vessels. In addition to these functionalities are several advanced 3D data processing algorithms, such as labeling of branched structures or abstraction of branched structures into networks. Note that nettracer3d uses segmented data, which can be segmented from other softwares such as ImageJ and imported into NetTracer3D, although it does offer its own segmentation via intensity or volumetric thresholding. NetTracer3D currently has a fully functional GUI. To use the GUI, after installing the nettracer3d package via pip, run a python script in your env with the following commands:
 
 #Start
+
 from nettracer3d import nettracer_gui
+
 nettracer_gui.run_gui()
+
 #End
 
 This gui is built from the PyQt6 package and therefore may not function on dockers or virtual envs that are unable to support PyQt6 displays. More advanced documentation (especially for the GUI) is coming down the line, but for now please see: https://drive.google.com/drive/folders/1fTkz3n4LN9_VxKRKC8lVQSlrz_wq0bVn?usp=drive_link
src/nettracer3d/community_extractor.py (lines truncated in the rendered diff are marked `…`):

@@ -781,16 +781,16 @@ def generate_distinct_colors(n_colors: int) -> List[Tuple[int, int, int]]:
         colors.append(rgb)
     return colors
 
-def assign_community_colors(community_dict: Dict[int, int], labeled_array: np.ndarray) -> np.ndarray:
+def assign_community_colors(community_dict: Dict[int, int], labeled_array: np.ndarray) -> Tuple[np.ndarray, Dict[int, str]]:
     """
-    Assign distinct colors to communities and create an …
+    Assign distinct colors to communities and create an RGBA image.
 
     Args:
         community_dict: Dictionary mapping node IDs to community numbers
         labeled_array: 3D numpy array with labels corresponding to node IDs
 
     Returns:
-        …
+        Tuple of (RGBA-coded numpy array (H, W, D, 4), dictionary mapping nodes to color names)
     """
     # Get unique communities and their sizes
     communities = set(community_dict.values())
@@ -802,26 +802,30 @@ def assign_community_colors(community_dict: Dict[int, int], labeled_array: np.nd
     # Generate distinct colors
    colors = generate_distinct_colors(len(communities))
 
+    # Convert RGB colors to RGBA by adding alpha channel
+    colors_rgba = [(r, g, b, 255) for r, g, b in colors]  # Full opacity for colored regions
+
     # Create mapping from community to color
-    community_to_color = {comm: …
+    community_to_color = {comm: colors_rgba[i] for i, comm in enumerate(sorted_communities)}
 
     # Create mapping from node ID to color
     node_to_color = {node: community_to_color[comm] for node, comm in community_dict.items()}
 
-    # Create …
-    …
+    # Create RGBA array (initialize with transparent background)
+    rgba_array = np.zeros((*labeled_array.shape, 4), dtype=np.uint8)
 
     # Assign colors to each voxel based on its label
     for label in np.unique(labeled_array):
         if label in node_to_color:  # Skip background (usually label 0)
             mask = labeled_array == label
-            for i in range(…
-            …
-            …
-    node_to_color_names = convert_node_colors_to_names(community_to_color)
+            for i in range(4):  # RGBA channels
+                rgba_array[mask, i] = node_to_color[label][i]
 
+    # Convert the RGB portion of community_to_color back to RGB for color naming
+    community_to_color_rgb = {k: tuple(v[:3]) for k, v in community_to_color.items()}
+    node_to_color_names = convert_node_colors_to_names(community_to_color_rgb)
 
-    return …
+    return rgba_array, node_to_color_names
 
 
 def assign_community_grays(community_dict: Dict[int, Union[int, str, Any]], labeled_array: np.ndarray) -> np.ndarray:
     """
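The rewritten assign_community_colors now fills an RGBA volume with a transparent background instead of returning a bare array. The masking idea can be sketched in pure NumPy (color_communities is a hypothetical helper name; the real function also derives color names via convert_node_colors_to_names, which is not shown here):

```python
import numpy as np

def color_communities(labeled, node_to_color):
    """Paint labeled voxels with per-node RGBA colors; background stays transparent."""
    rgba = np.zeros((*labeled.shape, 4), dtype=np.uint8)  # alpha 0 everywhere
    for label, color in node_to_color.items():
        rgba[labeled == label] = color  # broadcasts the 4-tuple over the mask
    return rgba

labeled = np.array([[0, 1],
                    [2, 1]])
rgba = color_communities(labeled, {1: (255, 0, 0, 255), 2: (0, 255, 0, 255)})
```

Assigning the whole 4-tuple through a boolean mask replaces the per-channel loop in the diff with a single broadcast, but the result is the same: opaque community colors over a fully transparent background.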
src/nettracer3d/nettracer.py:

@@ -950,9 +950,17 @@ def dilate_3D(tiff_array, dilated_x, dilated_y, dilated_z):
         dilated_slice = cv2.dilate(tiff_slice, kernel, iterations=1)
         return y, dilated_slice
 
+    """
+    def process_slice_third(x):
+        tiff_slice = tiff_array[:, :, x].astype(np.uint8)
+        dilated_slice = cv2.dilate(tiff_slice, kernel, iterations=1)
+        return x, dilated_slice
+    """
+
     # Create empty arrays to store the dilated results for the XY and XZ planes
     dilated_xy = np.zeros_like(tiff_array, dtype=np.uint8)
     dilated_xz = np.zeros_like(tiff_array, dtype=np.uint8)
+    #dilated_yz = np.zeros_like(tiff_array, dtype=np.uint8)
 
     kernel_x = int(dilated_x)
     kernel = create_circular_kernel(kernel_x)
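The hunk above adds a commented-out third-axis worker alongside the existing per-slice ones. The slice-wise scheme itself — dilate each 2D plane of a 3D stack independently — can be sketched without cv2 (dilate_slicewise is a hypothetical helper; a simple 4-neighbour NumPy dilation stands in for cv2.dilate with a circular kernel):

```python
import numpy as np

def dilate_slicewise(vol):
    """Dilate each XY slice of a 3D volume with a 4-neighbour (plus-shaped) kernel."""
    out = np.zeros_like(vol)
    for z in range(vol.shape[0]):
        s = vol[z].astype(bool)
        d = s.copy()
        d[1:, :] |= s[:-1, :]   # shift down
        d[:-1, :] |= s[1:, :]   # shift up
        d[:, 1:] |= s[:, :-1]   # shift right
        d[:, :-1] |= s[:, 1:]   # shift left
        out[z] = d
    return out

vol = np.zeros((1, 3, 3), dtype=np.uint8)
vol[0, 1, 1] = 1                 # a single seed voxel
plus = dilate_slicewise(vol)     # grows into a plus shape in its slice
```

In the package, the same per-slice loop is fanned out over a ThreadPoolExecutor and the results from the XY and XZ passes are OR-ed together to approximate a 3D dilation.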
@@ -969,7 +977,10 @@ def dilate_3D(tiff_array, dilated_x, dilated_y, dilated_z):
     kernel_x = int(dilated_x)
     kernel_z = int(dilated_z)
 
-    …
+    if kernel_x == kernel_z:
+        kernel = create_circular_kernel(kernel_z)
+    else:
+        kernel = create_ellipsoidal_kernel(kernel_x, kernel_z)
 
     with ThreadPoolExecutor(max_workers=num_cores) as executor:
         futures = {executor.submit(process_slice_other, y): y for y in range(tiff_array.shape[1])}
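The fixed branch now uses a circular kernel when the x and z extents agree and an ellipsoidal one otherwise. A sketch of that selection with a hypothetical NumPy ellipse builder (the package's own create_circular_kernel / create_ellipsoidal_kernel are not shown in this diff, so this is only illustrative):

```python
import numpy as np

def ellipse_kernel(dx, dz):
    # Hypothetical stand-in for the package's kernel builders:
    # a binary ellipse with semi-axes dx/2 (horizontal) and dz/2 (vertical)
    rx, rz = dx / 2, dz / 2
    z, x = np.ogrid[-int(rz):int(rz) + 1, -int(rx):int(rx) + 1]
    return ((x / rx) ** 2 + (z / rz) ** 2 <= 1).astype(np.uint8)

def pick_kernel(kernel_x, kernel_z):
    # Mirrors the fixed branch: circular when the extents match, ellipsoidal otherwise
    if kernel_x == kernel_z:
        return ellipse_kernel(kernel_z, kernel_z)   # degenerates to a circle
    return ellipse_kernel(kernel_x, kernel_z)

circle = pick_kernel(4, 4)
ellipse = pick_kernel(6, 2)
```

An anisotropic kernel matters here because the XZ pass dilates across the z axis, where voxel spacing usually differs from the in-plane spacing.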
@@ -978,12 +989,23 @@ def dilate_3D(tiff_array, dilated_x, dilated_y, dilated_z):
         y, dilated_slice = future.result()
         dilated_xz[:, y, :] = dilated_slice
 
+    """
+    with ThreadPoolExecutor(max_workers=num_cores) as executor:
+        futures = {executor.submit(process_slice_other, x): x for x in range(tiff_array.shape[2])}
+
+        for future in as_completed(futures):
+            x, dilated_slice = future.result()
+            dilated_yz[:, :, x] = dilated_slice
+    """
+
 
     # Overlay the results
-    final_result = dilated_xy | dilated_xz
+    final_result = (dilated_xy | dilated_xz)
 
     return final_result
 
+
+
 def dilate_3D_recursive(tiff_array, dilated_x, dilated_y, dilated_z, step_size=None):
     """Recursive 3D dilation method that handles odd-numbered dilations properly.
@@ -1000,7 +1022,7 @@ def dilate_3D_recursive(tiff_array, dilated_x, dilated_y, dilated_z, step_size=N
     # For small dilations relative to array size, don't use recursion
     max_dilation = max(dilated_x, dilated_y, dilated_z)
     if max_dilation < (0.2 * min_dim):
-        return …
+        return dilate_3D(tiff_array, dilated_x, dilated_y, dilated_z)
 
     # Initialize step_size for first call
     if step_size is None:
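The small-dilation guard in dilate_3D_recursive now falls back to plain dilate_3D instead of returning early with no result. The guard's logic, isolated (choose_strategy is a hypothetical name for illustration):

```python
def choose_strategy(shape, dilated_x, dilated_y, dilated_z):
    # Mirrors the fixed guard: dilations under 20% of the smallest
    # array dimension skip the recursive path entirely
    min_dim = min(shape)
    max_dilation = max(dilated_x, dilated_y, dilated_z)
    if max_dilation < (0.2 * min_dim):
        return "direct"      # now delegates to dilate_3D rather than returning nothing
    return "recursive"
```

Recursion only pays off when the structuring element is large relative to the volume; for small kernels, a single slice-wise pass is cheaper.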
@@ -1379,7 +1401,7 @@ def downsample(data, factor, directory=None, order=0):
         tifffile.imwrite(filename, data)
 
     return data
-
+
 def binarize(arrayimage, directory = None):
     """
     Can be used to binarize an image. Binary output will be saved to the active directory if none is specified.
src/nettracer3d/nettracer_gui.py:

@@ -476,7 +476,7 @@ class ImageViewerWindow(QMainWindow):
             return np.vstack(chunk_results)
 
         # Process nodes and edges in parallel using multiprocessing
-        with ThreadPoolExecutor(max_workers=…
+        with ThreadPoolExecutor(max_workers=mp.cpu_count()) as executor:
             future_nodes = executor.submit(process_channel, self.channel_data[0], node_indices, full_shape)
             future_edges = executor.submit(process_channel, self.channel_data[1], edge_indices, full_shape)
             future_overlay1 = executor.submit(process_channel, self.channel_data[2], overlay1_indices, full_shape)
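The pool is now sized with mp.cpu_count(). The submit/result pattern used for the four channels, in miniature (process_channel here is a hypothetical stand-in that just scales values, not the GUI's real worker):

```python
from concurrent.futures import ThreadPoolExecutor
import multiprocessing as mp

def process_channel(channel, scale):
    # Stand-in for the GUI's per-channel work
    return [v * scale for v in channel]

channels = {"nodes": [1, 2], "edges": [3, 4]}

# One worker per CPU, as in the fixed call
with ThreadPoolExecutor(max_workers=mp.cpu_count()) as executor:
    futures = {name: executor.submit(process_channel, data, 2)
               for name, data in channels.items()}
    results = {name: f.result() for name, f in futures.items()}
```

Threads (rather than processes) suit this case because the heavy lifting happens inside NumPy calls that release the GIL, so the channel arrays need no pickling.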
@@ -2514,10 +2514,17 @@ class ImageViewerWindow(QMainWindow):
         msg.setStandardButtons(QMessageBox.StandardButton.Yes | QMessageBox.StandardButton.No)
         return msg.exec() == QMessageBox.StandardButton.Yes
 
-    def load_channel(self, channel_index, channel_data=None, data=False, assign_shape = True):
+    def load_channel(self, channel_index, channel_data=None, data=False, assign_shape = True, preserve_zoom = None):
         """Load a channel and enable active channel selection if needed."""
 
         try:
+            # Store current zoom limits if they exist and weren't provided
+            if preserve_zoom is None and hasattr(self, 'ax'):
+                current_xlim = self.ax.get_xlim() if self.ax.get_xlim() != (0, 1) else None
+                current_ylim = self.ax.get_ylim() if self.ax.get_ylim() != (0, 1) else None
+            else:
+                current_xlim, current_ylim = preserve_zoom if preserve_zoom else (None, None)
+
             if not data:  # For solo loading
                 import tifffile
                 filename, _ = QFileDialog.getOpenFileName(
@@ -2543,7 +2550,6 @@ class ImageViewerWindow(QMainWindow):
                 self.channel_data[channel_index] = self.reduce_rgb_dimension(self.channel_data[channel_index])
 
 
-
             if channel_index == 0:
                 my_network.nodes = self.channel_data[channel_index]
             elif channel_index == 1:
@@ -2563,19 +2569,25 @@ class ImageViewerWindow(QMainWindow):
             self.active_channel_combo.setEnabled(True)
 
             # Update slider range if this is the first channel loaded
-            if len(self.channel_data[channel_index].shape) == 3:
+            if len(self.channel_data[channel_index].shape) == 3 or len(self.channel_data[channel_index].shape) == 4:
                 if not self.slice_slider.isEnabled():
                     self.slice_slider.setEnabled(True)
                     self.slice_slider.setMinimum(0)
                     self.slice_slider.setMaximum(self.channel_data[channel_index].shape[0] - 1)
-                    self.slice_slider.…
-                    …
+                    if self.slice_slider.value() < self.channel_data[channel_index].shape[0] - 1:
+                        self.current_slice = self.slice_slider.value()
+                    else:
+                        self.slice_slider.setValue(0)
+                        self.current_slice = 0
                 else:
                     self.slice_slider.setEnabled(True)
                     self.slice_slider.setMinimum(0)
                     self.slice_slider.setMaximum(self.channel_data[channel_index].shape[0] - 1)
-                    self.slice_slider.…
-                    …
+                    if self.slice_slider.value() < self.channel_data[channel_index].shape[0] - 1:
+                        self.current_slice = self.slice_slider.value()
+                    else:
+                        self.current_slice = 0
+                        self.slice_slider.setValue(0)
             else:
                 self.slice_slider.setEnabled(False)
@@ -2592,8 +2604,15 @@ class ImageViewerWindow(QMainWindow):
 
             if assign_shape:  #keep original shape tracked to undo resampling.
                 self.original_shape = self.channel_data[channel_index].shape
-            …
-            …
+
+            # Restore zoom limits if they existed
+            if current_xlim is not None and current_ylim is not None:
+                self.ax.set_xlim(current_xlim)
+                self.ax.set_ylim(current_ylim)
+                self.update_display(preserve_zoom = (current_xlim, current_ylim))
+            else:
+                self.update_display()
+
 
 
         except Exception as e:
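Taken together, these load_channel hunks implement a capture/redraw/restore cycle for the view limits. The pattern can be exercised without a GUI by stubbing out the small part of the matplotlib Axes contract it relies on (StubAxes and reload_with_zoom are hypothetical names for this sketch):

```python
class StubAxes:
    """Minimal stand-in for matplotlib's Axes get/set limit API."""
    def __init__(self):
        self._xlim = (0, 1)   # matplotlib's default, "never zoomed" limits
        self._ylim = (0, 1)
    def get_xlim(self): return self._xlim
    def set_xlim(self, lim): self._xlim = lim
    def get_ylim(self): return self._ylim
    def set_ylim(self, lim): self._ylim = lim

def reload_with_zoom(ax, redraw, preserve_zoom=None):
    # Capture limits first; default (0, 1) limits mean "never zoomed"
    if preserve_zoom is None:
        xlim = ax.get_xlim() if ax.get_xlim() != (0, 1) else None
        ylim = ax.get_ylim() if ax.get_ylim() != (0, 1) else None
    else:
        xlim, ylim = preserve_zoom
    redraw()  # the reload resets the view
    if xlim is not None and ylim is not None:
        ax.set_xlim(xlim)   # restore the user's zoom
        ax.set_ylim(ylim)

ax = StubAxes()
ax.set_xlim((10, 50))
ax.set_ylim((5, 25))
reload_with_zoom(ax, redraw=lambda: (ax.set_xlim((0, 1)), ax.set_ylim((0, 1))))
```

This is why the dialogs below pass preserve_zoom explicitly: they capture the limits before mutating the channel, then hand them through load_channel and update_display so the redraw cannot lose them.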
@@ -2842,7 +2861,7 @@ class ImageViewerWindow(QMainWindow):
                 self.channel_data[channel] is not None):
 
                 # Check if we're dealing with RGB data
-                is_rgb = len(self.channel_data[channel].shape) == 4 and self.channel_data[channel].shape[-1] == 3
+                is_rgb = len(self.channel_data[channel].shape) == 4 and (self.channel_data[channel].shape[-1] == 3 or self.channel_data[channel].shape[-1] == 4)
 
                 if len(self.channel_data[channel].shape) == 3 and not is_rgb:
                     current_image = self.channel_data[channel][self.current_slice, :, :]
@@ -2851,10 +2870,13 @@ class ImageViewerWindow(QMainWindow):
                 else:
                     current_image = self.channel_data[channel]
 
-                if is_rgb:
+                if is_rgb and self.channel_data[channel].shape[-1] == 3:
                     # For RGB images, just display directly without colormap
                     self.ax.imshow(current_image,
                                    alpha=0.7)
+                elif is_rgb and self.channel_data[channel].shape[-1] == 4:
+                    self.ax.imshow(current_image)  #For images that already have an alpha value and RGB, don't update alpha
+
                 else:
                     # Regular channel processing with colormap
                     # Calculate brightness/contrast limits from entire volume
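The display path now distinguishes RGB stacks (blended with alpha=0.7) from RGBA stacks (drawn as-is, since they carry their own alpha channel, e.g. the community overlays above). The shape test, isolated (classify_channel is a hypothetical name):

```python
import numpy as np

def classify_channel(arr):
    # Mirrors the widened is_rgb check: a 4-D stack whose last axis is 3 or 4
    is_rgb = arr.ndim == 4 and arr.shape[-1] in (3, 4)
    if not is_rgb:
        return "scalar"      # plain intensity data, drawn with a colormap
    # RGBA slices keep their own per-voxel alpha, so imshow gets no alpha= override;
    # plain RGB slices are blended uniformly with alpha=0.7
    return "rgba" if arr.shape[-1] == 4 else "rgb"
```

Passing a uniform alpha= to an RGBA image would override its per-voxel transparency, which is exactly what the new elif branch avoids.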
@@ -4148,8 +4170,11 @@ class ShuffleDialog(QDialog):
 
 
             if accepted_target == 4:
+                try:
+                    self.parent().highlight_overlay = n3d.binarize(active_data)
+                except:
+                    self.parent().highlight_overlay = None
 
-                self.parent().highlight_overlay = n3d.binarize(active_data)
             else:
                 self.parent().load_channel(accepted_target, channel_data = active_data, data = True)
@@ -4949,7 +4974,7 @@ class ResizeDialog(QDialog):
             if self.parent().highlight_overlay is not None:
                 self.parent().highlight_overlay = n3d.resize(self.parent().highlight_overlay, resize, order)
             if my_network.search_region is not None:
-                my_network.search_region = n3d.resize(search_region, resize, order)
+                my_network.search_region = n3d.resize(my_network.search_region, resize, order)
 
 
         else:
@@ -4966,7 +4991,7 @@ class ResizeDialog(QDialog):
             if self.parent().highlight_overlay is not None:
                 self.parent().highlight_overlay = n3d.upsample_with_padding(self.parent().highlight_overlay, original_shape = self.parent().original_shape)
 
-            my_network.search_region = n3d.upsample_with_padding(search_region, original_shape = self.parent().original_shape)
+            my_network.search_region = n3d.upsample_with_padding(my_network.search_region, original_shape = self.parent().original_shape)
 
 
         # Update slider range based on new z-dimension
@@ -5340,6 +5365,14 @@ class DilateDialog(QDialog):
     def run_dilate(self):
         try:
 
+            try:  #for retaining zoom params
+                current_xlim = self.parent().ax.get_xlim()
+                current_ylim = self.parent().ax.get_ylim()
+            except:
+                current_xlim = None
+                current_ylim = None
+
+
             accepted_mode = self.mode_selector.currentIndex()
 
             # Get amount
@@ -5384,9 +5417,9 @@ class DilateDialog(QDialog):
             )
 
             # Update both the display data and the network object
-            self.parent().load_channel(self.parent().active_channel, result, True)
+            self.parent().load_channel(self.parent().active_channel, result, True, preserve_zoom=(current_xlim, current_ylim))
 
-            self.parent().update_display()
+            self.parent().update_display(preserve_zoom=(current_xlim, current_ylim))
             self.accept()
 
         except Exception as e:
@@ -5432,6 +5465,13 @@ class ErodeDialog(QDialog):
 
     def run_erode(self):
         try:
+
+            try:  #for retaining zoom params
+                current_xlim = self.parent().ax.get_xlim()
+                current_ylim = self.parent().ax.get_ylim()
+            except:
+                current_xlim = None
+                current_ylim = None
 
             # Get amount
             try:
@@ -5462,14 +5502,11 @@ class ErodeDialog(QDialog):
                 z_scale = z_scale,
             )
 
-            # Update both the display data and the network object
-            self.parent().channel_data[self.parent().active_channel] = result
 
+            self.parent().load_channel(self.parent().active_channel, result, True, preserve_zoom=(current_xlim, current_ylim))
 
-            # Update the corresponding property in my_network
-            setattr(my_network, network_properties[self.parent().active_channel], result)
 
-            self.parent().update_display()
+            self.parent().update_display(preserve_zoom=(current_xlim, current_ylim))
             self.accept()
 
         except Exception as e:
src/nettracer3d.egg-info/PKG-INFO:

@@ -1,6 +1,6 @@
 Metadata-Version: 2.2
 Name: nettracer3d
-Version: 0.3.1
+Version: 0.3.3
 Summary: Scripts for intializing and analyzing networks from segmentations of three dimensional images.
 Author-email: Liam McLaughlin <boom2449@gmail.com>
 Project-URL: User_Manual, https://drive.google.com/drive/folders/1fTkz3n4LN9_VxKRKC8lVQSlrz_wq0bVn?usp=drive_link
@@ -34,8 +34,11 @@ Requires-Dist: cupy; extra == "cupy"
 NetTracer3D is a python package developed for both 2D and 3D analysis of microscopic images in the .tif file format. It supports generation of 3D networks showing the relationships between objects (or nodes) in three dimensional space, either based on their own proximity or connectivity via connecting objects such as nerves or blood vessels. In addition to these functionalities are several advanced 3D data processing algorithms, such as labeling of branched structures or abstraction of branched structures into networks. Note that nettracer3d uses segmented data, which can be segmented from other softwares such as ImageJ and imported into NetTracer3D, although it does offer its own segmentation via intensity or volumetric thresholding. NetTracer3D currently has a fully functional GUI. To use the GUI, after installing the nettracer3d package via pip, run a python script in your env with the following commands:
 
 #Start
+
 from nettracer3d import nettracer_gui
+
 nettracer_gui.run_gui()
+
 #End
 
 This gui is built from the PyQt6 package and therefore may not function on dockers or virtual envs that are unable to support PyQt6 displays. More advanced documentation (especially for the GUI) is coming down the line, but for now please see: https://drive.google.com/drive/folders/1fTkz3n4LN9_VxKRKC8lVQSlrz_wq0bVn?usp=drive_link
The remaining files listed above with +0 -0 (LICENSE, setup.cfg, and the other source modules) are unchanged between 0.3.1 and 0.3.3.