small-fish-gui 1.2.0__py3-none-any.whl → 1.3.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
small_fish_gui/README.md CHANGED
@@ -15,7 +15,10 @@ Time stacks are not yet supported.
15
15
  - Signal to noise analysis
16
16
  - multichannel colocalisation
17
17
 
18
+ <img src="https://github.com/2Echoes/small_fish_gui/blob/main/Segmentation%20example.jpg" width="500" title="Cell segmentation with Cellpose" alt="Cell segmentation - cellpose">| <img src="https://github.com/2Echoes/small_fish_gui/blob/main/napari_detection_example.png" width="500" title="Spot detection; clustering visualisation on Napari" alt="detection; Napari example">
19
+
18
20
  ## Installation
21
+ If you don't have a Python installation yet, I would recommend the [miniconda distribution](https://docs.anaconda.com/free/miniconda/miniconda-other-installer-links/), but any distribution should work.
19
22
 
20
23
  It is highly recommended to create a dedicated [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) or [virtual](https://docs.python.org/3.6/library/venv.html) environment in which to install small fish.
21
24
 
@@ -46,7 +49,46 @@ python -m small_fish_gui
46
49
 
47
50
  ## Cellpose configuration
48
51
 
49
- If you want to train your own cellpose model or set-up your GPU you can follow the official cellpose documentation, just remember to **first activate your small_fish environnement**.
52
+ For the following steps, first activate your small fish environment:
53
+
54
+ ```bash
55
+ conda activate small_fish
56
+ ```
57
+ ### Setting up your GPU for cellpose (Windows / Linux)
58
+ These instructions describe how I installed CUDA and GPU cellpose on the machines I tested. Unfortunately, driver installations don't always run smoothly; if you run into any difficulties, please have a look at the *GPU version (CUDA) on Windows or Linux* section of the [cellpose documentation](https://github.com/MouseLand/cellpose) for assistance.
59
+
60
+ The first step is to check that your GPU is CUDA compatible, which it should be if it is an NVIDIA card.
61
+ Then install CUDA from the [NVIDIA archives](https://developer.nvidia.com/cuda-toolkit-archive); any 11.x version should work, but I recommend version 11.8.
62
+
63
+ Finally, we need to make a few modifications to your small fish environment:
64
+
65
+ Remove the CPU version of torch:
66
+
67
+ ```bash
68
+ pip uninstall torch
69
+ ```
70
+ Then install pytorch and cudatoolkit:
71
+
72
+ ```bash
73
+ conda install pytorch==1.12.0 cudatoolkit=11.3 -c pytorch
74
+ ```
75
+ If the installation succeeded, the next time you run segmentation with small fish you should see the "GPU is ON" notice upon entering the segmentation parameters.
76
+ If you run into any problems, I would recommend following the official cellpose instructions mentioned above.
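A quick way to confirm this from Python (a sketch, assuming torch and cellpose are importable inside the activated environment; `use_gpu` is the same helper small fish relies on):

```python
import torch
from cellpose.core import use_gpu

print(torch.cuda.is_available())  # True once PyTorch can reach the CUDA driver
print(use_gpu())                  # True if cellpose will actually run on the GPU
```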
77
+
78
+
79
+ ### Training cellpose
80
+ If you want to train your own cellpose model or import a custom model from an external source, I recommend doing so from the cellpose GUI.
81
+
82
+ To install the GUI, run:
83
+
84
+ ```bash
85
+ pip install cellpose[gui]
86
+ ```
87
+ Then, to launch cellpose:
88
+ ```bash
89
+ cellpose
90
+ ```
91
+ Note that for training it is recommended to first set up your GPU, as training can otherwise take quite a long time. To get started with training your own models, you can watch the [video](https://www.youtube.com/watch?v=5qANHWoubZU) from the cellpose authors.
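If you later want to call a GUI-trained model from a script instead of the GUI, something along these lines should work (the model path is hypothetical, and the `eval` keywords can vary slightly between cellpose versions):

```python
import numpy as np
from cellpose import models

image = np.random.rand(256, 256)  # stand-in for a single nucleus or cytoplasm channel
model = models.CellposeModel(pretrained_model="/path/to/my_custom_model")  # hypothetical path
masks, flows, styles = model.eval(image, channels=[0, 0], diameter=30)
```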
50
92
 
51
93
  ## Development
52
94
 
Binary file
small_fish_gui/__init__.py CHANGED
@@ -38,4 +38,4 @@ ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
38
38
  SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
39
39
 
40
40
  """
41
- __version__ = "1.2.0"
41
+ __version__ = "1.3.0"
small_fish_gui/gui/prompts.py CHANGED
@@ -2,7 +2,7 @@ import PySimpleGUI as sg
2
2
  import pandas as pd
3
3
  import os
4
4
  import numpy as np
5
- from .layout import path_layout, parameters_layout, bool_layout, tuple_layout, combo_layout, add_header
5
+ from .layout import path_layout, parameters_layout, bool_layout, tuple_layout, combo_layout, add_header, path_layout
6
6
  from ..interface import open_image, check_format, FormatError
7
7
  from .help_module import ask_help
8
8
 
@@ -113,7 +113,7 @@ def output_image_prompt(filename) :
113
113
  relaunch = False
114
114
  layout = path_layout(['folder'], look_for_dir= True, header= "Output parameters :")
115
115
  layout += parameters_layout(["filename"], default_values= [filename + "_quantification"], size=25)
116
- layout += bool_layout(['Excel', 'Feather'])
116
+ layout += bool_layout(['csv','Excel', 'Feather'])
117
117
  layout.append([sg.Button('Cancel')])
118
118
 
119
119
  event,values= prompt(layout)
@@ -204,6 +204,23 @@ def detection_parameters_promt(is_3D_stack, is_multichannel, do_dense_region_dec
204
204
  default_segmentation = [default_dict.setdefault('nucleus channel signal', default_dict.setdefault('nucleus channel',0))]
205
205
  layout += parameters_layout(['nucleus channel signal'], default_values=default_segmentation) + [[sg.Text(" channel from which signal will be measured for nucleus features.")]]
206
206
 
207
+ layout += bool_layout(['Interactive threshold selector'], preset=[False])
208
+ layout += path_layout(
209
+ keys=['spots_extraction_folder'],
210
+ look_for_dir=True,
211
+ header= "Individual spot extraction",
212
+ preset= default_dict.setdefault('spots_extraction_folder', '')
213
+ )
214
+ layout += parameters_layout(
215
+ parameters=['spots_filename'],
216
+ default_values=[default_dict.setdefault('spots_filename','spots_extraction')],
217
+ size= 13
218
+ )
219
+ layout += bool_layout(
220
+ parameters= ['do_spots_csv', 'do_spots_excel', 'do_spots_feather'],
221
+ preset= [default_dict.setdefault('do_spots_csv',False), default_dict.setdefault('do_spots_excel',False),default_dict.setdefault('do_spots_feather',False)]
222
+ )
223
+
207
224
  event, values = prompt_with_help(layout, help='detection')
208
225
  if event == 'Cancel' : return None
209
226
  if is_3D_stack : values['dim'] = 3
small_fish_gui/interface/output.py CHANGED
@@ -2,7 +2,7 @@ import os
2
2
  import pandas as pd
3
3
  from bigfish.stack import check_parameter
4
4
 
5
-
5
+ MAX_LEN_EXCEL = 1048576
6
6
 
7
7
  def _cast_spot_to_tuple(spot) :
8
8
  return tuple([coord for coord in spot])
@@ -10,11 +10,11 @@ def _cast_spot_to_tuple(spot) :
10
10
  def _cast_spots_to_tuple(spots) :
11
11
  return tuple(list(map(_cast_spot_to_tuple, spots)))
12
12
 
13
- def write_results(dataframe: pd.DataFrame, path:str, filename:str, do_excel= True, do_feather= False) :
13
+ def write_results(dataframe: pd.DataFrame, path:str, filename:str, do_excel= True, do_feather= False, do_csv=False) :
14
14
  check_parameter(dataframe= pd.DataFrame, path= str, filename = str, do_excel = bool, do_feather = bool)
15
15
 
16
16
  if len(dataframe) == 0 : return True
17
- if not do_excel and not do_feather :
17
+ if not do_excel and not do_feather and not do_csv :
18
18
  return False
19
19
 
20
20
  if not path.endswith('/') : path +='/'
@@ -23,7 +23,7 @@ def write_results(dataframe: pd.DataFrame, path:str, filename:str, do_excel= Tru
23
23
 
24
24
  new_filename = filename
25
25
  i= 1
26
- while new_filename + '.xlsx' in os.listdir(path) or new_filename + '.feather' in os.listdir(path) :
26
+ while new_filename + '.xlsx' in os.listdir(path) or new_filename + '.feather' in os.listdir(path) or new_filename + '.csv' in os.listdir(path) :
27
27
  new_filename = filename + '_{0}'.format(i)
28
28
  i+=1
29
29
 
@@ -36,7 +36,14 @@ def write_results(dataframe: pd.DataFrame, path:str, filename:str, do_excel= Tru
36
36
  if 'clusters' in dataframe.columns :
37
37
  dataframe = dataframe.drop(['clusters'], axis= 1)
38
38
 
39
- if do_excel : dataframe.reset_index(drop=True).to_excel(path + filename + '.xlsx')
40
- if do_feather : dataframe.reset_index(drop=True).to_feather(path + filename + '.feather')
39
+ if do_feather : dataframe.reset_index(drop=True).to_feather(path + new_filename + '.feather')
40
+ if do_csv : dataframe.reset_index(drop=True).to_csv(path + new_filename + '.csv', sep=";")
41
+ if do_excel :
42
+ if len(dataframe) < MAX_LEN_EXCEL :
43
+ dataframe.reset_index(drop=True).to_excel(path + new_filename + '.xlsx')
44
+ else :
45
+ print("Error : Table too big to be saved in excel format.")
46
+ return False
47
+
41
48
 
42
49
  return True
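For reference, the extended `write_results` call can be exercised like this minimal sketch (made-up DataFrame; it assumes the target folder already exists, since the function only normalises the trailing slash):

```python
import pandas as pd
from small_fish_gui.interface.output import write_results

df = pd.DataFrame({"acquisition_id": [0, 0], "spot_number": [120, 98]})
# New in 1.3.0: do_csv writes a semicolon-separated .csv, and the Excel branch
# refuses tables longer than MAX_LEN_EXCEL rows (returning False).
write_results(df, path="/tmp/results", filename="demo",
              do_excel=False, do_feather=False, do_csv=True)
```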
small_fish_gui/pipeline/_detection_visualisation.py → small_fish_gui/pipeline/_napari_wrapper.py RENAMED
@@ -2,7 +2,6 @@
2
2
  Contains Napari wrappers to visualise and correct spots/clusters.
3
3
  """
4
4
 
5
-
6
5
  import numpy as np
7
6
  import scipy.ndimage as ndi
8
7
  import napari
@@ -96,15 +95,15 @@ def correct_spots(image, spots, voxel_size= (1,1,1), clusters= None, cluster_siz
96
95
  scale = compute_anisotropy_coef(voxel_size)
97
96
  try :
98
97
  Viewer = napari.Viewer(ndisplay=2, title= 'Spot correction', axis_labels=['z','y','x'], show= False)
99
- Viewer.add_image(image, scale=scale, name= "rna signal", blending= 'additive', colormap='red')
98
+ Viewer.add_image(image, scale=scale, name= "rna signal", blending= 'additive', colormap='red', contrast_limits=[image.min(), image.max()])
100
99
  other_colors = ['green', 'blue', 'gray', 'cyan', 'bop orange', 'bop purple'] * ((len(other_images)-1 // 7) + 1)
101
100
  for im, color in zip(other_images, other_colors) :
102
- Viewer.add_image(im, scale=scale, blending='additive', visible=False, colormap=color)
101
+ Viewer.add_image(im, scale=scale, blending='additive', visible=False, colormap=color, contrast_limits=[im.min(), im.max()])
103
102
  layer_offset = len(other_images)
104
103
 
105
104
  Viewer.add_points(spots, size = 5, scale=scale, face_color= 'green', opacity= 1, symbol= 'ring', name= 'single spots') # spots
106
105
  if type(clusters) != type(None) : Viewer.add_points(clusters[:,:dim], size = 10, scale=scale, face_color= 'blue', opacity= 0.7, symbol= 'diamond', name= 'foci', features= {"spot_number" : clusters[:,dim], "id" : clusters[:,dim+1]}, feature_defaults= {"spot_number" : 0, "id" : -1}) # cluster
107
- if type(cell_label) != type(None) and np.array_equal(nucleus_label, cell_label) : Viewer.add_labels(cell_label, scale=scale, opacity= 0.2, blending= 'additive')
106
+ if type(cell_label) != type(None) and not np.array_equal(nucleus_label, cell_label) : Viewer.add_labels(cell_label, scale=scale, opacity= 0.2, blending= 'additive')
108
107
  if type(nucleus_label) != type(None) : Viewer.add_labels(nucleus_label, scale=scale, opacity= 0.2, blending= 'additive')
109
108
 
110
109
  #prepare cluster update
@@ -136,4 +135,94 @@ def correct_spots(image, spots, voxel_size= (1,1,1), clusters= None, cluster_siz
136
135
 
137
136
  return new_spots, new_clusters
138
137
 
138
+ def show_segmentation(
139
+ nuc_image : np.ndarray,
140
+ nuc_label : np.ndarray,
141
+ cyto_image : np.ndarray = None,
142
+ cyto_label : np.ndarray = None,
143
+ ) :
144
+ dim = nuc_image.ndim
145
+
146
+ if type(cyto_image) != type(None) :
147
+ if cyto_image.ndim != nuc_image.ndim : raise ValueError("Cyto and Nuc dimensions missmatch.")
148
+ if type(cyto_label) == type(None) : raise ValueError("If cyto image is passed cyto label must be passed too.")
149
+
150
+ if dim == 3 and nuc_label.ndim == 2 :
151
+ nuc_label = np.repeat(
152
+ nuc_label[np.newaxis],
153
+ repeats= len(nuc_image),
154
+ axis=0
155
+ )
156
+ if type(cyto_label) != type(None) :
157
+
158
+ if type(cyto_image) == type(None) : raise ValueError("If cyto label is passed cyto image must be passed too.")
159
+
160
+ if dim == 3 and cyto_label.ndim == 2 :
161
+ cyto_label = np.repeat(
162
+ cyto_label[np.newaxis],
163
+ repeats= len(nuc_image),
164
+ axis=0
165
+ )
166
+
167
+ #Init Napari viewer
168
+ Viewer = napari.Viewer(ndisplay=2, title= 'Show segmentation', axis_labels=['z','y','x'] if dim == 3 else ['y', 'x'], show= False)
169
+
170
+ # Adding channels
171
+ Viewer.add_image(nuc_image, name= "nucleus signal", blending= 'additive', colormap='blue', contrast_limits=[nuc_image.min(), nuc_image.max()])
172
+ Viewer.add_labels(nuc_label, opacity= 0.5, blending= 'additive')
173
+
174
+ #Adding labels
175
+ if type(cyto_image) != type(None) : Viewer.add_image(cyto_image, name= "cytoplasm signal", blending= 'additive', colormap='red', contrast_limits=[cyto_image.min(), cyto_image.max()])
176
+ if type(cyto_label) != type(None) : Viewer.add_labels(cyto_label, opacity= 0.4, blending= 'additive')
177
+
178
+ #Launch Napari
179
+ Viewer.show(block=False)
180
+ napari.run()
181
+
182
+ new_nuc_label = Viewer.layers[1].data
183
+ if type(cyto_label) != type(None) : new_cyto_label = Viewer.layers[3].data
184
+
185
+ return new_nuc_label, new_cyto_label
186
+
139
187
 
188
+
189
+ def threshold_selection(
190
+ image : np.ndarray,
191
+ filtered_image : np.ndarray,
192
+ threshold_slider,
193
+ voxel_size : tuple,
194
+ ) :
195
+
196
+ """
197
+ To view code for spot selection please have a look at magicgui instance created with `detection._create_threshold_slider` which is then passed to this napari wrapper as 'threshold_slider' argument.
198
+ """
199
+
200
+
201
+ Viewer = napari.Viewer(title= "Small fish - Threshold selector", ndisplay=2, show=True)
202
+ Viewer.add_image(
203
+ data= image,
204
+ contrast_limits= [image.min(), image.max()],
205
+ name= "raw signal",
206
+ colormap= 'green',
207
+ scale= voxel_size,
208
+ blending= 'additive'
209
+ )
210
+ Viewer.add_image(
211
+ data= filtered_image,
212
+ contrast_limits= [filtered_image.min(), filtered_image.max()],
213
+ colormap= 'gray',
214
+ scale=voxel_size,
215
+ blending='additive'
216
+ )
217
+
218
+ Viewer.window.add_dock_widget(threshold_slider, name='threshold_selector')
219
+ threshold_slider() #First occurence with auto or entered threshold.
220
+ napari.run()
221
+
222
+ spots = Viewer.layers[-1].data.astype(int)
223
+ if len(spots) == 0 :
224
+ threshold = filtered_image.max()
225
+ else :
226
+ threshold = Viewer.layers[-1].properties.get('threshold')[0]
227
+
228
+ return spots, threshold
small_fish_gui/pipeline/_preprocess.py CHANGED
@@ -1,5 +1,5 @@
1
1
  import numpy as np
2
- import pandas as pd
2
+ import os
3
3
  import PySimpleGUI as sg
4
4
  from ..gui import _error_popup, _warning_popup, parameters_layout, add_header, prompt, prompt_with_help
5
5
 
@@ -253,6 +253,11 @@ def check_integrity(values: dict, do_dense_region_deconvolution, multichannel,se
253
253
  raise ParameterInputError("Channel to compute is out of range for image.\nPlease select from {0}".format(list(range(ch_len))))
254
254
  values['channel to compute'] = ch
255
255
 
256
+ #Spot extraction
257
+ if not os.path.isdir(values['spots_extraction_folder']) and values['spots_extraction_folder'] != '':
258
+ raise ParameterInputError("Incorrect spot extraction folder.")
259
+
260
+
256
261
  return values
257
262
 
258
263
 
small_fish_gui/pipeline/_segmentation.py CHANGED
@@ -6,6 +6,7 @@ from cellpose.core import use_gpu
6
6
  from skimage.measure import label
7
7
  from ..gui.layout import _segmentation_layout
8
8
  from ..gui import prompt, prompt_with_help, ask_cancel_segmentation
9
+ from ._napari_wrapper import show_segmentation as napari_show_segmentation
9
10
 
10
11
  import cellpose.models as models
11
12
  import numpy as np
@@ -152,19 +153,18 @@ def launch_segmentation(image: np.ndarray, user_parameters: dict) :
152
153
  )
153
154
 
154
155
  finally : window.close()
155
- if show_segmentation or type(output_path) != type(None) :
156
- nuc_proj = image[nucleus_channel]
157
- im_proj = image[cytoplasm_channel]
158
- if im_proj.ndim == 3 :
159
- im_proj = stack.maximum_projection(im_proj)
160
- if nuc_proj.ndim == 3 :
161
- nuc_proj = stack.maximum_projection(nuc_proj)
162
- plot.plot_segmentation_boundary(nuc_proj, cytoplasm_label, nucleus_label, boundary_size=2, contrast=True, show=show_segmentation, path_output=None, title= "Nucleus segmentation (blue)", remove_frame=False,)
163
- if type(nuc_path) != type(None) : plot.plot_segmentation_boundary(nuc_proj, cytoplasm_label, nucleus_label, boundary_size=2, contrast=True, show=False, path_output=nuc_path, title= "Nucleus segmentation (blue)", remove_frame=True,)
164
- if not do_only_nuc :
165
- plot.plot_segmentation_boundary(im_proj, cytoplasm_label, nucleus_label, boundary_size=2, contrast=True, show=show_segmentation, path_output=cyto_path, title="Cytoplasm Segmentation (red)", remove_frame=False)
166
- if type(cyto_path) != type(None) : plot.plot_segmentation_boundary(im_proj, cytoplasm_label, nucleus_label, boundary_size=2, contrast=True, show=False, path_output=cyto_path, title="Cytoplasm Segmentation (red)", remove_frame=True)
156
+
167
157
  if show_segmentation :
158
+ nucleus_label, cytoplasm_label = napari_show_segmentation(
159
+ nuc_image=image[nucleus_channel],
160
+ nuc_label= nucleus_label,
161
+ cyto_image=image[cytoplasm_channel],
162
+ cyto_label=cytoplasm_label,
163
+ )
164
+
165
+ if nucleus_label.ndim == 3 : nucleus_label = np.max(nucleus_label, axis=0)
166
+ if cytoplasm_label.ndim == 3 : cytoplasm_label = np.max(cytoplasm_label, axis=0)
167
+
168
168
  layout = [
169
169
  [sg.Text("Proceed with current segmentation ?")],
170
170
  [sg.Button("Yes"), sg.Button("No")]
@@ -174,6 +174,20 @@ def launch_segmentation(image: np.ndarray, user_parameters: dict) :
174
174
  if event == "No" :
175
175
  continue
176
176
 
177
+ if type(output_path) != type(None) :
178
+ nuc_proj = image[nucleus_channel]
179
+ im_proj = image[cytoplasm_channel]
180
+ if im_proj.ndim == 3 :
181
+ im_proj = stack.maximum_projection(im_proj)
182
+ if nuc_proj.ndim == 3 :
183
+ nuc_proj = stack.maximum_projection(nuc_proj)
184
+ plot.plot_segmentation_boundary(nuc_proj, cytoplasm_label, nucleus_label, boundary_size=2, contrast=True, show=False, path_output=nuc_path, title= "Nucleus segmentation (blue)", remove_frame=True,)
185
+ if not do_only_nuc :
186
+ plot.plot_segmentation_boundary(im_proj, cytoplasm_label, nucleus_label, boundary_size=2, contrast=True, show=False, path_output=cyto_path, title="Cytoplasm Segmentation (red)", remove_frame=True)
187
+
188
+
189
+
190
+
177
191
  if cytoplasm_label.max() == 0 : #No cell segmented
178
192
  layout = [
179
193
  [sg.Text("No cell segmented. Proceed anyway ?")],
small_fish_gui/pipeline/actions.py CHANGED
@@ -4,13 +4,14 @@ from ._preprocess import map_channels, prepare_image_detection, reorder_shape, r
4
4
  from .detection import ask_input_parameters, initiate_detection, launch_detection, launch_features_computation, get_nucleus_signal
5
5
  from ._segmentation import launch_segmentation
6
6
  from ._colocalisation import initiate_colocalisation, launch_colocalisation
7
+ from .spots import launch_spots_extraction
7
8
 
8
9
  import pandas as pd
9
10
  import PySimpleGUI as sg
10
11
 
11
12
  def add_detection(user_parameters, segmentation_done, acquisition_id, cytoplasm_label, nucleus_label) :
12
13
  """
13
- #TODO : list all keys added to user_parameters when returned
14
+ #TODO : list all keys added to user_parameters when returned.
14
15
  """
15
16
 
16
17
  new_results_df = pd.DataFrame()
@@ -84,7 +85,20 @@ def add_detection(user_parameters, segmentation_done, acquisition_id, cytoplasm_
84
85
  if ask_detection_confirmation(user_parameters.get('threshold')) : break
85
86
  else :
86
87
  break
87
-
88
+
89
+ if user_parameters['spots_extraction_folder'] != '' and type(user_parameters['spots_extraction_folder']) != type(None) :
90
+ if user_parameters['spots_filename'] != '' and type(user_parameters['spots_filename']) != type(None) :
91
+ if any((user_parameters['do_spots_excel'], user_parameters['do_spots_csv'], user_parameters['do_spots_feather'])) :
92
+ print((user_parameters['do_spots_excel'], user_parameters['do_spots_csv'], user_parameters['do_spots_feather']))
93
+ launch_spots_extraction(
94
+ acquisition_id=acquisition_id,
95
+ user_parameters=user_parameters,
96
+ image=image,
97
+ spots=spots,
98
+ nucleus_label= nucleus_label,
99
+ cell_label= cytoplasm_label,
100
+ )
101
+
88
102
  #Features computation
89
103
  new_results_df, new_cell_results_df = launch_features_computation(
90
104
  acquisition_id=acquisition_id,
@@ -108,9 +122,10 @@ def save_results(result_df, cell_result_df, coloc_df) :
108
122
  filename = dic['filename']
109
123
  do_excel = dic['Excel']
110
124
  do_feather = dic['Feather']
111
- sucess1 = write_results(result_df, path= path, filename=filename, do_excel= do_excel, do_feather= do_feather)
112
- sucess2 = write_results(cell_result_df, path= path, filename=filename + '_cell_result', do_excel= do_excel, do_feather= do_feather)
113
- sucess3 = write_results(coloc_df, path= path, filename=filename + '_coloc_result', do_excel= do_excel, do_feather= do_feather)
125
+ do_csv = dic['csv']
126
+ sucess1 = write_results(result_df, path= path, filename=filename, do_excel= do_excel, do_feather= do_feather, do_csv=do_csv)
127
+ sucess2 = write_results(cell_result_df, path= path, filename=filename + '_cell_result', do_excel= do_excel, do_feather= do_feather, do_csv=do_csv)
128
+ sucess3 = write_results(coloc_df, path= path, filename=filename + '_coloc_result', do_excel= do_excel, do_feather= do_feather, do_csv=do_csv)
114
129
  if sucess1 and sucess2 and sucess3 : sg.popup("Sucessfully saved at {0}.".format(path))
115
130
 
116
131
  else :
@@ -141,20 +156,19 @@ def delete_acquisitions(selected_acquisitions : pd.DataFrame,
141
156
  sg.popup("Please select the acquisitions you would like to delete.")
142
157
  else :
143
158
  acquisition_ids = list(result_df.iloc[list(selected_acquisitions)]['acquisition_id'])
144
- print("Acquisitions to delete : ", acquisition_ids)
145
159
  result_drop_idx = result_df[result_df['acquisition_id'].isin(acquisition_ids)].index
146
- print("{0} acquisitions to delete.".format(len(result_drop_idx)))
160
+ print("{0} acquisitions deleted.".format(len(result_drop_idx)))
147
161
 
148
162
  if len(cell_result_df) > 0 :
149
163
  cell_result_df_drop_idx = cell_result_df[cell_result_df['acquisition_id'].isin(acquisition_ids)].index
150
- print("{0} cells to delete.".format(len(cell_result_df_drop_idx)))
164
+ print("{0} cells deleted.".format(len(cell_result_df_drop_idx)))
151
165
  cell_result_df = cell_result_df.drop(cell_result_df_drop_idx, axis=0)
152
166
 
153
167
  if len(coloc_df) > 0 :
154
168
  coloc_df_drop_idx = coloc_df[(coloc_df["acquisition_id_1"].isin(acquisition_ids)) | (coloc_df['acquisition_id_2'].isin(acquisition_ids))].index
155
- print("{0} coloc measurement to delete.".format(len(coloc_df_drop_idx)))
169
+ print("{0} coloc measurement deleted.".format(len(coloc_df_drop_idx)))
156
170
  coloc_df = coloc_df.drop(coloc_df_drop_idx, axis=0)
157
171
 
158
172
  result_df = result_df.drop(result_drop_idx, axis=0)
159
173
 
160
- return result_df, cell_result_df, coloc_df
174
+ return result_df, cell_result_df, coloc_df
small_fish_gui/pipeline/detection.py CHANGED
@@ -5,9 +5,13 @@ Contains code to handle detection as well as bigfish wrappers related to spot de
5
5
  from ._preprocess import ParameterInputError
6
6
  from ._preprocess import check_integrity, convert_parameters_types
7
7
  from ._signaltonoise import compute_snr_spots
8
- from ._detection_visualisation import correct_spots, _update_clusters
8
+ from ._napari_wrapper import correct_spots, _update_clusters, threshold_selection
9
9
  from ..gui import add_default_loading
10
10
  from ..gui import detection_parameters_promt, input_image_prompt
11
+ from .spots import compute_Spots
12
+ from magicgui import magicgui
13
+ from napari.layers import Image, Points
14
+ from napari.types import LayerDataTuple
11
15
 
12
16
  import numpy as np
13
17
  import pandas as pd
@@ -20,6 +24,7 @@ import bigfish.multistack as multistack
20
24
  import bigfish.classification as classification
21
25
  from bigfish.detection.spot_detection import get_object_radius_pixel
22
26
  from types import GeneratorType
27
+ from skimage.measure import regionprops
23
28
 
24
29
 
25
30
  def ask_input_parameters(ask_for_segmentation=True) :
@@ -284,7 +289,7 @@ def initiate_detection(user_parameters, segmentation_done, map, shape) :
284
289
  return user_parameters
285
290
 
286
291
  @add_default_loading
287
- def _launch_detection(image, image_input_values: dict, time_stack_gen=None) :
292
+ def _launch_detection(image, image_input_values: dict) :
288
293
 
289
294
  """
290
295
  Performs spots detection
@@ -297,25 +302,48 @@ def _launch_detection(image, image_input_values: dict, time_stack_gen=None) :
297
302
  spot_size = image_input_values.get('spot_size')
298
303
  log_kernel_size = image_input_values.get('log_kernel_size')
299
304
  minimum_distance = image_input_values.get('minimum_distance')
305
+ threshold_user_selection = image_input_values.get('Interactive threshold selector')
300
306
 
301
- if type(threshold) == type(None) :
302
- #detection
303
- if type(time_stack_gen) != type(None) :
304
- image_sample = time_stack_gen()
305
- else :
306
- image_sample = image
307
-
308
- threshold = compute_auto_threshold(image_sample, voxel_size=voxel_size, spot_radius=spot_size) * threshold_penalty
307
+ if type(threshold) == type(None) :
308
+ threshold = compute_auto_threshold(image, voxel_size=voxel_size, spot_radius=spot_size, log_kernel_size=log_kernel_size, minimum_distance=minimum_distance) * threshold_penalty
309
309
 
310
- spots = detection.detect_spots(
311
- images= image,
312
- threshold=threshold,
313
- return_threshold= False,
310
+ filtered_image = _apply_log_filter(
311
+ image=image,
312
+ voxel_size=voxel_size,
313
+ spot_radius=spot_size,
314
+ log_kernel_size = log_kernel_size,
315
+ )
316
+
317
+ local_maxima = _local_maxima_mask(
318
+ image_filtered=filtered_image,
314
319
  voxel_size=voxel_size,
315
- spot_radius= spot_size,
316
- log_kernel_size=log_kernel_size,
320
+ spot_radius=spot_size,
317
321
  minimum_distance=minimum_distance
322
+ )
323
+
324
+ if threshold_user_selection :
325
+
326
+ threshold_slider = _create_threshold_slider(
327
+ logfiltered_image=filtered_image,
328
+ local_maxima=local_maxima,
329
+ default=threshold,
330
+ min=filtered_image[local_maxima].min(),
331
+ max=filtered_image[local_maxima].max(),
332
+ voxel_size=voxel_size
318
333
  )
334
+
335
+ spots, threshold = threshold_selection(
336
+ image=image,
337
+ filtered_image=filtered_image,
338
+ threshold_slider=threshold_slider,
339
+ voxel_size=voxel_size
340
+ )
341
+ else :
342
+ spots = detection.spots_thresholding(
343
+ image=filtered_image,
344
+ mask_local_max=local_maxima,
345
+ threshold=threshold
346
+ )[0]
319
347
 
320
348
  return spots, threshold
321
349
 
@@ -449,6 +477,7 @@ def launch_cell_extraction(acquisition_id, spots, clusters, image, nucleus_signa
449
477
  #Nucleus features : area is computed in bigfish
450
478
  features_names += ['nucleus_mean_signal', 'nucleus_median_signal', 'nucleus_max_signal', 'nucleus_min_signal']
451
479
  features_names += ['snr_mean', 'snr_median', 'snr_std']
480
+ features_names += ['cell_center_coord','foci_number','foci_in_nuc_number']
452
481
 
453
482
  result_frame = pd.DataFrame()
454
483
 
@@ -485,6 +514,30 @@ def launch_cell_extraction(acquisition_id, spots, clusters, image, nucleus_signa
485
514
  compute_topography=True
486
515
  )
487
516
 
517
+ #center of cell coordinates
518
+ local_cell_center = regionprops(
519
+ label_image=cell_mask.astype(int)
520
+ )[0]['centroid']
521
+ cell_center = (local_cell_center[0] + min_y, local_cell_center[1] + min_x)
522
+
523
+ #foci in nucleus
524
+ if type(foci_coords) != type(None) :
525
+ if len(foci_coords) == 0 :
526
+ foci_number = 0
527
+ foci_in_nuc_number = 0
528
+ else :
529
+ foci_number = len(foci_coords)
530
+ foci_index = list(zip(*foci_coords))
531
+ if len(foci_index) == 5 :
532
+ foci_index = foci_index[1:3]
533
+ elif len(foci_index) == 4 :
534
+ foci_index = foci_index[:2]
535
+ else : raise AssertionError("Impossible number of dim for foci : ", len(foci_index))
536
+ foci_in_nuc_number = nuc_mask[tuple(foci_index)].astype(bool).sum()
537
+ else :
538
+ foci_number = np.NaN
539
+ foci_in_nuc_number = np.NaN
540
+
488
541
  #Signal to noise
489
542
  snr_dict = _compute_cell_snr(
490
543
  image,
@@ -501,6 +554,7 @@ def launch_cell_extraction(acquisition_id, spots, clusters, image, nucleus_signa
501
554
  features = list(features)
502
555
  features += [np.mean(nuc_signal), np.median(nuc_signal), np.max(nuc_signal), np.min(nuc_signal)]
503
556
  features += [snr_mean, snr_median, snr_std]
557
+ features += [cell_center, foci_number, foci_in_nuc_number]
504
558
 
505
559
  features = [acquisition_id, cell_id, cell_bbox] + features
506
560
 
@@ -686,4 +740,83 @@ def get_nucleus_signal(image, other_images, user_parameters) :
686
740
 
687
741
  return nucleus_signal
688
742
  else :
689
- return image
743
+ return image
744
+
745
+ def _create_threshold_slider(
746
+ logfiltered_image : np.ndarray,
747
+ local_maxima : np.ndarray,
748
+ default : int,
749
+ min : int,
750
+ max : int,
751
+ voxel_size
752
+ ) :
753
+
754
+ if isinstance(default, float) : default = round(default)
755
+
756
+ @magicgui(
757
+ threshold={'widget_type' : 'Slider', 'value' : default, 'min' : min, 'max' : max},
758
+ auto_call=True
759
+ )
760
+ def threshold_slider(threshold: int) -> LayerDataTuple:
761
+ spots = detection.spots_thresholding(
762
+ image=logfiltered_image,
763
+ mask_local_max=local_maxima,
764
+ threshold=threshold
765
+ )[0]
766
+ layer_args = {
767
+ 'size': 7,
768
+ 'scale' : voxel_size,
769
+ 'face_color' : 'transparent',
770
+ 'edge_color' : 'blue',
771
+ 'symbol' : 'ring',
772
+ 'opacity' : 0.7,
773
+ 'blending' : 'additive',
774
+ 'name': 'single spots',
775
+ 'features' : {'threshold' : threshold}
776
+ }
777
+ return (spots, layer_args , 'points')
778
+ return threshold_slider
779
+
780
+ def _apply_log_filter(
781
+ image: np.ndarray,
782
+ voxel_size : tuple,
783
+ spot_radius : tuple,
784
+ log_kernel_size,
785
+
786
+ ) :
787
+ """
788
+ Apply spot detection steps until local maxima step (just before final threshold).
789
+ Return filtered image.
790
+ """
791
+
792
+ ndim = image.ndim
793
+
794
+ if type(log_kernel_size) == type(None) :
795
+ log_kernel_size = get_object_radius_pixel(
796
+ voxel_size_nm=voxel_size,
797
+ object_radius_nm=spot_radius,
798
+ ndim=ndim)
799
+
800
+
801
+ image_filtered = stack.log_filter(image, log_kernel_size)
802
+
803
+ return image_filtered
804
+
805
+ def _local_maxima_mask(
806
+ image_filtered: np.ndarray,
807
+ voxel_size : tuple,
808
+ spot_radius : tuple,
809
+ minimum_distance
810
+
811
+ ) :
812
+
813
+ ndim = image_filtered.ndim
814
+
815
+ if type(minimum_distance) == type(None) :
816
+ minimum_distance = get_object_radius_pixel(
817
+ voxel_size_nm=voxel_size,
818
+ object_radius_nm=spot_radius,
819
+ ndim=ndim)
820
+ mask_local_max = detection.local_maximum_detection(image_filtered, minimum_distance)
821
+
822
+ return mask_local_max.astype(bool)
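The helpers above split bigfish's `detect_spots` into its three stages (LoG filter, local-maxima mask, thresholding) so the interactive slider can re-threshold without re-filtering. Outside the GUI, the same sequence looks roughly like this sketch on synthetic data (voxel size, spot radius and threshold are arbitrary illustrative values):

```python
import numpy as np
import bigfish.stack as stack
import bigfish.detection as detection
from bigfish.detection.spot_detection import get_object_radius_pixel

image = np.random.randint(0, 2**16, (10, 256, 256), dtype=np.uint16)  # fake (z, y, x) stack
voxel_size = (300, 103, 103)   # nm
spot_radius = (350, 150, 150)  # nm

kernel = get_object_radius_pixel(voxel_size_nm=voxel_size, object_radius_nm=spot_radius, ndim=3)
filtered = stack.log_filter(image, kernel)                       # what _apply_log_filter does
local_max = detection.local_maximum_detection(filtered, kernel)  # what _local_maxima_mask does
spots, _ = detection.spots_thresholding(filtered, local_max, threshold=400)
```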
small_fish_gui/pipeline/spots.py ADDED
@@ -0,0 +1,71 @@
1
+ """
2
+ Sub-module to handle individual spot extraction.
3
+
4
+ """
5
+
6
+ import numpy as np
7
+ import pandas as pd
8
+ from ..interface.output import write_results
9
+
10
+ def launch_spots_extraction(
11
+ acquisition_id,
12
+ user_parameters,
13
+ image,
14
+ spots,
15
+ nucleus_label,
16
+ cell_label,
17
+ ) :
18
+ Spots = compute_Spots(
19
+ acquisition_id=acquisition_id,
20
+ image=image,
21
+ spots=spots,
22
+ nucleus_label=nucleus_label,
23
+ cell_label=cell_label,
24
+ )
25
+
26
+ did_output = write_results(
27
+ Spots,
28
+ path= user_parameters['spots_extraction_folder'],
29
+ filename= user_parameters['spots_filename'],
30
+ do_excel=user_parameters['do_spots_excel'],
31
+ do_csv=user_parameters['do_spots_csv'],
32
+ do_feather=user_parameters['do_spots_feather'],
33
+ )
34
+
35
+ if did_output : print("Individual spots extracted at {0}".format(user_parameters['spots_extraction_folder']))
36
+
37
+ def compute_Spots(
38
+ acquisition_id : int,
39
+ image : np.ndarray,
40
+ spots : np.ndarray,
41
+ nucleus_label = None,
42
+ cell_label = None,
43
+ ) :
44
+
45
+ index = list(zip(*spots))
46
+ index = tuple(index)
47
+ spot_intensities_list = list(image[index])
48
+ if type(nucleus_label) != type(None) :
49
+ in_nuc_list = list(nucleus_label.astype(bool)[index[-2:]]) #Only plane coordinates
50
+ else :
51
+ in_nuc_list = np.NaN
52
+ if type(cell_label) != type(None) :
53
+ cell_label_list = list(cell_label[index[-2:]]) #Only plane coordinates
54
+ else :
55
+ cell_label_list = np.NaN
56
+ id_list = np.arange(len(spots))
57
+
58
+ coord_list = list(zip(*index))
59
+
60
+ Spots = pd.DataFrame({
61
+ 'acquisition_id' : [acquisition_id] * len(spots),
62
+ 'spot_id' : id_list,
63
+ 'intensity' : spot_intensities_list,
64
+ 'cell_label' : cell_label_list,
65
+ 'in_nucleus' : in_nuc_list,
66
+ 'coordinates' : coord_list,
67
+ })
68
+
69
+ return Spots
70
+
71
+
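To give a rough idea of what `compute_Spots` returns, a toy call could look like this (shapes and values invented; the import path follows the RECORD entry for `small_fish_gui/pipeline/spots.py`):

```python
import numpy as np
from small_fish_gui.pipeline.spots import compute_Spots

image = np.random.randint(0, 1000, (5, 64, 64))  # (z, y, x) signal
spots = np.array([[2, 10, 20], [3, 40, 8]])      # zyx spot coordinates
nucleus_label = np.zeros((64, 64), dtype=int)
nucleus_label[5:15, 15:25] = 1
cell_label = np.ones((64, 64), dtype=int)

Spots = compute_Spots(0, image, spots, nucleus_label, cell_label)
print(list(Spots.columns))
# ['acquisition_id', 'spot_id', 'intensity', 'cell_label', 'in_nucleus', 'coordinates']
```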
@@ -1,6 +1,6 @@
1
1
  Metadata-Version: 2.3
2
2
  Name: small_fish_gui
3
- Version: 1.2.0
3
+ Version: 1.3.0
4
4
  Summary: Small Fish is a python application for the analysis of smFish images. It provides a ready to use graphical interface to combine famous python packages for cell analysis without any need for coding.
5
5
  Project-URL: Homepage, https://github.com/2Echoes/small_fish
6
6
  Project-URL: Issues, https://github.com/2Echoes/small_fish/issues
@@ -1,9 +1,10 @@
1
1
  small_fish_gui/LICENSE,sha256=-iFy8VGBYs5VsHglKpk4D-hxqQ2jMJaqmfq_ulIzDks,1303
2
- small_fish_gui/README.md,sha256=LuvmYYwCVw7-rKhdhrtqxnxUfQxPUE-bbPlGNwzJh_4,1830
3
- small_fish_gui/__init__.py,sha256=X2D6chgv6h8daHaMPwY_iCAscI86KwOkCCez6vsShPw,1941
2
+ small_fish_gui/README.md,sha256=2c_homYDJXX6VsBiEs5obhBh3HpcTSMdyjLo-35WzE4,4062
3
+ small_fish_gui/Segmentation example.jpg,sha256=opfiSbjmfF6z8kBs08sg_FNR2Om0AcMPU5sSwSLHdoQ,215038
4
+ small_fish_gui/__init__.py,sha256=1eF6detE4uCLLHVMeTw-aRG6fjoUMdhrvnf-b48bVSk,1941
4
5
  small_fish_gui/__main__.py,sha256=EzSCoJ7jpSdK-QbzUwQLGZeQWjybNeq8VnCBucA8MZw,1372
6
+ small_fish_gui/napari_detection_example.png,sha256=l5EZlrbXemLiGqb5inSVsD6Kko1Opz528-go-fBfrw8,977350
5
7
  small_fish_gui/requirements.txt,sha256=9OMfUAnLdHevq6w_fVoDmVmkSMJeFofkOK_86_fu9C0,321
6
- small_fish_gui/start.py,sha256=HTbzsVcaMji1BZnWyjfn3bDGIQeXGG4zFzvCBepqFvA,108
7
8
  small_fish_gui/utils.py,sha256=tSoMb8N69WdKTtMItPb1DYZiIAz1mjI26BCKJAi6vuc,1798
8
9
  small_fish_gui/.github/workflows/python-publish.yml,sha256=5Ltnuhw9TevhzndlBmdUgYMnS73xEAxSyd1u8DHdn5s,1084
9
10
  small_fish_gui/gui/__init__.py,sha256=178HC3t2z4EnP0iBnMcaP_pyh5xHwOkEE6p3WJwBQeU,911
@@ -12,25 +13,26 @@ small_fish_gui/gui/general_help_screenshot.png,sha256=X4E6Td5f04K-pBUPDaBJRAE3D5
12
13
  small_fish_gui/gui/help_module.py,sha256=PmgkkDs7bZ2-po83A_PK9uldQcHjehYmqre21nYb6DQ,9600
13
14
  small_fish_gui/gui/layout.py,sha256=_ErOS2IUejeUuPLkDmPB3FzLkoHOWR-Iaxz-aUeETks,7695
14
15
  small_fish_gui/gui/mapping_help_screenshot.png,sha256=HcuRh5TYciUogUasza5vZ_QSshaiHsskQK23mh9vQS8,34735
15
- small_fish_gui/gui/prompts.py,sha256=74JPsGGtGMcHNdNQFjjHEN5XwdSd7hZKog_DTIcYaJ8,12389
16
+ small_fish_gui/gui/prompts.py,sha256=NAR7qjKwybiZZ2caO_lB8_CEttG8i4lHdt9lxjh6ESM,13160
16
17
  small_fish_gui/gui/segmentation_help_screenshot.png,sha256=rbSgIydT0gZtfMh1qk4mdMbEIyCaakvHmxa2eOrLwO0,118944
17
18
  small_fish_gui/gui/test.py,sha256=Pf-GW9AgW-0VL1mFbYtqRvPAaa8DgwCThv2dDUHCcmU,156
18
19
  small_fish_gui/interface/__init__.py,sha256=PB86R4Y9kV80aGZ-vP0ZW2KeaCwGbBbCtFCmbN2yl28,275
19
20
  small_fish_gui/interface/image.py,sha256=X1L7S5svxUwdoDcI3QM1PbN-c4Nz5w30hixq3IgqSn8,1130
20
- small_fish_gui/interface/output.py,sha256=wqXJHk-PzqZwYr8NHLg9jcEJlZQXZk8R76aeWcTxsEw,1337
21
+ small_fish_gui/interface/output.py,sha256=dyhpO1YrRCIbQYpqU_52E1DTNPf0wdktd--CB15iT3k,1712
21
22
  small_fish_gui/interface/parameters.py,sha256=lUugD-4W2TZyJF3TH1q70TlktEYhhPtcPCrvxm5Dk50,36
22
23
  small_fish_gui/interface/testing.py,sha256=MY5-GcPOUHagcrwR8A7QOjAmjZIDVC8Wz3NibLe3KQw,321
23
24
  small_fish_gui/pipeline/_colocalisation.py,sha256=peBw2Qz5m6wSejDkDz240UgvWl8ohNelrnmEgznbEsw,9635
24
25
  small_fish_gui/pipeline/_custom_errors.py,sha256=tQ-AUhgzIFpK30AZiQQrtHCHyGVRDdAoIjzL0Fk-1pA,43
25
- small_fish_gui/pipeline/_detection_visualisation.py,sha256=CNxCQpiCzC9Uk-2RqSuTp55Glf1URCL_s8zidwljY9Y,5774
26
- small_fish_gui/pipeline/_preprocess.py,sha256=RHbMeYG6GPYyPJzxksgCQ8bs2O3qSXU0V-z4NZWQhrA,10487
27
- small_fish_gui/pipeline/_segmentation.py,sha256=0f8M2Ujczm0tU5AVwxhfOyzRCi_P_gg1393S4RAvDFs,12968
26
+ small_fish_gui/pipeline/_napari_wrapper.py,sha256=WtqxxcM4l4NsEnqU-YDvm7-KJgXlsGur1HFUWUgvuao,9124
27
+ small_fish_gui/pipeline/_preprocess.py,sha256=szNoav19Xo3USmiUTjcFgkMn9QK53ZOydbLV5aMFLws,10676
28
+ small_fish_gui/pipeline/_segmentation.py,sha256=M2bQzgzw7Zt_DBeM3qvI0V4Pn0HFLwj0l8yV8M5aToo,12977
28
29
  small_fish_gui/pipeline/_signaltonoise.py,sha256=7A9t7xu7zghI6cr201Ldm-LjJ5NOuP56VSeJ8KIzcUo,8497
29
- small_fish_gui/pipeline/actions.py,sha256=eKKmT3SSDYKQz-zU8HKz9h0PPgqyYrj4qHbrw1hfpRQ,7118
30
- small_fish_gui/pipeline/detection.py,sha256=n-uuk2cP9Ls3WaZnuQfNHWyPoJWZNh8k9yW_8ZDC3fA,27484
30
+ small_fish_gui/pipeline/actions.py,sha256=EIGIOlwJ_DADX1NcLWwrTP_AidDX-4f4ggZV0gkIb58,7988
31
+ small_fish_gui/pipeline/detection.py,sha256=sZjcDeHujdmXHVifI_Ir0xudb30Y1cuJxI6YGtp4mRQ,31778
31
32
  small_fish_gui/pipeline/main.py,sha256=AAW-zK3b7Ece9cdHn9y6QG8lTa1HXG-8JtnvJ3m0HwA,3149
33
+ small_fish_gui/pipeline/spots.py,sha256=yHvqf1eD25UltELpzcouYXhLkxiXI_mOL1ANSzXK5pw,1907
32
34
  small_fish_gui/pipeline/test.py,sha256=w4ZMGDmUDXxVgWTlZ2TKw19W8q5gcE9gLMKe0SWnRrw,2827
33
- small_fish_gui-1.2.0.dist-info/METADATA,sha256=ayA13aIDIc9K64nQLNlxWcTNNM7tO17BaK3kEycGiQw,2567
34
- small_fish_gui-1.2.0.dist-info/WHEEL,sha256=zEMcRr9Kr03x1ozGwg5v9NQBKn3kndp6LSoSlVg-jhU,87
35
- small_fish_gui-1.2.0.dist-info/licenses/LICENSE,sha256=-iFy8VGBYs5VsHglKpk4D-hxqQ2jMJaqmfq_ulIzDks,1303
36
- small_fish_gui-1.2.0.dist-info/RECORD,,
35
+ small_fish_gui-1.3.0.dist-info/METADATA,sha256=i9buygANVdkEX6uNgJjUIiQU_0xxPTg3tsWFMfxZxw0,2567
36
+ small_fish_gui-1.3.0.dist-info/WHEEL,sha256=1yFddiXMmvYK7QYTqtRNtX66WJ0Mz8PYEiEUoOUUxRY,87
37
+ small_fish_gui-1.3.0.dist-info/licenses/LICENSE,sha256=-iFy8VGBYs5VsHglKpk4D-hxqQ2jMJaqmfq_ulIzDks,1303
38
+ small_fish_gui-1.3.0.dist-info/RECORD,,
@@ -1,4 +1,4 @@
1
1
  Wheel-Version: 1.0
2
- Generator: hatchling 1.24.2
2
+ Generator: hatchling 1.25.0
3
3
  Root-Is-Purelib: true
4
4
  Tag: py3-none-any
small_fish_gui/start.py DELETED
@@ -1,7 +0,0 @@
1
- import sys
2
-
3
- def main():
4
- import small_fish.pipeline.main
5
-
6
- if __name__ == "__main__":
7
- sys.exit(main())