caactus 0.1.5__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (35)
  1. caactus-0.1.5/.gitignore +1 -0
  2. caactus-0.1.5/LICENSE +21 -0
  3. caactus-0.1.5/PKG-INFO +372 -0
  4. caactus-0.1.5/README.md +342 -0
  5. caactus-0.1.5/caactus/__init__.py +1 -0
  6. caactus-0.1.5/caactus/__pycache__/__init__.cpython-310.pyc +0 -0
  7. caactus-0.1.5/caactus/__pycache__/background_processing.cpython-310.pyc +0 -0
  8. caactus-0.1.5/caactus/__pycache__/pln_modelling.cpython-310.pyc +0 -0
  9. caactus-0.1.5/caactus/__pycache__/summary_statistics.cpython-310.pyc +0 -0
  10. caactus-0.1.5/caactus/__pycache__/tif2h5py.cpython-310.pyc +0 -0
  11. caactus-0.1.5/caactus/background_processing.py +89 -0
  12. caactus-0.1.5/caactus/csv_summary.py +107 -0
  13. caactus-0.1.5/caactus/pln_modelling.py +138 -0
  14. caactus-0.1.5/caactus/renaming.py +93 -0
  15. caactus-0.1.5/caactus/summary_statistics.py +204 -0
  16. caactus-0.1.5/caactus/summary_statistics_eucast.py +245 -0
  17. caactus-0.1.5/caactus/tif2h5py.py +96 -0
  18. caactus-0.1.5/caactus.egg-info/PKG-INFO +372 -0
  19. caactus-0.1.5/caactus.egg-info/SOURCES.txt +33 -0
  20. caactus-0.1.5/caactus.egg-info/dependency_links.txt +1 -0
  21. caactus-0.1.5/caactus.egg-info/entry_points.txt +8 -0
  22. caactus-0.1.5/caactus.egg-info/requires.txt +11 -0
  23. caactus-0.1.5/caactus.egg-info/top_level.txt +1 -0
  24. caactus-0.1.5/config.toml +69 -0
  25. caactus-0.1.5/images/96_well_setup.png +0 -0
  26. caactus-0.1.5/images/caactus-workflow(1).png +0 -0
  27. caactus-0.1.5/images/export_multicut.JPG +0 -0
  28. caactus-0.1.5/images/export_objectclassification.JPG +0 -0
  29. caactus-0.1.5/images/export_probabilities.JPG +0 -0
  30. caactus-0.1.5/images/object_tableexport.JPG +0 -0
  31. caactus-0.1.5/images/pixel_classification_classes.JPG +0 -0
  32. caactus-0.1.5/images/watershed.png +0 -0
  33. caactus-0.1.5/pyproject.toml +80 -0
  34. caactus-0.1.5/setup.cfg +4 -0
  35. caactus-0.1.5/test/test.txt +1 -0
caactus-0.1.5/.gitignore ADDED
caactus.egg-info/
caactus-0.1.5/LICENSE ADDED
MIT License

Copyright (c) 2024 Jakob Scheler

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
caactus-0.1.5/PKG-INFO ADDED
Metadata-Version: 2.4
Name: caactus
Version: 0.1.5
Summary: Package for pre- and post-processing of images and data for working with ilastik-software
Author-email: Jakob Scheler <jakobscheler@gmail.com>
Maintainer-email: Jakob Scheler <jakobscheler@gmail.com>
License: MIT License
Project-URL: Repository, https://github.com/mr2raccoon/caactus
Project-URL: Documentation, https://github.com/mr2raccoon/caactus
Keywords: python,count,data,ilastik,image data,image processing,PLN,image analysis
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python
Requires-Python: >=3.10.12
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: imagecodecs
Requires-Dist: tifffile
Requires-Dist: h5py
Requires-Dist: numpy
Requires-Dist: pathlib
Requires-Dist: pandas
Requires-Dist: matplotlib
Requires-Dist: pyPLNmodels
Requires-Dist: tomli
Requires-Dist: argparse
Requires-Dist: seaborn
Dynamic: license-file
# caactus
caactus (**c**ell **a**nalysis **a**nd **c**ounting **t**ool **u**sing ilastik **s**oftware) is a collection of Python scripts that provides a streamlined workflow for the [ilastik software](https://www.ilastik.org/), covering data preparation, processing and analysis. It aims to give biologists an easy-to-use tool for counting and analysing cells in large numbers of microscopy images.

![workflow](https://github.com/mr2raccoon/caactus/blob/main/images/caactus-workflow(1).png)

# Introduction
The goal of this script collection is to provide an easy-to-use companion to the [boundary-based segmentation with Multicut workflow](https://www.ilastik.org/documentation/multicut/multicut) in [ilastik](https://www.ilastik.org/).
This workflow automates cell counting from messy microscopy images containing different (touching) cell types for biological research.
Commands are provided in `grey code boxes` for easy one-click copy & paste.
# Installation
## Install miniconda, create an environment and install Python and vigra
- [Download and install miniconda](https://www.anaconda.com/docs/getting-started/miniconda/install#windows-installation) for your operating system according to the instructions.
- Miniconda provides a lightweight package and environment manager. It lets you create isolated environments so that the Python version and package dependencies required by caactus do not interfere with your system Python or other projects.
- Once installed, create an environment for using `caactus` with the following command from your command line:
```bash
conda create -n caactus-env -c conda-forge "python>=3.10.12" vigra
```

## Install caactus
- Activate `caactus-env` from the command line with
```bash
conda activate caactus-env
```
- To install `caactus` and the needed dependencies inside your environment, use
```bash
pip install caactus
```
- Make sure `caactus-env` is activated whenever you call the caactus scripts in the steps described below.
## Install ilastik
- [Download and install ilastik](https://www.ilastik.org/download) for your operating system.
## Quick Overview of the Workflow
1. **Culture** the organism of interest in a 96-well plate.
2. **Acquire** images of the cells via microscopy.
3. **Create** the project directory.
4. **Rename** files with the caactus script `renaming`.
5. **Convert** files to HDF5 format with the caactus script `tif2h5py`.
6. Train a [pixel classification](https://www.ilastik.org/documentation/pixelclassification/pixelclassification) model in ilastik and later run it in batch mode.
7. Train a [boundary-based segmentation with Multicut](https://www.ilastik.org/documentation/multicut/multicut) model in ilastik and later run it in batch mode.
8. **Remove** the background from the images using `background_processing`.
9. Train an [object classification](https://www.ilastik.org/documentation/objects/objects) model in ilastik and later run it in batch mode.
10. **Pool** all csv tables from the individual images into one global table with `csv_summary`.
    - output generated:
      - "df_clean.csv"
11. **Summarize** the data with `summary_statistics`.
    - output generated:
      - a) "df_summary_complete.csv" = csv table that also contains the "not usable" category
      - b) "df_refined_complete.csv" = csv table without the "not usable" category
      - c) "counts.csv" = dataframe used in PLN modelling
      - d) bar graph ("barchart.png")
12. **Model** the count data with `pln_modelling`.
# Detailed Description of the Workflow
## 1. Culturing
- Culture your cells in a flat-bottom plate of your choice and according to the needs of the organism being researched.
## 2. Image Acquisition
- In your microscopy software, save the images of interest in `.tif` format.
- From the image metadata, copy the pixel size and magnification used.
## 3. Data Preparation
### 3.1 Create Project Directory

- For portability of the ilastik projects, create the directory with the following structure:\
  (Please note: the example below already includes examples of the resulting files in each sub-directory.)
- This allows you to copy an already trained workflow and use it multiple times with new datasets.

```
project_directory
├── 1_pixel_classification.ilp
├── 2_boundary_segmentation.ilp
├── 3_object_classification.ilp
├── renaming.csv
├── config.toml
├── 0_1_original_tif_training_images
│   ├── training-1.tif
│   ├── training-2.tif
│   └── ...
├── 0_2_original_tif_batch_images
│   ├── image-1.tif
│   ├── image-2.tif
│   └── ...
├── 0_3_batch_tif_renamed
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1.tif
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2.tif
│   └── ...
├── 1_images
│   ├── training-1.h5
│   ├── training-2.h5
│   └── ...
├── 2_probabilities
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1-data_Probabilities.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2-data_Probabilities.h5
│   └── ...
├── 3_multicut
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1-data_Multicut Segmentation.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2-data_Multicut Segmentation.h5
│   └── ...
├── 4_objectclassification
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1-data_Object Predictions.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1-data_table.csv
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2-data_Object Predictions.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2-data_table.csv
│   └── ...
├── 5_batch_images
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2.h5
│   └── ...
├── 6_batch_probabilities
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1-data_Probabilities.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2-data_Probabilities.h5
│   └── ...
├── 7_batch_multicut
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1-data_Multicut Segmentation.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2-data_Multicut Segmentation.h5
│   └── ...
├── 8_batch_objectclassification
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1-data_Object Predictions.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1-data_table.csv
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2-data_Object Predictions.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2-data_table.csv
│   └── ...
└── 9_data_analysis
```
### 3.2 Set up the config.toml File
- Copy config/config.toml to your working directory and modify it as needed.
- The caactus scripts pull the information they need at runtime from this file.
- CAVE: Windows users must change the forward slashes in `/path/to/config.toml` to backslashes (`\path\to\config.toml`) when copying the path to their working directory.
- Open the command line (for Windows: Anaconda Powershell) and save the path to your config file in a variable.
- whole command UNIX:
```bash
p="/path/to/config.toml"
```
- whole command Windows:
```bash
$p = "\path\to\config.toml"
```
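All caactus scripts parse this TOML file at startup. As a minimal sketch of what that parsing looks like — using the stdlib `tomllib` (Python ≥ 3.11) with a fallback to the `tomli` backport that caactus itself depends on; the keys shown below are invented for illustration and are not the actual config.toml schema:

```python
try:
    import tomllib  # stdlib on Python >= 3.11
except ModuleNotFoundError:
    import tomli as tomllib  # backport with the same API; a caactus dependency

# Hypothetical example -- the real config.toml defines its own keys.
example = """
[paths]
project_directory = "/data/my_experiment"

[imaging]
pixel_size_um = 0.65
magnification = 40
"""

cfg = tomllib.loads(example)
print(cfg["paths"]["project_directory"])  # /data/my_experiment
print(cfg["imaging"]["magnification"])    # 40
```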
## 4. Training
### 4.1 Selection of Training Images and Conversion
#### 4.1.1 Selection of Training Data
- Select a set of images that best represents the different experimental conditions.
- Store them in 0_1_original_tif_training_images.

#### 4.1.2 Conversion
- Call the `tif2h5py` script from the cmd prompt to convert all `.tif` files to `.h5` format.
The `.h5` format allows for better [performance when working with ilastik](https://www.ilastik.org/documentation/basics/performance_tips).
- pass "-c" with the path to config.toml
- pass "-m" and choose "training"
- whole command UNIX:
```bash
tif2h5py -c "$p" -m training
```
- whole command Windows:
```bash
tif2h5py.exe -c $p -m training
```
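A tif-to-HDF5 conversion of this kind can be sketched with `tifffile` and `h5py`, both of which are caactus dependencies. The helper name `tif_to_h5` and the dataset name `"data"` are illustrative assumptions, not the script's actual internals:

```python
from pathlib import Path

import h5py
import numpy as np
import tifffile


def tif_to_h5(tif_path: Path, h5_dir: Path) -> Path:
    """Read a .tif image and store it as a 'data' dataset in a .h5 file."""
    image = tifffile.imread(tif_path)
    h5_path = h5_dir / (tif_path.stem + ".h5")
    with h5py.File(h5_path, "w") as f:
        # gzip keeps files small; ilastik reads compressed HDF5 fine
        f.create_dataset("data", data=image, compression="gzip")
    return h5_path
```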
### 4.2 Pixel Classification
#### 4.2.1 Project Setup
- Follow the [documentation for pixel classification with ilastik](https://www.ilastik.org/documentation/pixelclassification/pixelclassification).
- Create the `1_pixel_classification.ilp` project file inside the project directory.
- For working with neighbouring / touching cells, it is suggested to create three classes: 0 = interior, 1 = background, 2 = boundary (this follows Python's 0-indexing logic, where counting starts at 0).

![pixel_classes](https://github.com/mr2raccoon/caactus/blob/main/images/pixel_classification_classes.JPG)

#### 4.2.2 Export Probabilities
In `Prediction Export`, change the settings to
- `Convert to Data Type: integer 8-bit`
- `Renormalize from 0.00 1.00 to 0 255`
- File:
```bash
{dataset_dir}/../2_probabilities/{nickname}_{result_type}.h5
```

![export_prob](https://github.com/mr2raccoon/caactus/blob/main/images/export_probabilities.JPG)
208
+ ### 4.3 Boundary-based Segmentation with Multicut
209
+ #### 4.3.1 Project setup
210
+ - Follow the the [documentation for boundary-based segmentation with Multicut](https://www.ilastik.org/documentation/multicut/multicut).
211
+ - Create the `2_boundary_segmentation.ilp`-project file inside the project directory.
212
+ - In `DT Watershed` use the input channel the corresponds to the order you used under project setup (in this case input channel = 2).
213
+
214
+ ![watershed](https://github.com/mr2raccoon/caactus/blob/main/images/watershed.png)
215
+
216
+
217
+ #### 4.3.2 Export Multicut Segmentation
218
+ In prediction export change the settings to
219
+ - `Convert to Data Type: integer 8-bit`
220
+ - `Renormalize from 0.00 1.00 to 0 255`
221
+ - Format: `compressed hdf5`
222
+ - File:
223
+ ```bash
224
+ {dataset_dir}/../3_multicut/{nickname}_{result_type}.h5
225
+
226
+ ![export_multicut](https://github.com/mr2raccoon/caactus/blob/main/images/export_multicut.JPG)
227
+
228
+
### 4.4 Background Processing
For further processing in the object classification, the background needs to be eliminated from the Multicut data sets. To do this, the next script sets the numerical value of the largest region to 0, so that it is shown as transparent in the next step of the workflow. This operation is performed in place on all `.*data_Multicut Segmentation.h5` files in `project_directory/3_multicut/`.
- call the `background_processing` script from the cmd prompt
- pass "-c" with the path to config.toml
- pass "-m training" for training mode
- whole command UNIX:
```bash
background_processing -c "$p" -m training
```
- whole command Windows:
```bash
background_processing.exe -c $p -m training
```
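The idea behind this step — find the largest labelled region and set its value to 0 — can be sketched in a few lines of `h5py` and `numpy`. The function name and the dataset name `"exported_data"` (ilastik's usual export dataset name) are assumptions, not the script's actual code:

```python
import h5py
import numpy as np


def zero_largest_region(h5_path: str, dataset: str = "exported_data") -> None:
    """Set the label value of the largest region in a segmentation to 0,
    in place, so that it renders as transparent background."""
    with h5py.File(h5_path, "r+") as f:
        labels = f[dataset][()]
        values, counts = np.unique(labels, return_counts=True)
        background = values[np.argmax(counts)]  # largest region = background
        labels[labels == background] = 0
        f[dataset][...] = labels  # write back into the same dataset
```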
### 4.5 Object Classification
#### 4.5.1 Project Setup
- Follow the [documentation for object classification](https://www.ilastik.org/documentation/objects/objects).
- Define your cell types plus an additional category for "not usable" objects, e.g. cell debris and cut-off objects at the edges of the images.
#### 4.5.2 Export Object Information
In `Choose Export Image Settings`, change the settings to
- `Convert to Data Type: integer 8-bit`
- `Renormalize from 0.00 1.00 to 0 255`
- Format: `compressed hdf5`
- File:
```bash
{dataset_dir}/../4_objectclassification/{nickname}_{result_type}.h5
```

![export_objectclassification](https://github.com/mr2raccoon/caactus/blob/main/images/export_objectclassification.JPG)

In `Configure Feature Table Export General`, change the settings to
- format `.csv` and output directory File:
```bash
{dataset_dir}/../4_objectclassification/{nickname}.csv
```
- select your features of interest for exporting

![object_tableexport](https://github.com/mr2raccoon/caactus/blob/main/images/object_tableexport.JPG)
## 5. Batch Processing
- Follow the [documentation for batch processing](https://www.ilastik.org/documentation/basics/batch)
- store the images you want to process in the 0_2_original_tif_batch_images directory
- Perform the training-workflow steps 4.2 to 4.5 in batch mode, as explained in detail below (sections 5.3 to 5.6)
### 5.1 Rename Files
- Rename the `.tif` files so that they contain information about your cells and experimental conditions.
- Create a csv file that contains the information you need in columns. Each row corresponds to one image; follow the same order as the sequence of image acquisition.
- The only hardcoded columns that must be present are `biorep` for "biological replicate" and `techrep` for "technical replicate". They are needed downstream for calculating the averages.
- The script renames your files in the format `columnA-value1_columnB-value2_columnC_etc.tif`, e.g. as seen in the example below, picture 1 (well A1 from our plate) will be named `strain-ATCC11559_date-20241707_timepoint-6h_biorep-A_techrep-1.tif`.
- Call the `renaming` script from the cmd prompt to rename all your original `.tif` files.
- whole command UNIX:
```bash
renaming -c "$p"
```
- whole command Windows:
```bash
renaming.exe -c $p
```
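The naming scheme above can be sketched as follows — a hypothetical helper that only builds the new names from the csv columns (the real `renaming` script also moves the files and takes its paths from config.toml):

```python
import pandas as pd


def build_new_names(csv_path) -> list:
    """Build one `colA-val1_colB-val2_... .tif` name per row of the csv,
    preserving the column order of the file."""
    table = pd.read_csv(csv_path)
    names = []
    for _, row in table.iterrows():
        parts = [f"{col}-{row[col]}" for col in table.columns]
        names.append("_".join(parts) + ".tif")
    return names
```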
### 5.2 Conversion
- Call the `tif2h5py` script from the cmd prompt to convert all `.tif` files to `.h5` format.
- pass "-m" and choose "batch"
- whole command UNIX:
```bash
tif2h5py -c "$p" -m batch
```
- whole command Windows:
```bash
tif2h5py.exe -c $p -m batch
```

![96-well-plate](https://github.com/mr2raccoon/caactus/blob/main/images/96_well_setup.png)
### 5.3 Batch Processing Pixel Classification
- open the `1_pixel_classification.ilp` project file
- under `Prediction Export`, change the export `File` to:
```bash
{dataset_dir}/../6_batch_probabilities/{nickname}_{result_type}.h5
```
- under `Batch Processing` > `Raw Data`, select all files from `5_batch_images`
### 5.4 Batch Processing Multicut Segmentation
- open the `2_boundary_segmentation.ilp` project file
- under `Choose Export Image Settings`, change the export `File` to:
```bash
{dataset_dir}/../7_batch_multicut/{nickname}_{result_type}.h5
```
- under `Batch Processing` > `Raw Data`, select all files from `5_batch_images`
- under `Batch Processing` > `Probabilities`, select all files from `6_batch_probabilities`
### 5.5 Background Processing
As in the training workflow, the background needs to be eliminated from the Multicut data sets before object classification: the script sets the numerical value of the largest region to 0 so that it is shown as transparent in the next step. In batch mode, this operation is performed in place on all `.*data_Multicut Segmentation.h5` files in `project_directory/7_batch_multicut/`.
- call the `background_processing` script from the cmd prompt
- pass "-m batch" for batch mode
- whole command UNIX:
```bash
background_processing -c "$p" -m batch
```
- whole command Windows:
```bash
background_processing.exe -c $p -m batch
```
### 5.6 Batch Processing Object Classification
- open the `3_object_classification.ilp` project file
- under `Choose Export Image Settings`, change the export `File` to:
```bash
{dataset_dir}/../8_batch_objectclassification/{nickname}_{result_type}.h5
```
- in `Configure Feature Table Export General`, choose format `.csv` and change the output directory to:
```bash
{dataset_dir}/../8_batch_objectclassification/{nickname}.csv
```
- select your features of interest for exporting
- under `Batch Processing` > `Raw Data`, select all files from `5_batch_images`
- under `Batch Processing` > `Segmentation Image`, select all files from `7_batch_multicut`
## 6. Post-Processing and Data Analysis
- Please be aware that the last two scripts, `summary_statistics` and `pln_modelling`, are at this stage written for the analysis and visualization of two independent variables.
### 6.1 Merging Data Tables and Table Export
The next script combines the tables from all images into one global table for further analysis. Additionally, the information stored in each file name is added as columns to the dataset.
- call the `csv_summary` script from the cmd prompt
- whole command UNIX:
```bash
csv_summary -c "$p"
```
- whole command Windows:
```bash
csv_summary.exe -c $p
```
- Technically, from this point on you can continue with whatever software / workflow is easiest for you for subsequent data analysis.
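The pooling step can be sketched as follows — a hypothetical `pool_tables` helper that concatenates the per-image tables and turns the `key-value` pairs encoded in each file name into columns; the real script's column handling may differ:

```python
from pathlib import Path

import pandas as pd


def parse_filename(stem: str) -> dict:
    """Split `key-value` pairs out of a renamed file stem."""
    meta = {}
    for part in stem.split("_"):
        if "-" in part:
            key, value = part.split("-", 1)
            meta[key] = value
    return meta


def pool_tables(table_dir) -> pd.DataFrame:
    """Concatenate all per-image object tables into one global table,
    adding the metadata from each file name as extra columns."""
    frames = []
    for csv_path in sorted(Path(table_dir).glob("*.csv")):
        df = pd.read_csv(csv_path)
        for key, value in parse_filename(csv_path.stem).items():
            df[key] = value
        frames.append(df)
    return pd.concat(frames, ignore_index=True)
```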
### 6.2 Creating Summary Statistics
- call the `summary_statistics` script from the cmd prompt
- whole command UNIX:
```bash
summary_statistics -c "$p"
```
- whole command Windows:
```bash
summary_statistics.exe -c $p
```
- if working with EUCAST antifungal susceptibility testing, call `summary_statistics_eucast` instead
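As an illustration of the kind of averaging described in step 5.1 (technical replicates averaged within each biological replicate), here is a hypothetical sketch; the column names `Predicted Class`, `strain`, `biorep`, and `techrep` are assumptions about the pooled table, not the script's actual schema:

```python
import pandas as pd


def summarize_counts(df: pd.DataFrame) -> pd.DataFrame:
    """Count objects per class for each image, then average the counts
    over technical replicates within each biological replicate."""
    counts = (
        df.groupby(["strain", "biorep", "techrep", "Predicted Class"])
        .size()
        .rename("count")
        .reset_index()
    )
    return (
        counts.groupby(["strain", "biorep", "Predicted Class"])["count"]
        .mean()
        .reset_index()
    )
```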
### 6.3 PLN Modelling
- call the `pln_modelling` script from the cmd prompt
- whole command UNIX:
```bash
pln_modelling -c "$p"
```
- whole command Windows:
```bash
pln_modelling.exe -c $p
```
- please note: the limit of categories for display in the PCA plot is n=15