NGSpeciesID 0.3.1__py2.py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,350 @@
1
+ Metadata-Version: 2.4
2
+ Name: NGSpeciesID
3
+ Version: 0.3.1
4
+ Summary: Reconstructs viral consensus sequences from a set of ONT reads.
5
+ Home-page: https://github.com/ksahlin/NGSpeciesID
6
+ Author: Kristoffer Sahlin
7
+ Author-email: ksahlin@math.su.se
8
+ Keywords: viral sequences ONT Oxford Nanopore Technologies long reads
9
+ Classifier: Development Status :: 3 - Alpha
10
+ Classifier: Programming Language :: Python :: 3
11
+ Classifier: Programming Language :: Python :: 3.4
12
+ Classifier: Programming Language :: Python :: 3.5
13
+ Classifier: Programming Language :: Python :: 3.6
14
+ Classifier: Programming Language :: Python :: 3.7
15
+ Requires-Python: !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4
16
+ License-File: LICENSE.txt
17
+ Requires-Dist: parasail==1.2.4
18
+ Requires-Dist: edlib>=1.1.2
19
+ Dynamic: author
20
+ Dynamic: author-email
21
+ Dynamic: classifier
22
+ Dynamic: description
23
+ Dynamic: home-page
24
+ Dynamic: keywords
25
+ Dynamic: license-file
26
+ Dynamic: requires-dist
27
+ Dynamic: requires-python
28
+ Dynamic: summary
29
+
30
+ NGSpeciesID
31
+ ===========
32
+
33
+ NGSpeciesID is a tool for clustering and consensus forming of long-read amplicon sequencing data (it has been used with both PacBio and Oxford Nanopore data). The repository is a modified version of [isONclust](https://github.com/ksahlin/isONclust), to which consensus, primer-removal, and polishing features have been added.
34
+
35
+ NGSpeciesID is distributed as a Python package supported on Linux / OSX with Python v3.6. [![Build Status](https://travis-ci.org/ksahlin/NGSpeciesID.svg?branch=master)](https://travis-ci.org/ksahlin/NGSpeciesID)
36
+
37
+ Table of Contents
38
+ =================
39
+
40
+ * [INSTALLATION](#installation)
41
+ * [Using conda](#using-conda)
42
+ * [Testing installation](#testing-installation)
43
+ * [USAGE](#usage)
44
+ * [Filtering and subsampling](#filtering-and-subsampling)
45
+ * [Removing primers](#removing-primers)
46
+ * [Output](#output)
47
+ * [EXAMPLE WORKFLOW](#example-workflow)
48
+ * [CREDITS](#credits)
49
+ * [LICENCE](#licence)
50
+
51
+
52
+
53
+ INSTALLATION
54
+ ----------------
55
+
56
+ <!---
57
+ **NOTE**: If you are experiencing issues (e.g. [this one](https://github.com/rvaser/spoa/issues/26)) with the third party tools [spoa](https://github.com/rvaser/spoa) or [medaka](https://github.com/nanoporetech/medaka) in the installation instructions below, please install the tools manually with their respective installation instructions [here](https://github.com/rvaser/spoa#installation) and [here](https://github.com/nanoporetech/medaka#installation).
58
+ -->
59
+
60
+ ### Using conda
61
+
62
+ **Recent update (2025-04-19)**
63
+
64
+ There have been many version updates of medaka and spoa since NGSpeciesID was first published. Below are instructions to install
65
+ NGSpeciesID with newer versions of spoa ([v4.1.4](https://bioconda.github.io/recipes/spoa/README.html)) and medaka (v2.0.1).
66
+
67
+ ```
68
+ conda create -n NGSpeciesID python=3.11 pip
69
+ conda activate NGSpeciesID
70
+ conda install --yes -c conda-forge -c bioconda medaka==2.0.1 openblas==0.3.3 spoa racon minimap2 samtools
71
+ pip install NGSpeciesID
72
+ ```
73
+
74
+ Make sure you [test the installation](#testing-installation).
75
+
76
+ **Published installation instructions (2021-01-11)**
77
+
78
+ Conda is the preferred way to install NGSpeciesID.
79
+
80
+ 1. Create and activate a new environment called NGSpeciesID
81
+
82
+ ```
83
+ conda create -n NGSpeciesID python=3.6 pip
84
+ conda activate NGSpeciesID
85
+ ```
86
+
87
+ 2. Install NGSpeciesID
88
+
89
+ ```
90
+ conda install --yes -c conda-forge -c bioconda medaka==0.11.5 openblas==0.3.3 spoa racon minimap2
91
+ pip install NGSpeciesID
92
+ ```
93
+ 3. You should now have 'NGSpeciesID' installed; try it:
94
+ ```
95
+ NGSpeciesID --help
96
+ ```
97
+
98
+ Each time you start/log in to your server/computer, you need to activate the conda environment "NGSpeciesID" before running NGSpeciesID:
99
+ ```
100
+ conda activate NGSpeciesID
101
+ ```
102
+
103
+
104
+
105
+ ### Testing installation
106
+
107
+ 0. Activate conda environment
108
+ ```
109
+ conda activate NGSpeciesID
110
+ ```
111
+
112
+ 1. Make a new directory and navigate to it
113
+ ```
114
+ mkdir test_ngspeciesID
115
+ cd test_ngspeciesID
116
+ ```
117
+
118
+ 2. Download the test fastq file called "sample_h1.fastq" (filesize 390kb)
119
+
120
+ ```
121
+ curl -LO https://raw.githubusercontent.com/ksahlin/NGSpeciesID/master/test/sample_h1.fastq
122
+ ```
123
+
124
+ 3. Run the NGSpeciesID command on the test file. Outputs will be saved in "/test_ngspeciesID/sample_h1/", where the final polished consensus file ("consensus.fasta") is located in the "/test_ngspeciesID/sample_h1/medaka_cl_id_<cluster number>" directory.
125
+
126
+ ```
127
+ NGSpeciesID --ont --fastq sample_h1.fastq --outfolder ./sample_h1 --consensus --medaka
128
+ ```
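+ 
+ To verify that a polished consensus was produced, you can list the fasta headers of the output (a quick check; the cluster number in the folder name depends on the run):
+ 
+ ```
+ grep ">" sample_h1/medaka_cl_id_*/consensus.fasta
+ ```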
129
+
130
+
131
+ USAGE
132
+ -------
133
+
134
+ NGSpeciesID needs a fastq file generated by an Oxford Nanopore basecaller.
135
+
136
+ ```
137
+ NGSpeciesID --ont --consensus --medaka --fastq [reads.fastq] --outfolder [/path/to/output]
138
+ ```
139
+ The argument `--ont` is simply shorthand for `--k 13 --w 20`. These arguments can also be set manually without the `--ont` flag. Specify the number of cores with `--t`.
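+ 
+ For example, the command above could equivalently be written with the minimizer parameters set explicitly and a thread count added (the value of 8 is only an illustration):
+ 
+ ```
+ NGSpeciesID --k 13 --w 20 --t 8 --consensus --medaka --fastq [reads.fastq] --outfolder [/path/to/output]
+ ```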
140
+
141
+
142
+ NGSpeciesID can also run with racon as the polisher. For example,
143
+
144
+ ```
145
+ NGSpeciesID --ont --consensus --racon --racon_iter 3 --fastq [reads.fastq] --outfolder [/path/to/output]
146
+ ```
147
+ will polish the consensus sequences with racon three times.
148
+
149
+ ### Filtering and subsampling
150
+
151
+ NGSpeciesID employs quality filtering of the reads based on read Phred scores. However, we recommend also removing reads that are much shorter or longer than the intended target, as these often represent chimeras or contaminations. This can be done by specifying `--m` (intended target length) and `--s` (maximum deviation from target length). NGSpeciesID can also subsample reads with the parameter `--sample_size`. Altogether, if we want to filter out reads outside the length interval [700, 800] and use a subset of 300 reads (if the dataset contains more reads), we could run
152
+
153
+ ```
154
+ NGSpeciesID --ont --sample_size 300 --m 750 --s 50 --consensus --medaka --fastq [reads.fastq] --outfolder [/path/to/output]
155
+ ```
156
+
157
+ Length filtering and subsampling are not performed unless these parameters are specified.
158
+
159
+ ### Removing primers
160
+
161
+ If custom primers are expected in the reads, they can be detected and removed. The primer file is expected to be in fasta format. Here is an example of a primer file:
162
+
163
+ ```
164
+ >MCB869_ONT_R
165
+ CGATCAATCCCCTAACAAACTAGG
166
+ >MCB398_ONT_F
167
+ TACCATGAGGACAAATATCATTCTG
168
+ ```
169
+ NGSpeciesID searches for primers in a window (an adjustable parameter, default 150 bp) at the beginning and end of each consensus.
170
+
171
+
172
+ Trimming of primers is performed after consensus forming and can be invoked as
173
+ ```
174
+ NGSpeciesID --ont --consensus --medaka --fastq [reads.fastq] --outfolder [/path/to/output] --primer_file [primers.fa]
175
+ ```
176
+
177
+ `NGSpeciesID` can also remove universal tails. Trimming of tails is performed after consensus forming and can be invoked as
178
+
179
+ ```
180
+ NGSpeciesID --ont --consensus --medaka --fastq [reads.fastq] --outfolder [/path/to/output] --remove_universal_tails
181
+ ```
182
+
183
+ The two options are mutually exclusive, i.e., only one of them can be run.
184
+
185
+ ### Output
186
+
187
+ The output consists of the polished consensus sequences along with some information about clustering.
188
+
189
+ * Polished consensus sequence(s). A folder named "medaka_cl_id_X" (or "racon_cl_id_X") is created for each predicted consensus. Each such folder contains a file "consensus.fasta", which is the final output of NGSpeciesID.
190
+ * Draft spoa consensus sequences of each of the clusters are given as consensus_reference_X.fasta (where X is a number).
191
+ * The final cluster information is given in a tsv file `final_clusters.tsv` present in the specified output folder.
192
+
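+ For illustration, a minimal sketch of what the output folder might contain after a run with `--medaka` (X is the cluster ID; exact names, cluster numbers, and any intermediate files depend on the run):
+ 
+ ```
+ [/path/to/output]/
+ ├── final_clusters.tsv
+ ├── consensus_reference_X.fasta
+ └── medaka_cl_id_X/
+     └── consensus.fasta
+ ```
+ 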
193
+
194
+ In the cluster TSV file, the first column is the cluster ID and the second column is the read accession. For example:
195
+
196
+ ```
197
+ 0 read_X_acc
198
+ 0 read_Y_acc
199
+ ...
200
+ n read_Z_acc
201
+ ```
202
+ If there are n reads, there will be n rows. Some reads might be singletons. The rows are ordered by cluster size (largest first).
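+ 
+ To get a quick overview of cluster sizes from this file, standard Unix tools are enough (a sketch, assuming the file is tab-separated as the .tsv extension suggests, with the cluster ID in the first column):
+ 
+ ```
+ cut -f1 final_clusters.tsv | sort -n | uniq -c | sort -k1,1nr | head
+ ```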
203
+
204
+
205
+ EXAMPLE WORKFLOW
206
+ -----------------
207
+
208
+ The bioinformatics workflow below was developed as part of a step-by-step protocol for field-deployable DNA amplicon sequencing with the Oxford Nanopore Technologies MinION. The full protocol manuscript is in submission; a link will be posted here when available. The steps below correspond to step numbers in the protocol.
209
+
210
+ #### P2 | Generate custom indexes for uniquely identifying samples using [`barcode_generator`](https://github.com/lcomai/barcode_generator). This software uses Python3.
211
+
212
+ ```
213
+ python3 barcode_generator_3.4.py none 24 40 8
214
+ ```
215
+
216
+ Here, the parameters are set as:
217
+ - table_excluded_barcodes = 'none'
218
+ - index length = 24 base pairs
219
+ - number of barcodes to generate = 40
220
+ - hamming distance = 8
221
+
222
+ After lab steps are complete:
223
+
224
+ #### B1 | Basecalling and quality check (optional) with [Guppy](https://community.nanoporetech.com/downloads)
225
+
226
+ These commands use the fast basecalling model from Guppy.
227
+
228
+ Basecalling for R9.4 flow cell:
229
+
230
+ ```
231
+ guppy_basecaller --input_path minKNOW_input/ --save_path basecalled_fastqs/ -c dna_r9.4.1_450bps_fast.cfg --recursive --disable_pings
232
+ ```
233
+
234
+ Basecalling and filtering reads by quality score (here, set to 7):
235
+
236
+ ```
237
+ guppy_basecaller --input_path minKNOW_input/ --save_path basecalled_fastqs/ -c dna_r9.4.1_450bps_fast.cfg --recursive --disable_pings --min_qscore 7
238
+ ```
239
+
240
+ Basecalling for R10.3 flow cell:
241
+
242
+ ```
243
+ guppy_basecaller --input_path minKNOW_input/ --save_path basecalled_fastqs/ -c dna_r10.3_450bps_fast.cfg --recursive --disable_pings
244
+ ```
245
+
246
+ #### B2 | Go to the folder with the fastq files generated by Guppy
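+ 
+ For example, using the `--save_path` from step B1 (depending on your Guppy version and whether `--min_qscore` was used, reads may be written to a `pass/` subfolder):
+ 
+ ```
+ cd basecalled_fastqs/        # or basecalled_fastqs/pass/ if Guppy split reads by quality
+ ```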
247
+
248
+ #### B3 | Concatenate all the read files into one large file
249
+
250
+ ```
251
+ cat *.fastq > sequencing_reads.fastq
252
+ ```
253
+
254
+ #### B4 | Check raw read quality/stats with [NanoPlot](https://github.com/wdecoster/NanoPlot)
255
+
256
+ ```
257
+ NanoPlot --fastq_rich sequencing_reads.fastq -o sequencing_run -p sequencing_run
258
+ ```
259
+
260
+ #### B5 | Demultiplexing of the sequencing data with [minibar](https://github.com/calacademy-research/minibar) or Guppy
261
+
262
+ Example files can be found in:
263
+ - [Supplementary Data 1](./test/Supplementary_File1_reads.fastq): 3,000 reads in fastq format from three fish species - Atlantic cod (*Gadus morhua*), Haddock (*Melanogrammus aeglefinus*), and Whiting (*Merlangius merlangus*) - sequenced on a Flongle flow cell.
264
+ - [Supplementary Data 2](./test/Supplementary_File2_minibar.txt): index file used for demultiplexing with minibar
265
+
266
+ In the commands below, the example file Supplementary Data 1 can be used as `sequencing_reads.fastq` and Supplementary Data 2 as `indexes.txt`.
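+ 
+ If you have downloaded the two supplementary files, one way to match the file names used in the commands below is simply to copy them (a minimal sketch):
+ 
+ ```
+ cp Supplementary_File1_reads.fastq sequencing_reads.fastq
+ cp Supplementary_File2_minibar.txt indexes.txt
+ ```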
267
+
268
+ #### B5a | minibar (using example files):
269
+
270
+ ```
271
+ python minibar.py indexes.txt sequencing_reads.fastq -T -F -e 3 -E 11
272
+ ```
273
+
274
+ Here, the edit distance allowed between indexes (`-e`) is set to 3 base pairs and the edit distance allowed between primer sequences (`-E`) is set to 11 base pairs.
275
+
276
+ #### B5b | Guppy:
277
+
278
+ ```
279
+ guppy_barcoder -i sequencing_reads.fastq -s demultiplex_folder --trim_barcodes --disable_pings
280
+ ```
281
+
282
+ #### B6 | Read filtering, clustering, consensus generation and polishing with NGSpeciesID
283
+
284
+ For a single sample (using example primer file):
285
+
286
+ ```
287
+ NGSpeciesID --ont --consensus --sample_size 500 --m 800 --s 100 --medaka --primer_file primers.txt --fastq barcode0.fastq --outfolder barcode0_consensus
288
+ ```
289
+
290
+ Here, the parameters are set as:
291
+ - the data is from ONT MinION (`--ont`)
292
+ - we want to generate consensus sequences (`--consensus`)
293
+ - subsample of reads (`--sample_size`) = 500 reads subsampled per sample to analyze
294
+ - intended target length (`--m`) = 800 base pairs
295
+ - maximum deviation from target length (`--s`) = 100 base pairs
296
+ - use [Medaka](https://github.com/nanoporetech/medaka) to polish the final consensus sequences (`--medaka`)
297
+ - if a `--primer_file` is given, NGSpeciesID will check for and remove any remaining primer sequences. The example primer file is available in [Supplementary Data 3](./test/Supplementary_File3_primer.txt). The primers were developed in Mikkelsen, P.M., Bieler, R., Kappner, I., & Rawlings, T.A. (2006). Phylogeny of Veneroidea (Mollusca: Bivalvia) based on morphology and molecules. *Zoological Journal of the Linnean Society*, 148(3), 439-521.
298
+ - the input file of demultiplexed reads is specified by `--fastq` (output from step B5)
299
+ - the output consensus files will be saved to `--outfolder`
300
+
301
+ To run this step on **more than one sample**, use a bash script with a for loop:
302
+
303
+ ```
304
+ for file in *.fastq; do
305
+ bn=$(basename "$file" .fastq)
306
+ NGSpeciesID --ont --consensus --sample_size 500 --m 800 --s 100 --medaka --primer_file primers.txt --fastq "$file" --outfolder "${bn}"
307
+ done
308
+ ```
309
+
310
+ This loop uses the wildcard `*` to indicate you want to analyze all files with the `.fastq` extension and assumes the command is run from the directory that contains the read files (if not, be sure to change the file path: `path/to/*.fastq`).
311
+
312
+ This loop code can be entered at a UNIX/Mac terminal (be sure the spacing/indentation is correct) or saved as a script (see [`consensus.sh`](./test/consensus.sh)). The script should be run from the terminal, in the directory that contains the read files, as:
313
+
314
+ ```
315
+ ./consensus.sh
316
+ ```
317
+
318
+ #### B7 | Compare consensus sequences to reference database with [BLAST](https://blast.ncbi.nlm.nih.gov/Blast.cgi?PAGE_TYPE=BlastDocs&DOC_TYPE=Download)
319
+
320
+ - Create/format database for BLAST search:
321
+
322
+ ```
323
+ makeblastdb -in database.fasta -dbtype nucl -out database
324
+ ```
325
+
326
+ - Conduct BLAST search:
327
+
328
+ ```
329
+ blastn -db database -query barcode0_consensus.fasta -outfmt 6 -out barcode0_consensus_blast.out
330
+ ```
331
+
332
+ Check the results and refine the search or database as needed to determine the identity of your samples!
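+ 
+ For instance, a quick way to list the strongest hits is to sort the tabular output by bit score, which is column 12 in the default `-outfmt 6` layout (a sketch):
+ 
+ ```
+ sort -k12,12nr barcode0_consensus_blast.out | head
+ ```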
333
+
334
+
335
+
336
+ CREDITS
337
+ ----------------
338
+
339
+ Please cite [1] when using NGSpeciesID.
340
+
341
+ 1. Sahlin, K, Lim, MCW, Prost, S. NGSpeciesID: DNA barcode and amplicon consensus generation from long-read sequencing data. Ecol Evol. 2021; 00: 1–7. https://doi.org/10.1002/ece3.7146
342
+
343
+
344
+
345
+ LICENCE
346
+ ----------------
347
+
348
+ GPL v3.0, see [LICENSE.txt](https://github.com/ksahlin/NGSpeciesID/blob/master/LICENSE.txt).
349
+
350
+
@@ -0,0 +1,14 @@
1
+ modules/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
2
+ modules/barcode_trimmer.py,sha256=3ckTUAOwbxAuzKCmLixZ7Xpg-E6gy8giGSqEq7cH6D4,4592
3
+ modules/cluster.py,sha256=8ASPRDPUm0k8dAz4qgZcQQRNeI4a8VWP2Isl78CEhrQ,18914
4
+ modules/consensus.py,sha256=0Qm2xkaQ0kFqZ6aULg614ORPIj2tA7xBmJ25zEap4Og,13249
5
+ modules/get_sorted_fastq_for_cluster.py,sha256=6B-imvR8d612NJUa-pelFg4xKkDVlP9FfYlipclJKIo,8722
6
+ modules/help_functions.py,sha256=JaHme_a6tVXYWEGsyZ4Cea6Es6rY58JEzS-7N3NKcWo,3067
7
+ modules/p_minimizers_shared.py,sha256=6vfxx3fB21wkldLAdbbS6XOycKxwp-XiJR_7RVlmedI,1790771
8
+ modules/parallelize.py,sha256=n1OgjBrp18LirbteglZye_6pwO3K8phLh_7Q_t1QaPg,9186
9
+ ngspeciesid-0.3.1.data/scripts/NGSpeciesID,sha256=PkZTHL881lkIitNqEiaxSdH80nr44HXul0m04tPPYXQ,17342
10
+ ngspeciesid-0.3.1.dist-info/licenses/LICENSE.txt,sha256=xTplwv1WHIfqq_EHLvXcq4ZTBCvBUwhGX1JBNYXrYnE,35146
11
+ ngspeciesid-0.3.1.dist-info/METADATA,sha256=s7rtYnG4HNcHr13PlKavkP8Rpd1WwshHt3EtVtEktyg,13615
12
+ ngspeciesid-0.3.1.dist-info/WHEEL,sha256=Td9E1opt19FSuwsk_gcDwtsGPmyXw7uz9xQf-y2gvl8,109
13
+ ngspeciesid-0.3.1.dist-info/top_level.txt,sha256=F7U4jdIxH3MVMmyX9rbajWuG0bj5tcZN4L1DZSxW72E,8
14
+ ngspeciesid-0.3.1.dist-info/RECORD,,
@@ -0,0 +1,6 @@
1
+ Wheel-Version: 1.0
2
+ Generator: setuptools (79.0.0)
3
+ Root-Is-Purelib: true
4
+ Tag: py2-none-any
5
+ Tag: py3-none-any
6
+