ForenSight 0.1.0.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,341 @@
1
+ Metadata-Version: 2.4
2
+ Name: ForenSight
3
+ Version: 0.1.0
4
+ Summary: Asynchronous genomic comparison and visualization toolkit that runs on local devices (parsers, loaders, matchers, visualizers).
5
+ Author: Onur Kavrik
6
+ Maintainer: Biotronics Ai
7
+ License: MIT
8
+ Project-URL: Homepage, https://biotronics.ai
9
+ Requires-Python: >=3.11
10
+ Description-Content-Type: text/markdown
11
+ License-File: LICENSE
12
+ Requires-Dist: aiofiles==25.1.0
13
+ Requires-Dist: argcomplete==3.6.3
14
+ Requires-Dist: argh==0.31.3
15
+ Requires-Dist: biopython==1.86
16
+ Requires-Dist: gffutils==0.13
17
+ Requires-Dist: numpy==2.3.4
18
+ Requires-Dist: packaging==25.0
19
+ Requires-Dist: pillow==12.0.0
20
+ Requires-Dist: psutil==7.1.3
21
+ Requires-Dist: pyBigWig==0.3.24
22
+ Requires-Dist: pyfaidx==0.9.0.3
23
+ Requires-Dist: pysam==0.23.3
24
+ Requires-Dist: simplejson==3.20.2
25
+ Requires-Dist: bx-python
26
+ Provides-Extra: dev
27
+ Requires-Dist: pytest; extra == "dev"
28
+ Requires-Dist: pytest-asyncio; extra == "dev"
29
+ Requires-Dist: ruff; extra == "dev"
30
+ Dynamic: license-file
31
+
32
+ <p align="center">
33
+ <img src="images/Bio%20(Yeni%20LinkedIn%20Banner'%C4%B1).png" alt="Biotronics Ai Banner">
34
+ </p>
35
+ # *ForenSight*
38
+
39
+ An open-source toolkit for forensic genomic comparison and data storage, built by Biotronics AI. It streams large files, minimizes RAM usage with memory mapping, provides rich logging, and runs locally for forensic/genomic analysis.
40
+
41
+ **What it delivers in practice:**
42
+
43
+ - **Multi-DNA comparison within seconds:** stream batches, eliminate weak candidates fast, then score the top match with cosine similarity.
44
+ - **STR pattern detection in seconds across multiple DNA samples:** sliding-window STR search scans whole genomes without loading everything into memory.
45
+ - **HID/ABI/FSA to CSV in one pass:** extract peaks, traces, and metadata for downstream mixture/QC without proprietary viewers.
46
+ - **Multi-format handling:** normalizes sequence formats so mixed inputs are compatible and directly comparable.
47
+
48
+ ## Features
49
+
50
+ **Supported Genomic Data File Formats**:
51
+
52
+ * **Sequence:** FASTA/FASTQ/SEQ/SAM/BAM/CRAM files carrying sequence data.
53
+ * **Annotation:** VCF/BED/BigBed/BigWig/WIG/GFF/GTF files.
54
+ * **Electropherogram:** FSA/ABI/AB1/HID with async batch reading and N-stripping.
55
+
56
+ **Normalization**: Cross-format sequence normalization (`SequenceNormalizationManager`) for mixed sequence inputs with async batch reading and N-stripping.
57
+
58
+ **Core engines**:
59
+
60
+ - `SampleLoader` (batch/memmap streaming)
61
+ - `SequenceMatcher` (statistical elimination with Wilson intervals)
62
+ - `DoubleSampleComparator` (cosine-like similarity, chunked)
63
+ - `STRSearcher` (sliding-window similarity search)
64
+
65
+ **Visualizers → CSV**: HID/ABI/FSA emit multiple CSVs (main peaks, trace/basecall, excluded fields, APrX sidecars). Band visualizer for sequence/feature pairs.
66
+
67
+ **Memory-aware**: Streaming, per-batch processing, optional memmap buffers, explicit cleanup paths.
68
+
69
+ ## Install
70
+
71
+ ```bash
72
+ python -m venv .venv
73
+ source .venv/bin/activate
74
+ pip install forensight
75
+ ```
76
+
77
+ Dependencies are defined in `pyproject.toml` / `requirements.txt`.
78
+
79
+ ## Project layout
80
+
81
+ - `models.py` — parsers factory, loader, matchers, comparators, STR search.
82
+ - `parsers.py` — file-format parsers (async batch, N-stripping).
83
+ - `visualize.py` — CSV visualizers (HID/ABI/FSA), band visualizer.
84
+ - `test.py` — runnable scenarios (pipelines, visualizers, STR search).
85
+ - `base.py` — base components/logging helpers.
86
+ - `data_samples/` — sample data paths expected by `test.py` (adjust as needed).
87
+ - `mem_map/`, `logs/` — runtime outputs (ignored by git).
88
+
89
+ ## Quick start (scenarios)
90
+
91
+ Edit `test.py` and uncomment the scenarios you want, then:
92
+
93
+ ```bash
94
+ source .venv/bin/activate
95
+ python test.py
96
+ ```
97
+
98
+ Scenarios (toggle in `main()`):
99
+
100
+ - `scenario1..5` — sequence comparison pipelines (SampleLoader → SequenceMatcher/DoubleSampleComparator).
101
+ - `scenario_hid` — HID → CSVs.
102
+ - `scenario_abi` — ABI/AB1 → CSVs.
103
+ - `scenario_fsa` — FSA → CSVs.
104
+ - `scenario_str_searcher` — sliding-window STR search across FASTA files.
105
+
106
+ Outputs land in `logs/`:
107
+
108
+ - HID/ABI/FSA: `{name}.csv` (peaks), `{name}.trace.csv` (PLOC/DATA/PBAS), `{name}.excluded.csv` (other fields), optional `{name}.aprx.csv/.xml` (APrX1).
109
+ - Band visualizer: WEBP tiles (if used) and optional metadata CSV.
110
+
111
+ ## Core usage examples (and real-world analogues)
112
+
113
+ ### 1) Dual-sample comparison
114
+
115
+ Use when you have two specimens in the same format (e.g., two FASTA genomes or two FASTQ read sets) and want a similarity score. The snippets in this section use top-level `await` for brevity; run them inside an async function (for example, via `asyncio.run`).
116
+
117
+ ```python
118
+ from forensight import SampleLoader, DoubleSampleComparator
119
+
120
+ loader = SampleLoader()
121
+ batches = await loader.load_samples(
122
+ ["sample1.fa", "sample2.fa"],
123
+ batch_size=16384,
124
+ memmap_dir="mem_map",
125
+ )
126
+ s1, s2 = batches["sample1.fa"], batches["sample2.fa"]
127
+ comp = DoubleSampleComparator()
128
+ sim = await comp.compare(s1, s2, batch_size=16384)
129
+ print("cosine-like similarity:", sim)
130
+ ```
131
+
132
+ _Real life_: Basic “are these two references the same?” QC, or comparing two assemblies of the same chromosome.
133
+
134
+ ### 2) Multi-sample elimination to find the closest match (scenario2/3/4)
135
+
136
+ Use when you have one target and a pool of candidates (any supported sequence format). The matcher samples loci, eliminates weak candidates using Wilson intervals (sketched after this example), and returns the best candidate; you can then stream a full comparison against the winner. Keep `memmap_dir` set for all loaders.
137
+
138
+ ```python
139
+ from forensight import SampleLoader, SequenceMatcher, DoubleSampleComparator
140
+
141
+ loader = SampleLoader()
142
+ matcher = SequenceMatcher(target=None, pool=[])
143
+ best = await matcher.match_streaming_paths(
144
+ "target.fa",
145
+ ["cand1.fa", "cand2.fa", "cand3.fa"],
146
+ batch_size=16384,
147
+ loader=loader,
148
+ memmap_dir="mem_map",
149
+ )
150
+ winner = best["best_match"]
151
+ comp = DoubleSampleComparator()
152
+ sim = await comp.compare_stream("target.fa", winner, batch_size=16384)
153
+ print("best:", winner, "similarity:", sim)
154
+ ```
155
+
156
+ _Real life_: Pick the closest specimen in a large archive to a query genome/contig, without loading everything into RAM.
157
+
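+ For reference, here is a minimal sketch of the Wilson score interval named above. It is the textbook formula only; ForenSight's internal sampling strategy and thresholds are not shown.
+
+ ```python
+ # Textbook Wilson score interval for a match proportion (z = 1.96 for ~95% CI).
+ # Illustrative only: this is not ForenSight's internal implementation.
+ import math
+
+
+ def wilson_interval(hits: int, trials: int, z: float = 1.96) -> tuple[float, float]:
+     if trials == 0:
+         return 0.0, 1.0
+     p = hits / trials
+     denom = 1 + z**2 / trials
+     centre = (p + z**2 / (2 * trials)) / denom
+     half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
+     return centre - half, centre + half
+
+
+ # A candidate whose upper bound falls below the current leader's lower bound
+ # can be dropped early, without scanning the rest of its sequence.
+ print(wilson_interval(42, 100))
+ ```
+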
158
+ ### 2b) Mixed sequence formats handled seamlessly
159
+
160
+ When the target and candidates are different sequence file types (e.g., FASTA + FASTQ + SEQ), the `ParserFactory` and `SequenceNormalizationManager` normalize non-FASTA inputs to a common sequence form before comparison. Ordering is preserved batch by batch. Use memmaps consistently.
161
+
162
+ ```python
163
+ from forensight import SampleLoader, SequenceMatcher
164
+
165
+ files = ["target.fa", "reads.fastq", "sample.seq"]
166
+ matcher = SequenceMatcher(target=None, pool=[])
167
+ best = await matcher.match_streaming_paths(
168
+     files[0],
169
+     files[1:],
170
+     batch_size=16384,
171
+     loader=SampleLoader(),
172
+     memmap_dir="mem_map",
173
+ )
174
+ print("best match across mixed formats:", best)
175
+ ```
176
+
177
+ _Real life_: Compare a reference contig (FASTA) against sequencing reads (FASTQ) and a legacy SEQ file in one pass, without pre-conversion steps.
178
+
179
+ ### 3) STR sliding-window search (scenario_str_searcher)
180
+
181
+ Use when you need to find the most similar occurrence of a short STR pattern across multiple sequences.
182
+
183
+ ```python
184
+ from forensight import STRSearcher
185
+
186
+ searcher = STRSearcher()
187
+ result = await searcher.search("ATGCTAGCTA", ["genome1.fa", "genome2.fa"])
188
+ print(result) # file, position, substring, similarity
189
+ ```
190
+
191
+ _Real life_: Forensic STR probe search across multiple chromosomes/assemblies; finds best match even with minor mismatches.
192
+
193
+ ### 4) HID/ABI/FSA to CSV (and optional WEBP traces)
194
+
195
+ Use when you have capillary electrophoresis outputs and need structured CSVs of signals/metadata. You can toggle plotting of DATA9–12 traces with `visualize=True/False`.
196
+
197
+ ```python
198
+ from forensight import HIDVisualizer, ABIChromatogramVisualizer, FSAElectropherogramVisualizer
199
+
200
+ # HID: CSVs + WEBP
201
+ await HIDVisualizer().visualize("sample.hid", output_path="hid_output.csv", visualize=True)
202
+ # HID: CSVs only (skip WEBP)
203
+ # await HIDVisualizer().visualize("sample.hid", output_path="hid_output.csv", visualize=False)
204
+
205
+ # ABI: CSVs + WEBP (or set visualize=False to skip)
206
+ await ABIChromatogramVisualizer().visualize("sample.abi", output_path="abi_output.csv", visualize=True)
207
+
208
+ # FSA: CSVs + WEBP (or set visualize=False to skip)
209
+ await FSAElectropherogramVisualizer().visualize("sample.fsa", output_path="fsa_output.csv", visualize=True)
210
+
211
+ # Sidecars: .trace.csv, .excluded.csv, .aprx.csv/.xml (if present)
212
+ # WEBP legend: A=DATA9 (blue), C=DATA10 (green), G=DATA11 (yellow), T=DATA12 (magenta)
213
+ ```
214
+
215
+ _Real life_: Extract instrument settings, traces/basecalls, and metadata from CE runs for downstream mixture/trace analysis or QC.
216
+
217
+ ### 5) Trace visualization from HID/ABI/FSA traces
218
+
219
+ Use when you want a quick look at DATA9–12 traces without rerunning the extractor. Pass `visualize=False` to the main visualizers to skip auto-WEBP, then render later:
220
+
221
+ ```python
222
+ import csv
223
+ from forensight import DataBandVisualizer
224
+
225
+ with open("hid_output.trace.csv", newline="") as fh:
226
+     trace_rows = list(csv.reader(fh))
227
+
228
+ DataBandVisualizer().render_from_trace_rows(trace_rows, "hid_output.trace.webp")
229
+ # Legend: A=DATA9 (blue), C=DATA10 (green), G=DATA11 (yellow), T=DATA12 (magenta)
230
+ ```
231
+
232
+ _Real life_: Inspect electropherogram channel intensities quickly without heavy GUIs.
233
+
234
+ ### 6) Kernel matrix (memmap) with and without saved vectors/ids
235
+
236
+ Use when you want a reusable kernel over many samples. Always set `memmap_dir` and `logs_dir`.
237
+
238
+ ```python
239
+ from forensight import KernelMatrix, DNASample
240
+ import numpy as np
241
+
242
+ # Synthetic samples
243
+ samples = [
244
+     DNASample(f"sample_{i}", np.random.rand(1024).astype(np.float32), "synthetic")
245
+     for i in range(100)
246
+ ]
247
+
248
+ # Case A: no vector save, conditional off
249
+ km_a = KernelMatrix(
250
+     samples,
251
+     memmap_dir="mem_map",
252
+     logs_dir="logs",
253
+     conditional=False,
254
+     save_vectors_path=None,  # nothing persisted
255
+ )
256
+ best_a = km_a.best_match("sample_0")
257
+ km_a.cleanup()
258
+
259
+ # Case B: save vectors and ids, conditional on
260
+ km_b = KernelMatrix(
261
+     samples,
262
+     memmap_dir="mem_map",
263
+     logs_dir="logs",
264
+     conditional=True,  # optional, additional security layer before encryption
265
+     save_vectors_path="logs/kernel.npy",  # persists stacked vectors
266
+ )
267
+ # ids are written to logs/kernel_vectors_ids.txt
268
+ best_b = km_b.best_match("sample_0")
269
+ km_b.cleanup()
270
+ ```
271
+
272
+ ⚠️ **Hardware & data quality**: Kernel builds are memory-heavy (O(n²) for n samples). Ensure `memmap_dir` has disk space and your machine has enough RAM for the chosen sample count/length. Poor-quality or inconsistent sequences will degrade similarity results—verify inputs before kernelizing.
273
+
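+ As a rough sizing aid before building a kernel, a back-of-envelope check (assuming a dense float32 kernel of shape (n, n), consistent with the O(n²) note above):
+
+ ```python
+ # Back-of-envelope memory estimate for a dense float32 kernel matrix.
+ # Assumption: one 4-byte entry per sample pair; adjust for your dtype.
+ n = 10_000                               # number of samples
+ kernel_bytes = n * n * 4                 # float32 = 4 bytes per entry
+ print(f"~{kernel_bytes / 1e9:.1f} GB")   # ~0.4 GB for 10k samples
+ ```
+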
274
+ ### 7) Creating `DNASample` objects directly
275
+
276
+ `SampleLoader` usually creates `DNASample` objects for you (batch-by-batch, with N-stripping and optional memmaps). If you need to construct them manually (e.g., synthetic tests), use:
277
+
278
+ ```python
279
+ import numpy as np
280
+ from forensight import DNASample
281
+
282
+ vec = np.array([0, 1, 2, 3], dtype=np.int8) # your vectorized sequence
283
+ sample = DNASample(
284
+     sample_id="sample_0",
285
+     sequence=vec,
286
+     file_format="fq",  # e.g., 'fa', 'fq', etc.
287
+     metadata={"note": "example"},  # optional: headers/fields from parser
288
+     memmap_path=None,  # optional: path if sequence is a memmap
289
+ )
290
+ ```
291
+
292
+ Note: In normal use, `SampleLoader` handles parsing, normalization, vectorization, N-removal, metadata attachment, and optional memmap creation automatically. The above is for users who need to craft `DNASample` objects by hand for custom pipelines or tests.
293
+
294
+ ## Notes on visualizers
295
+
296
+ - HID/ABI/FSA readers expect ABIF traces (PLOC*/DATA*). If traces are missing, peak CSVs may be empty but metadata sidecars still export.
297
+ - ABI/FSA/HID visualizers partition outputs:
298
+ - **main metadata**: curated fields (run/sample/dye/trace pointers).
299
+ - **trace/basecall**: PLOC1/2, DATA1–12, PBAS1/2.
300
+ - **excluded**: everything else.
301
+ - **APrX1**: parsed parameters + raw XML when present.
302
+ - **optional WEBP**: DATA9-12 line plot (A=DATA9, C=DATA10, G=DATA11, T=DATA12). Toggle with `visualize=True/False`.
303
+
304
+ ## Memory & performance tips
305
+
306
+ - Prefer streaming APIs: `SampleLoader.stream_samples`, `compare_stream`, `match_streaming_paths`.
307
+ - Use `memmap_dir` to offload large batches.
308
+ - Keep `batch_size` aligned across components; adjust for RAM via `utils.calculate_effort` (helper).
309
+ - Clean up memmaps after use (see `cleanup_memmaps` in `test.py`; a minimal equivalent is sketched below).
310
+
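+ A minimal cleanup sketch in the spirit of `cleanup_memmaps` in `test.py` (it assumes per-run memmap buffers are ordinary files under `mem_map/`; adapt the path to your setup):
+
+ ```python
+ # Minimal memmap cleanup sketch; assumes buffers are plain files under mem_map/.
+ import shutil
+ from pathlib import Path
+
+
+ def cleanup_memmaps(memmap_dir: str = "mem_map") -> None:
+     """Remove leftover memmap buffers so repeated runs don't fill the disk."""
+     path = Path(memmap_dir)
+     if path.is_dir():
+         shutil.rmtree(path)        # drop the whole buffer directory...
+         path.mkdir(exist_ok=True)  # ...and recreate it empty for the next run
+ ```
+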
311
+ ## Testing
312
+
313
+ - Minimal scenarios live in `test.py`. Provide small sample files under `data_samples/`.
314
+ - Suggested additions: `pytest` + `pytest-asyncio` for unit coverage of parsers, matchers, visualizers; a minimal test sketch follows below.
315
+
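+ A minimal test sketch, assuming the API shown in example 1 (`SampleLoader.load_samples`, `DoubleSampleComparator.compare`); the FASTA content and the 0.99 threshold are illustrative, not normative:
+
+ ```python
+ # tests/test_compare.py -- minimal self-comparison check (illustrative sketch).
+ import pytest
+ from forensight import SampleLoader, DoubleSampleComparator
+
+
+ @pytest.mark.asyncio
+ async def test_self_comparison_scores_near_one(tmp_path):
+     # Write a tiny FASTA file so the test is fully self-contained.
+     fasta = tmp_path / "tiny.fa"
+     fasta.write_text(">seq1\nATGCTAGCTAGGATCCATGCATGCTAGCTA\n")
+     memmap_dir = tmp_path / "mem_map"
+     memmap_dir.mkdir()
+
+     loader = SampleLoader()
+     batches = await loader.load_samples(
+         [str(fasta)],
+         batch_size=1024,
+         memmap_dir=str(memmap_dir),
+     )
+     s = batches[str(fasta)]
+
+     # Identical inputs should score at (or very near) 1.0.
+     sim = await DoubleSampleComparator().compare(s, s, batch_size=1024)
+     assert sim >= 0.99
+ ```
+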
316
+ ## CI (suggested)
317
+
318
+ - Add a GitHub Actions workflow to run `python -m compileall`, `ruff` (optional), and `pytest`; the equivalent local commands are sketched below.
319
+
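+ The same checks can be run locally; a sketch, assuming a source checkout installed with the `dev` extra:
+
+ ```bash
+ # Local equivalent of the suggested CI steps (run from the repo root).
+ source .venv/bin/activate
+ pip install -e ".[dev]"      # pulls in pytest, pytest-asyncio, ruff
+ python -m compileall -q .    # byte-compile everything; fails on syntax errors
+ ruff check .                 # optional lint pass
+ pytest -q                    # unit tests / scenarios
+ ```
+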
320
+ ## Contributing
321
+
322
+ We value all kinds of contributions to the project. Guidelines and suggested areas:
323
+
324
+ - Standard PR/issue workflow.
325
+ - Keep new parsers async-friendly and streaming-capable.
326
+ - Preserve logging via `BaseComponent._log_state`.
327
+ - Development of ForenSight for other programming languages.
328
+ - Improve the system to overcome known limitations.
329
+ - Increase the number of supported file formats.
330
+ - Add proper progress-bar support to the models.
331
+
332
+ ## Known limitations
333
+
334
+ - HID/ABI/FSA peak CSVs rely on available traces; when absent, only metadata is emitted.
335
+ - Band visualizer image output is tiled to respect WEBP size limits; text labels are minimal by design.
336
+
337
+ <p align="center">
338
+ <a href="https://biotronics.ai">
339
+ <img src="images/Bio%20kopyas%C4%B1.png" alt="Biotronics Ai Logo" width="180">
340
+ </a>
341
+ </p>
@@ -0,0 +1,18 @@
1
+ LICENSE
2
+ MANIFEST.in
3
+ README.md
4
+ base.py
5
+ forensight.py
6
+ kernel.py
7
+ models.py
8
+ parsers.py
9
+ pyproject.toml
10
+ requirements.txt
11
+ visualize.py
12
+ ForenSight.egg-info/PKG-INFO
13
+ ForenSight.egg-info/SOURCES.txt
14
+ ForenSight.egg-info/dependency_links.txt
15
+ ForenSight.egg-info/requires.txt
16
+ ForenSight.egg-info/top_level.txt
17
+ images/Bio (Yeni LinkedIn Banner'ı).png
18
+ images/Bio kopyası.png
@@ -0,0 +1,19 @@
1
+ aiofiles==25.1.0
2
+ argcomplete==3.6.3
3
+ argh==0.31.3
4
+ biopython==1.86
5
+ gffutils==0.13
6
+ numpy==2.3.4
7
+ packaging==25.0
8
+ pillow==12.0.0
9
+ psutil==7.1.3
10
+ pyBigWig==0.3.24
11
+ pyfaidx==0.9.0.3
12
+ pysam==0.23.3
13
+ simplejson==3.20.2
14
+ bx-python
15
+
16
+ [dev]
17
+ pytest
18
+ pytest-asyncio
19
+ ruff
@@ -0,0 +1,6 @@
1
+ base
2
+ forensight
3
+ kernel
4
+ models
5
+ parsers
6
+ visualize
@@ -0,0 +1,21 @@
1
+ MIT License
2
+
3
+ Copyright (c) 2026 Biotronics Ai
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
@@ -0,0 +1,9 @@
1
+ include README.md
2
+ include requirements.txt
3
+ recursive-include images *
4
+
5
+ # Exclude build artifacts and envs
6
+ prune .venv
7
+ prune build
8
+ prune dist
9
+ prune __pycache__