spacr 0.0.70__py3-none-any.whl → 0.0.80__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: spacr
-Version: 0.0.70
+Version: 0.0.80
 Summary: Spatial phenotype analysis of crisp screens (SpaCr)
 Home-page: https://github.com/EinarOlafsson/spacr
 Author: Einar Birnir Olafsson
@@ -9,7 +9,7 @@ Classifier: Programming Language :: Python :: 3
 Classifier: License :: OSI Approved :: MIT License
 Classifier: Operating System :: OS Independent
 License-File: LICENSE
-Requires-Dist: dgl
+Requires-Dist: dgl ==0.9.1
 Requires-Dist: torch <3.0,>=2.2.1
 Requires-Dist: torchvision <1.0,>=0.17.1
 Requires-Dist: torch-geometric <3.0,>=2.5.1
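The tightened dgl requirement above is a standard PEP 508 exact pin. How a resolver reads such a specifier can be reproduced with the packaging library; a minimal sketch, using only that library's public API:

    from packaging.requirements import Requirement

    # Parse the pinned requirement exactly as it appears in METADATA.
    req = Requirement("dgl ==0.9.1")
    assert req.name == "dgl"
    assert req.specifier.contains("0.9.1")      # only this exact version satisfies the pin
    assert not req.specifier.contains("1.0.0")  # any other release is rejected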
@@ -46,6 +46,8 @@ Requires-Dist: opencv-python ; extra == 'full'
 Provides-Extra: headless
 Requires-Dist: opencv-python-headless ; extra == 'headless'
 
+.. |Documentation Status| image:: https://readthedocs.org/projects/spacr/badge/?version=latest
+   :target: https://spacr.readthedocs.io/en/latest/?badge=latest
 .. |PyPI version| image:: https://badge.fury.io/py/spacr.svg
    :target: https://badge.fury.io/py/spacr
 .. |Python version| image:: https://img.shields.io/pypi/pyversions/spacr
@@ -55,25 +57,25 @@ Requires-Dist: opencv-python-headless ; extra == 'headless'
 .. |repo size| image:: https://img.shields.io/github/repo-size/EinarOlafsson/spacr
    :target: https://github.com/EinarOlafsson/spacr/
 
-|PyPI version| |Python version| |Licence: GPL v3| |repo size|
+|Documentation Status| |PyPI version| |Python version| |Licence: GPL v3| |repo size|
 
 SpaCr
 =====
 
-Spatial phenotype analysis of CRISPR-Cas9 screens (SpaCr). The spatial organization of organelles and proteins within cells constitutes a key level of functional regulation. In the context of infectious disease, the spatial relationships between host cell structures and intracellular pathogens are critical to understand host clearance mechanisms and how pathogens evade them. Spacr is a Python-based software package for generating single cell image data for deep-learning sub-cellular/cellular phenotypic classification from pooled genetic CRISPR-Cas9 screens. Spacr provides a flexible toolset to extract single cell images and measurements from high content cell painting experiments, train deep-learning models to classify cellular/subcellular phenotypes, simulate and analyze pooled CRISPR-Cas9 imaging screens.
+Spatial phenotype analysis of CRISPR-Cas9 screens (SpaCr). The spatial organization of organelles and proteins within cells constitutes a key level of functional regulation. In the context of infectious disease, the spatial relationships between host cell structures and intracellular pathogens are critical to understand host clearance mechanisms and how pathogens evade them. SpaCr is a Python-based software package for generating single-cell image data for deep-learning sub-cellular/cellular phenotypic classification from pooled genetic CRISPR-Cas9 screens. SpaCr provides a flexible toolset to extract single-cell images and measurements from high-content cell painting experiments, train deep-learning models to classify cellular/subcellular phenotypes, simulate, and analyze pooled CRISPR-Cas9 imaging screens.
 
 Features
 --------
 
 - **Generate Masks:** Generate cellpose masks of cell, nuclei, and pathogen objects.
 
-- **Object Measurements:** Measurements for each object including scikit-image-regionprops, intensity percentiles, shannon-entropy, pearsons and manders correlations, homogeneity and radial distribution. Measurements are saved to a SQL database in object level tables.
+- **Object Measurements:** Measurements for each object including scikit-image-regionprops, intensity percentiles, shannon-entropy, pearsons and manders correlations, homogeneity, and radial distribution. Measurements are saved to a SQL database in object-level tables.
 
-- **Crop Images:** Objects (e.gcells) can be saved as PNGs from the object area or bounding box area of each object. Object paths are saved in a SQL database that can be annotated and used to train CNNs/Transformer models for classification tasks.
+- **Crop Images:** Objects (e.g., cells) can be saved as PNGs from the object area or bounding box area of each object. Object paths are saved in a SQL database that can be annotated and used to train CNNs/Transformer models for classification tasks.
 
 - **Train CNNs or Transformers:** Train Torch Convolutional Neural Networks (CNNs) or Transformers to classify single object images. Train Torch models with IRM/ERM, checkpointing.
 
-- **Manual Annotation:** Supports manual annotation of single cell images and segmentation to refine training datasets for training CNNs/Transformers or cellpose, respectively.
+- **Manual Annotation:** Supports manual annotation of single-cell images and segmentation to refine training datasets for training CNNs/Transformers or cellpose, respectively.
 
 - **Finetune Cellpose Models:** Adjust pre-existing Cellpose models to your specific dataset for improved performance.
 
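The Object Measurements feature described in the README hunk above maps naturally onto scikit-image and pandas. The following is a minimal sketch of that pattern, not spacr's actual API (the function and table names are illustrative): compute per-object regionprops from a labeled mask and a matching intensity image, add Shannon entropy, and append the rows to an object-level SQLite table:

    import sqlite3

    import pandas as pd
    from skimage import measure

    def measure_objects(label_image, intensity_image, db_path="measurements.db"):
        # Per-object shape and intensity features via scikit-image regionprops.
        props = measure.regionprops_table(
            label_image,
            intensity_image,
            properties=("label", "area", "eccentricity", "mean_intensity"),
        )
        df = pd.DataFrame(props)
        # Shannon entropy computed over each object's own pixels.
        df["shannon_entropy"] = [
            measure.shannon_entropy(intensity_image[label_image == lbl])
            for lbl in df["label"]
        ]
        # Append to an object-level table, mirroring the SQL layout the README describes.
        with sqlite3.connect(db_path) as conn:
            df.to_sql("object_measurements", conn, if_exists="append", index=False)
        return df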
@@ -93,7 +95,7 @@ Requires Tkinter for graphical user interface features.
 Ubuntu
 ~~~~~~
 
-Before installing spacr, ensure Tkinter is installed:
+Before installing SpaCr, ensure Tkinter is installed:
 
 (Tkinter is included with the standard Python installation on macOS, and Windows)
 
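The Ubuntu note above asks users to verify Tkinter before installing. A minimal preflight check in Python (the apt package name reflects standard Ubuntu packaging, not anything spacr-specific):

    import sys

    try:
        import tkinter
    except ImportError:
        sys.exit("Tkinter is missing; on Ubuntu install it with: sudo apt-get install python3-tk")

    tkinter.Tcl()  # also confirms the underlying Tcl/Tk runtime loads
    print("Tkinter OK:", tkinter.TkVersion)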
@@ -0,0 +1,36 @@
+spacr/__init__.py,sha256=nGEiMcMkCpf8kuq5X85HJ1LbDIr9xVS0yiW81libMIQ,1190
+spacr/__main__.py,sha256=bkAJJD2kjIqOP-u1kLvct9jQQCeUXzlEjdgitwi1Lm8,75
+spacr/alpha.py,sha256=Y95sLEfpK2OSYKRn3M8eUOU33JJeXfV8zhrC4KnwSTY,35244
+spacr/annotate_app.py,sha256=w7t7Zilu31FSIRDKtIPae8X4MZGez3cJugFM3rOmnlQ,20617
+spacr/chris.py,sha256=YlBjSgeZaY8HPy6jkrT_ISAnCMAKVfvCxF0I9eAZLFM,2418
+spacr/cli.py,sha256=507jfOOEV8BoL4eeUcblvH-iiDHdBrEVJLu1ghAAPSc,1800
+spacr/core.py,sha256=CHtBCYnx-oIU7f78X8QBMrVtHtaU0Dwu12zpYouUa7E,155454
+spacr/deep_spacr.py,sha256=ljIakns6q74an5QwDU7j0xoj6jRCAz-ejY0QHj9X0d8,33193
+spacr/foldseek.py,sha256=YIP1d4Ci6CeA9jSyiv-HTDbNmAmcSM9Y_DaOs7wYzLY,33546
+spacr/get_alfafold_structures.py,sha256=ehx_MQgb12k3hFecP6cYVlm5TLO8iWjgevy8ESyS3cw,3544
+spacr/graph_learning.py,sha256=M7KW1J72LA4hLfVNVBOqxf_4z9tXi-UyoZfhaLJXqSE,11986
+spacr/gui.py,sha256=zu-i8ezLJ03jNRACK7CRgNhkM8g8-pJFwZ-OSDFzsPg,6498
+spacr/gui_2.py,sha256=FPlmvGm1VIood_YBnG44IafgjjaVfagybTnjVEOs5Ig,3299
+spacr/gui_classify_app.py,sha256=LY33wott1mR7AFYwBI9ZQZYY16lBB-wuaY4pL_poaQ0,7884
+spacr/gui_mask_app.py,sha256=WKkAH0jv-SnfaZdJ8MkC7mkUIVSSrNE8lUfH3QBvUak,9747
+spacr/gui_measure_app.py,sha256=5vjjds5NFaOcE8XeuWDug9k-NI4jbTrwp54sJ7DNaNI,9625
+spacr/gui_sim_app.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+spacr/gui_utils.py,sha256=JRWwmGEEVSPgs0UtZRukdNwIUJepbP675_Fvs5qocPk,49718
+spacr/io.py,sha256=Ix0nzh-4n4f4mIayxDF6YVBAmP_mTckrueCJ81uCP7s,105040
+spacr/logger.py,sha256=7Zqr3TuuOQLWT32gYr2q1qvv7x0a2JhLANmZcnBXAW8,670
+spacr/mask_app.py,sha256=jlKmj_evveIkkyH3PYEcAshcLXN0DOPWB1oc4hAwq9E,44201
+spacr/measure.py,sha256=-pR43dO1MPiwIa7zACcWyNTBpHYDyiYFV_6sTo3qqRk,54975
+spacr/old_code.py,sha256=jw67DAGoLBd7mWofVzRJSEmCI1Qrff26zIo65SEkV00,13817
+spacr/plot.py,sha256=fnswxUXHwSLmxRpqSAmoUl5ln-_ueYPeYQlDmiYSwzQ,63299
+spacr/sequencing.py,sha256=TWQtylArdWZCYcjYrvfy7AAZdVprCMwXc1WMEavw10E,50987
+spacr/sim.py,sha256=FveaVgBi3eypO2oVB5Dx-v0CC1Ny7UPfXkJiiRRodAk,71212
+spacr/timelapse.py,sha256=5TNmkzR_urMxy0eVB4quGdjNj2QduyiwrLL2I-udlAg,39614
+spacr/utils.py,sha256=3cA3qUNf7l_VEeuhype2kI7B5IoYK0hb6Y31Q6Si3ds,184107
+spacr/version.py,sha256=axH5tnGwtgSnJHb5IDhiu4Zjk5GhLyAEDRe-rnaoFOA,409
+spacr/models/cp/toxo_pv_lumen.CP_model,sha256=2y_CindYhmTvVwBH39SNILF3rI3x9SsRn6qrMxHy3l0,26562451
+spacr-0.0.80.dist-info/LICENSE,sha256=SR-2MeGc6SCM1UORJYyarSWY_A-JaOMFDj7ReSs9tRM,1083
+spacr-0.0.80.dist-info/METADATA,sha256=ZvBlLVEEUqE0JiUTbokhpMPI33nHAzvY2Ahmg1WueLk,5121
+spacr-0.0.80.dist-info/WHEEL,sha256=GJ7t_kWBFywbagK5eo9IoUwLW6oyOeTKmQ-9iHFVNxQ,92
+spacr-0.0.80.dist-info/entry_points.txt,sha256=xncHsqD9MI5wj0_p4mgZlrB8dHm_g_qF0Ggo1c78LqY,315
+spacr-0.0.80.dist-info/top_level.txt,sha256=GJPU8FgwRXGzKeut6JopsSRY2R8T3i9lDgya42tLInY,6
+spacr-0.0.80.dist-info/RECORD,,
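Each line in the new RECORD file above follows the wheel spec: path, urlsafe-base64-encoded SHA-256 digest with trailing '=' padding stripped, and file size in bytes. A minimal sketch of how such an entry can be checked against a file on disk (the function name is illustrative):

    import base64
    import hashlib
    import os

    def verify_record_entry(path, expected_digest, expected_size):
        # Wheel RECORD digests are urlsafe base64 of the raw SHA-256,
        # with trailing '=' padding stripped (PEP 376 / PEP 427).
        with open(path, "rb") as fh:
            digest = hashlib.sha256(fh.read()).digest()
        encoded = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
        return encoded == expected_digest and os.path.getsize(path) == expected_size

    # Hypothetical usage against the first entry above:
    # verify_record_entry("spacr/__init__.py",
    #                     "nGEiMcMkCpf8kuq5X85HJ1LbDIr9xVS0yiW81libMIQ", 1190)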
@@ -1,84 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.utils.data import Dataset, DataLoader, TensorDataset
-
-# Let's assume that the feature embedding part and the dataset loading part
-# has already been taken care of, and your data is already in the format
-# suitable for PyTorch (i.e., Tensors).
-
-class FeatureEmbedder(nn.Module):
-    def __init__(self, vocab_sizes, embedding_size):
-        super(FeatureEmbedder, self).__init__()
-        self.embeddings = nn.ModuleDict({
-            key: nn.Embedding(num_embeddings=vocab_size+1,
-                              embedding_dim=embedding_size,
-                              padding_idx=vocab_size)
-            for key, vocab_size in vocab_sizes.items()
-        })
-        # Adding the 'visit' embedding
-        self.embeddings['visit'] = nn.Parameter(torch.zeros(1, embedding_size))
-
-    def forward(self, feature_map, max_num_codes):
-        # Implementation will depend on how you want to handle sparse data
-        # This is just a placeholder
-        embeddings = {}
-        masks = {}
-        for key, tensor in feature_map.items():
-            embeddings[key] = self.embeddings[key](tensor.long())
-            mask = torch.ones_like(tensor, dtype=torch.float32)
-            masks[key] = mask.unsqueeze(-1)
-
-        # Batch size hardcoded for simplicity in example
-        batch_size = 1  # Replace with actual batch size
-        embeddings['visit'] = self.embeddings['visit'].expand(batch_size, -1, -1)
-        masks['visit'] = torch.ones(batch_size, 1)
-
-        return embeddings, masks
-
-class GraphConvolutionalTransformer(nn.Module):
-    def __init__(self, embedding_size=128, num_attention_heads=1, **kwargs):
-        super(GraphConvolutionalTransformer, self).__init__()
-        # Transformer Blocks
-        self.layers = nn.ModuleList([
-            nn.TransformerEncoderLayer(
-                d_model=embedding_size,
-                nhead=num_attention_heads,
-                batch_first=True)
-            for _ in range(kwargs.get('num_transformer_stack', 3))
-        ])
-        # Output Layer for Classification
-        self.output_layer = nn.Linear(embedding_size, 1)
-
-    def feedforward(self, features, mask=None, training=None):
-        # Implement feedforward logic (placeholder)
-        pass
-
-    def forward(self, embeddings, masks, mask=None, training=False):
-        features = embeddings
-        attentions = []  # Storing attentions if needed
-
-        # Pass through each Transformer block
-        for layer in self.layers:
-            features = layer(features)  # Apply transformer encoding here
-
-        if mask is not None:
-            features = features * mask
-
-        logits = self.output_layer(features[:, 0, :])  # Using the 'visit' embedding for classification
-        return logits, attentions
-
-# Usage Example
-vocab_sizes = {'dx_ints':3249, 'proc_ints':2210}
-embedding_size = 128
-gct_params = {
-    'embedding_size': embedding_size,
-    'num_transformer_stack': 3,
-    'num_attention_heads': 1
-}
-feature_embedder = FeatureEmbedder(vocab_sizes, embedding_size)
-gct_model = GraphConvolutionalTransformer(**gct_params)
-
-# Assume `feature_map` is a dictionary of tensors, and `max_num_codes` is provided
-embeddings, masks = feature_embedder(feature_map, max_num_codes)
-logits, attentions = gct_model(embeddings, masks)
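One detail worth flagging in the removed module above: assigning an nn.Parameter into an nn.ModuleDict (as the FeatureEmbedder constructor does for the 'visit' embedding) raises a TypeError in PyTorch, because ModuleDict entries must be nn.Module instances. A minimal sketch of the conventional fix, registering the visit token as an ordinary module attribute instead:

    import torch
    import torch.nn as nn

    class FeatureEmbedder(nn.Module):
        def __init__(self, vocab_sizes, embedding_size):
            super().__init__()
            self.embeddings = nn.ModuleDict({
                key: nn.Embedding(vocab_size + 1, embedding_size, padding_idx=vocab_size)
                for key, vocab_size in vocab_sizes.items()
            })
            # Registered directly on the module; nn.ModuleDict rejects nn.Parameter values.
            self.visit_embedding = nn.Parameter(torch.zeros(1, 1, embedding_size))

        def forward(self, feature_map):
            embeddings = {key: self.embeddings[key](ids.long())
                          for key, ids in feature_map.items()}
            batch_size = next(iter(feature_map.values())).shape[0]
            # Broadcast the shared visit token across the batch.
            embeddings["visit"] = self.visit_embedding.expand(batch_size, -1, -1)
            return embeddings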