debase-0.1.2.tar.gz → debase-0.1.4.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (30)
  1. {debase-0.1.2 → debase-0.1.4}/PKG-INFO +57 -1
  2. {debase-0.1.2 → debase-0.1.4}/README.md +56 -0
  3. {debase-0.1.2 → debase-0.1.4}/src/debase/_version.py +1 -1
  4. {debase-0.1.2 → debase-0.1.4}/src/debase/enzyme_lineage_extractor.py +43 -6
  5. {debase-0.1.2 → debase-0.1.4}/src/debase/reaction_info_extractor.py +14 -1
  6. {debase-0.1.2 → debase-0.1.4}/src/debase.egg-info/PKG-INFO +57 -1
  7. {debase-0.1.2 → debase-0.1.4}/.gitignore +0 -0
  8. {debase-0.1.2 → debase-0.1.4}/CONTRIBUTING.md +0 -0
  9. {debase-0.1.2 → debase-0.1.4}/LICENSE +0 -0
  10. {debase-0.1.2 → debase-0.1.4}/MANIFEST.in +0 -0
  11. {debase-0.1.2 → debase-0.1.4}/docs/README.md +0 -0
  12. {debase-0.1.2 → debase-0.1.4}/docs/examples/README.md +0 -0
  13. {debase-0.1.2 → debase-0.1.4}/environment.yml +0 -0
  14. {debase-0.1.2 → debase-0.1.4}/pyproject.toml +0 -0
  15. {debase-0.1.2 → debase-0.1.4}/setup.cfg +0 -0
  16. {debase-0.1.2 → debase-0.1.4}/setup.py +0 -0
  17. {debase-0.1.2 → debase-0.1.4}/src/__init__.py +0 -0
  18. {debase-0.1.2 → debase-0.1.4}/src/debase/PIPELINE_FLOW.md +0 -0
  19. {debase-0.1.2 → debase-0.1.4}/src/debase/__init__.py +0 -0
  20. {debase-0.1.2 → debase-0.1.4}/src/debase/__main__.py +0 -0
  21. {debase-0.1.2 → debase-0.1.4}/src/debase/build_db.py +0 -0
  22. {debase-0.1.2 → debase-0.1.4}/src/debase/cleanup_sequence.py +0 -0
  23. {debase-0.1.2 → debase-0.1.4}/src/debase/lineage_format.py +0 -0
  24. {debase-0.1.2 → debase-0.1.4}/src/debase/substrate_scope_extractor.py +0 -0
  25. {debase-0.1.2 → debase-0.1.4}/src/debase/wrapper.py +0 -0
  26. {debase-0.1.2 → debase-0.1.4}/src/debase.egg-info/SOURCES.txt +0 -0
  27. {debase-0.1.2 → debase-0.1.4}/src/debase.egg-info/dependency_links.txt +0 -0
  28. {debase-0.1.2 → debase-0.1.4}/src/debase.egg-info/entry_points.txt +0 -0
  29. {debase-0.1.2 → debase-0.1.4}/src/debase.egg-info/requires.txt +0 -0
  30. {debase-0.1.2 → debase-0.1.4}/src/debase.egg-info/top_level.txt +0 -0
{debase-0.1.2 → debase-0.1.4}/PKG-INFO
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: debase
- Version: 0.1.2
+ Version: 0.1.4
  Summary: Enzyme lineage analysis and sequence extraction package
  Home-page: https://github.com/YuemingLong/DEBase
  Author: DEBase Team
@@ -61,14 +61,70 @@ Enzyme lineage analysis and sequence extraction package with advanced parallel p

  ## Installation

+ ### Quick Install (PyPI)
  ```bash
  pip install debase
  ```
+
+ ### Development Setup with Conda (Recommended)
+
+ 1. **Clone the repository**
+ ```bash
+ git clone https://github.com/YuemingLong/DEBase.git
+ cd DEBase
+ ```
+
+ 2. **Create conda environment from provided file**
+ ```bash
+ conda env create -f environment.yml
+ conda activate debase
+ ```
+
+ 3. **Install DEBase in development mode**
+ ```bash
+ pip install -e .
+ ```
+
+ ### Manual Setup
+
+ If you prefer to set up the environment manually:
+
+ ```bash
+ # Create new conda environment
+ conda create -n debase python=3.9
+ conda activate debase
+
+ # Install conda packages
+ conda install -c conda-forge pandas numpy matplotlib seaborn jupyter jupyterlab openpyxl biopython requests tqdm
+
+ # Install RDKit (optional - used for SMILES canonicalization)
+ conda install -c conda-forge rdkit
+
+ # Install pip-only packages
+ pip install PyMuPDF google-generativeai debase
+ ```
+
+ **Note about RDKit**: RDKit is optional and only used for canonicalizing SMILES strings in the output. If not installed, DEBase will still function normally but SMILES strings won't be standardized.
+
  ## Requirements

  - Python 3.8 or higher
  - A Gemini API key (set as environment variable `GEMINI_API_KEY`)

+ ### Setting up Gemini API Key
+
+ ```bash
+ # Option 1: Export in your shell
+ export GEMINI_API_KEY="your-api-key-here"
+
+ # Option 2: Add to ~/.bashrc or ~/.zshrc for persistence
+ echo 'export GEMINI_API_KEY="your-api-key-here"' >> ~/.bashrc
+ source ~/.bashrc
+
+ # Option 3: Create .env file in project directory
+ echo 'GEMINI_API_KEY=your-api-key-here' > .env
+ ```
+
  ## Recent Updates

  - **Campaign-Aware Extraction**: Automatically detects and processes multiple directed evolution campaigns in a single paper
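The RDKit note above describes a soft dependency. A minimal sketch of the usual optional-import pattern, assuming a helper named `canonicalize_smiles` (illustrative only, not necessarily DEBase's actual code):

```python
# Sketch of an optional-RDKit soft dependency (illustrative; helper name is hypothetical).
try:
    from rdkit import Chem
    HAS_RDKIT = True
except ImportError:
    HAS_RDKIT = False

def canonicalize_smiles(smiles):
    """Return a canonical SMILES if RDKit is installed, otherwise the input unchanged."""
    if not HAS_RDKIT:
        return smiles
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else smiles
```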
{debase-0.1.2 → debase-0.1.4}/README.md
@@ -4,14 +4,70 @@ Enzyme lineage analysis and sequence extraction package with advanced parallel p

  ## Installation

+ ### Quick Install (PyPI)
  ```bash
  pip install debase
  ```
+
+ ### Development Setup with Conda (Recommended)
+
+ 1. **Clone the repository**
+ ```bash
+ git clone https://github.com/YuemingLong/DEBase.git
+ cd DEBase
+ ```
+
+ 2. **Create conda environment from provided file**
+ ```bash
+ conda env create -f environment.yml
+ conda activate debase
+ ```
+
+ 3. **Install DEBase in development mode**
+ ```bash
+ pip install -e .
+ ```
+
+ ### Manual Setup
+
+ If you prefer to set up the environment manually:
+
+ ```bash
+ # Create new conda environment
+ conda create -n debase python=3.9
+ conda activate debase
+
+ # Install conda packages
+ conda install -c conda-forge pandas numpy matplotlib seaborn jupyter jupyterlab openpyxl biopython requests tqdm
+
+ # Install RDKit (optional - used for SMILES canonicalization)
+ conda install -c conda-forge rdkit
+
+ # Install pip-only packages
+ pip install PyMuPDF google-generativeai debase
+ ```
+
+ **Note about RDKit**: RDKit is optional and only used for canonicalizing SMILES strings in the output. If not installed, DEBase will still function normally but SMILES strings won't be standardized.
+
  ## Requirements

  - Python 3.8 or higher
  - A Gemini API key (set as environment variable `GEMINI_API_KEY`)

+ ### Setting up Gemini API Key
+
+ ```bash
+ # Option 1: Export in your shell
+ export GEMINI_API_KEY="your-api-key-here"
+
+ # Option 2: Add to ~/.bashrc or ~/.zshrc for persistence
+ echo 'export GEMINI_API_KEY="your-api-key-here"' >> ~/.bashrc
+ source ~/.bashrc
+
+ # Option 3: Create .env file in project directory
+ echo 'GEMINI_API_KEY=your-api-key-here' > .env
+ ```
+
  ## Recent Updates

  - **Campaign-Aware Extraction**: Automatically detects and processes multiple directed evolution campaigns in a single paper
{debase-0.1.2 → debase-0.1.4}/src/debase/_version.py
@@ -1,3 +1,3 @@
  """Version information."""

- __version__ = "0.1.2"
+ __version__ = "0.1.4"
{debase-0.1.2 → debase-0.1.4}/src/debase/enzyme_lineage_extractor.py
@@ -800,15 +800,36 @@ def identify_evolution_locations(
  _dump(f"=== CAMPAIGN MAPPING PROMPT ===\nLocation: {location_str}\n{'='*80}\n\n{mapping_prompt}", mapping_file)

  response = model.generate_content(mapping_prompt)
- campaign_id = _extract_text(response).strip().strip('"')
+ response_text = _extract_text(response).strip()
+
+ # Extract just the campaign_id from the response
+ # Look for the campaign_id pattern in the response
+ campaign_id = None
+ for campaign in campaigns:
+     if hasattr(campaign, 'campaign_id') and campaign.campaign_id in response_text:
+         campaign_id = campaign.campaign_id
+         break
+
+ # If not found, try to extract the last line or quoted string
+ if not campaign_id:
+     # Try to find quoted string
+     quoted_match = re.search(r'"([^"]+)"', response_text)
+     if quoted_match:
+         campaign_id = quoted_match.group(1)
+     else:
+         # Take the last non-empty line
+         lines = [line.strip() for line in response_text.split('\n') if line.strip()]
+         if lines:
+             campaign_id = lines[-1].strip('"')

  # Save mapping response to debug if provided
  if debug_dir:
      response_file = debug_path / f"campaign_mapping_response_{location_str.replace(' ', '_')}_{int(time.time())}.txt"
-     _dump(f"=== CAMPAIGN MAPPING RESPONSE ===\nLocation: {location_str}\nMapped to: {campaign_id}\n{'='*80}\n\n{_extract_text(response)}", response_file)
+     _dump(f"=== CAMPAIGN MAPPING RESPONSE ===\nLocation: {location_str}\nFull response:\n{response_text}\nExtracted campaign_id: {campaign_id}\n{'='*80}", response_file)

  # Add campaign_id to location
- loc['campaign_id'] = campaign_id
+ if campaign_id:
+     loc['campaign_id'] = campaign_id
  log.info(f"Mapped {location_str} to campaign: {campaign_id}")
  except Exception as exc:
      log.warning(f"Failed to map location to campaign: {exc}")
@@ -1297,6 +1318,8 @@ _SEQUENCE_SCHEMA_HINT = """
  _SEQ_LOC_PROMPT = """
  Find where FULL-LENGTH protein or DNA sequences are located in this document.

+ PRIORITY: Protein/amino acid sequences are preferred over DNA sequences.
+
  Look for table of contents entries or section listings that mention sequences.
  Return a JSON array where each element has:
  - "section": the section heading or description
@@ -1305,6 +1328,7 @@ Return a JSON array where each element has:
  Focus on:
  - Table of contents or entries about "Sequence Information" or "Nucleotide and amino acid sequences"
  - Return the EXACT notation as shown.
+ - Prioritize sections that mention "protein" or "amino acid" sequences

  Return [] if no sequence sections are found.
  Absolutely don't include nucleotides or primer sequences, it is better to return nothing then incomplete sequence, use your best judgement.
@@ -1465,10 +1489,16 @@ def validate_sequence_locations(text: str, locations: list, model, *, pdf_paths:
  # --- 7.3 Main extraction prompt ---------------------------------------------
  _SEQ_EXTRACTION_PROMPT = """
  Extract EVERY distinct enzyme-variant sequence you can find in the text.
+
+ IMPORTANT: Prioritize amino acid (protein) sequences over DNA sequences:
+ - If an amino acid sequence exists for a variant, extract ONLY the aa_seq (set dna_seq to null)
+ - Only extract dna_seq if NO amino acid sequence is available for that variant
+ - This reduces redundancy since protein sequences are usually more relevant
+
  For each variant return:
  * variant_id - the label used in the paper (e.g. "R4-10")
  * aa_seq - amino-acid sequence (uppercase), or null
- * dna_seq - DNA sequence (A/C/G/T), or null
+ * dna_seq - DNA sequence (A/C/G/T), or null (ONLY if no aa_seq exists)

  Respond ONLY with **minified JSON** that matches the schema below.
  NO markdown, no code fences, no commentary.
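To make the new priority rule concrete, a hypothetical reply that satisfies it (the variant label and sequence are invented; the actual schema lives in `_SEQUENCE_SCHEMA_HINT`, which this diff does not touch):

```python
import json

# Hypothetical minified-JSON reply: an amino-acid sequence exists, so dna_seq is null.
reply = '[{"variant_id":"R4-10","aa_seq":"MTSENLYFQGAMGSHHHHHH","dna_seq":null}]'
variants = json.loads(reply)
assert variants[0]["dna_seq"] is None  # DNA omitted because aa_seq is present
```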
@@ -2029,8 +2059,15 @@ def run_pipeline(
  sequences = get_sequences(full_text, model, pdf_paths=pdf_paths, debug_dir=debug_dir)

  # 4a. Try PDB extraction if no sequences found -----------------------------
- if not sequences or all(s.aa_seq is None for s in sequences):
-     log.info("No sequences found in paper, attempting PDB extraction...")
+ # Check if we need PDB sequences (no sequences or only partial sequences)
+ MIN_PROTEIN_LENGTH = 50  # Most proteins are >50 AA
+ needs_pdb = (not sequences or
+              all(s.aa_seq is None or (s.aa_seq and len(s.aa_seq) < MIN_PROTEIN_LENGTH)
+                  for s in sequences))
+
+ if needs_pdb:
+     log.info("No full-length sequences found in paper (only partial sequences < %d AA), attempting PDB extraction...",
+              MIN_PROTEIN_LENGTH)

  # Extract PDB IDs from all PDFs
  pdb_ids = []
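The reworked trigger now also fires when only short fragments were extracted. A toy check of the same condition; the `Variant` dataclass here is a stand-in for the pipeline's real sequence records:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Variant:           # stand-in for the pipeline's sequence record type
    variant_id: str
    aa_seq: Optional[str]

MIN_PROTEIN_LENGTH = 50  # same threshold as in the hunk above

def needs_pdb(sequences):
    """True if nothing was extracted, or every aa_seq is missing or shorter than the threshold."""
    return (not sequences or
            all(s.aa_seq is None or len(s.aa_seq) < MIN_PROTEIN_LENGTH for s in sequences))

print(needs_pdb([]))                              # True  - nothing extracted
print(needs_pdb([Variant("R4-10", "MKV" * 10)]))  # True  - only a 30-residue fragment
print(needs_pdb([Variant("R4-10", "M" * 120)]))   # False - full-length sequence present
```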
{debase-0.1.2 → debase-0.1.4}/src/debase/reaction_info_extractor.py
@@ -1055,7 +1055,20 @@ Different campaigns may use different model reactions.
  """Extract text around a given location identifier."""
  location_lower = location.lower()

- # Search in all pages
+ # Handle compound locations like "Figure 2 caption and Section I"
+ # Extract the first figure/table/scheme reference
+ figure_match = re.search(r"(figure|scheme|table)\s*\d+", location_lower)
+ if figure_match:
+     primary_location = figure_match.group(0)
+     # Try to find this primary location first
+     for page_text in self.all_pages:
+         if primary_location in page_text.lower():
+             idx = page_text.lower().index(primary_location)
+             start = max(0, idx - 500)
+             end = min(len(page_text), idx + 3000)
+             return page_text[start:end]
+
+ # Search in all pages for exact match
  for page_text in self.all_pages:
      if location_lower in page_text.lower():
          # Find the location and extract context around it
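The added regex reduces a compound location string to its first figure/scheme/table token before searching pages. A quick illustration with invented location strings:

```python
import re

def primary_figure_ref(location):
    """Return the first 'figure/scheme/table N' token from a compound location, or None."""
    match = re.search(r"(figure|scheme|table)\s*\d+", location.lower())
    return match.group(0) if match else None

print(primary_figure_ref("Figure 2 caption and Section I"))       # figure 2
print(primary_figure_ref("Supplementary Scheme 1 and Table 4"))   # scheme 1
print(primary_figure_ref("Section II, general procedure"))        # None
```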
{debase-0.1.2 → debase-0.1.4}/src/debase.egg-info/PKG-INFO
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: debase
- Version: 0.1.2
+ Version: 0.1.4
  Summary: Enzyme lineage analysis and sequence extraction package
  Home-page: https://github.com/YuemingLong/DEBase
  Author: DEBase Team
@@ -61,14 +61,70 @@ Enzyme lineage analysis and sequence extraction package with advanced parallel p

  ## Installation

+ ### Quick Install (PyPI)
  ```bash
  pip install debase
  ```
+
+ ### Development Setup with Conda (Recommended)
+
+ 1. **Clone the repository**
+ ```bash
+ git clone https://github.com/YuemingLong/DEBase.git
+ cd DEBase
+ ```
+
+ 2. **Create conda environment from provided file**
+ ```bash
+ conda env create -f environment.yml
+ conda activate debase
+ ```
+
+ 3. **Install DEBase in development mode**
+ ```bash
+ pip install -e .
+ ```
+
+ ### Manual Setup
+
+ If you prefer to set up the environment manually:
+
+ ```bash
+ # Create new conda environment
+ conda create -n debase python=3.9
+ conda activate debase
+
+ # Install conda packages
+ conda install -c conda-forge pandas numpy matplotlib seaborn jupyter jupyterlab openpyxl biopython requests tqdm
+
+ # Install RDKit (optional - used for SMILES canonicalization)
+ conda install -c conda-forge rdkit
+
+ # Install pip-only packages
+ pip install PyMuPDF google-generativeai debase
+ ```
+
+ **Note about RDKit**: RDKit is optional and only used for canonicalizing SMILES strings in the output. If not installed, DEBase will still function normally but SMILES strings won't be standardized.
+
  ## Requirements

  - Python 3.8 or higher
  - A Gemini API key (set as environment variable `GEMINI_API_KEY`)

+ ### Setting up Gemini API Key
+
+ ```bash
+ # Option 1: Export in your shell
+ export GEMINI_API_KEY="your-api-key-here"
+
+ # Option 2: Add to ~/.bashrc or ~/.zshrc for persistence
+ echo 'export GEMINI_API_KEY="your-api-key-here"' >> ~/.bashrc
+ source ~/.bashrc
+
+ # Option 3: Create .env file in project directory
+ echo 'GEMINI_API_KEY=your-api-key-here' > .env
+ ```
+
  ## Recent Updates

  - **Campaign-Aware Extraction**: Automatically detects and processes multiple directed evolution campaigns in a single paper