corp-extractor 0.2.8__tar.gz → 0.4.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: corp-extractor
- Version: 0.2.8
+ Version: 0.4.0
  Summary: Extract structured statements from text using T5-Gemma 2 and Diverse Beam Search
  Project-URL: Homepage, https://github.com/corp-o-rate/statement-extractor
  Project-URL: Documentation, https://github.com/corp-o-rate/statement-extractor#readme
@@ -24,19 +24,17 @@ Classifier: Topic :: Scientific/Engineering :: Information Analysis
  Classifier: Topic :: Text Processing :: Linguistic
  Requires-Python: >=3.10
  Requires-Dist: click>=8.0.0
+ Requires-Dist: gliner2
  Requires-Dist: numpy>=1.24.0
  Requires-Dist: pydantic>=2.0.0
+ Requires-Dist: sentence-transformers>=2.2.0
  Requires-Dist: torch>=2.0.0
  Requires-Dist: transformers>=5.0.0rc3
- Provides-Extra: all
- Requires-Dist: sentence-transformers>=2.2.0; extra == 'all'
  Provides-Extra: dev
  Requires-Dist: mypy>=1.0.0; extra == 'dev'
  Requires-Dist: pytest-cov>=4.0.0; extra == 'dev'
  Requires-Dist: pytest>=7.0.0; extra == 'dev'
  Requires-Dist: ruff>=0.1.0; extra == 'dev'
- Provides-Extra: embeddings
- Requires-Dist: sentence-transformers>=2.2.0; extra == 'embeddings'
  Description-Content-Type: text/markdown

  # Corp Extractor
@@ -51,35 +49,40 @@ Extract structured subject-predicate-object statements from unstructured text us

  - **Structured Extraction**: Converts unstructured text into subject-predicate-object triples
  - **Entity Type Recognition**: Identifies 12 entity types (ORG, PERSON, GPE, LOC, PRODUCT, EVENT, etc.)
- - **Quality Scoring** *(v0.2.0)*: Each triple scored for groundedness (0-1) based on source text
- - **Beam Merging** *(v0.2.0)*: Combines top beams for better coverage instead of picking one
- - **Embedding-based Dedup** *(v0.2.0)*: Uses semantic similarity to detect near-duplicate predicates
- - **Predicate Taxonomies** *(v0.2.0)*: Normalize predicates to canonical forms via embeddings
- - **Contextualized Matching** *(v0.2.2)*: Compares full "Subject Predicate Object" against source text for better accuracy
- - **Entity Type Merging** *(v0.2.3)*: Automatically merges UNKNOWN entity types with specific types during deduplication
- - **Reversal Detection** *(v0.2.3)*: Detects and corrects subject-object reversals using embedding comparison
- - **Command Line Interface** *(v0.2.4)*: Full-featured CLI for terminal usage
+ - **GLiNER2 Integration** *(v0.4.0)*: Uses GLiNER2 (205M params) for entity recognition and relation extraction
+ - **Predefined Predicates** *(v0.4.0)*: Optional `--predicates` list for GLiNER2 relation extraction mode
+ - **Entity-based Scoring** *(v0.4.0)*: Confidence combines semantic similarity (50%) + entity recognition scores (25% each)
+ - **Multi-Candidate Extraction**: Generates 3 candidates per statement (hybrid, GLiNER2-only, predicate-split)
+ - **Best Triple Selection**: Keeps only highest-scoring triple per source (use `--all-triples` to keep all)
+ - **Extraction Method Tracking**: Each statement includes `extraction_method` field (hybrid, gliner, split, model)
+ - **Beam Merging**: Combines top beams for better coverage instead of picking one
+ - **Embedding-based Dedup**: Uses semantic similarity to detect near-duplicate predicates
+ - **Predicate Taxonomies**: Normalize predicates to canonical forms via embeddings
+ - **Contextualized Matching**: Compares full "Subject Predicate Object" against source text for better accuracy
+ - **Entity Type Merging**: Automatically merges UNKNOWN entity types with specific types during deduplication
+ - **Reversal Detection**: Detects and corrects subject-object reversals using embedding comparison
+ - **Command Line Interface**: Full-featured CLI for terminal usage
  - **Multiple Output Formats**: Get results as Pydantic models, JSON, XML, or dictionaries

  ## Installation

  ```bash
- # Recommended: include embedding support for smart deduplication
- pip install "corp-extractor[embeddings]"
-
- # Minimal installation (no embedding features)
  pip install corp-extractor
  ```

- **Note**: This package requires `transformers>=5.0.0` (pre-release) for T5-Gemma2 model support. Install with `--pre` flag if needed:
- ```bash
- pip install --pre "corp-extractor[embeddings]"
- ```
+ The GLiNER2 model (205M params) is downloaded automatically on first use.
+
+ **Note**: This package requires `transformers>=5.0.0` for T5-Gemma2 model support.

  **For GPU support**, install PyTorch with CUDA first:
  ```bash
  pip install torch --index-url https://download.pytorch.org/whl/cu121
- pip install "corp-extractor[embeddings]"
+ pip install corp-extractor
+ ```
+
+ **For Apple Silicon (M1/M2/M3)**, MPS acceleration is automatically detected:
+ ```bash
+ pip install corp-extractor  # MPS used automatically
  ```

  ## Quick Start
@@ -177,11 +180,14 @@ Options:
  --no-dedup                   Disable deduplication
  --no-embeddings              Disable embedding-based dedup (faster)
  --no-merge                   Disable beam merging
+ --no-gliner                  Disable GLiNER2 extraction (use raw model output)
+ --predicates TEXT            Comma-separated predicate types for GLiNER2 relation extraction
+ --all-triples                Keep all candidate triples (default: best per source)
  --dedup-threshold FLOAT      Deduplication threshold (default: 0.65)
  --min-confidence FLOAT       Min confidence filter (default: 0)
  --taxonomy PATH              Load predicate taxonomy from file
  --taxonomy-threshold FLOAT   Taxonomy matching threshold (default: 0.5)
- --device [auto|cuda|cpu]     Device to use (default: auto)
+ --device [auto|cuda|mps|cpu] Device to use (default: auto)
  -v, --verbose                Show confidence scores and metadata
  -q, --quiet                  Suppress progress messages
  --version                    Show version
@@ -279,7 +285,111 @@ for stmt in fixed_statements:

  During deduplication, reversed duplicates (e.g., "A -> P -> B" and "B -> P -> A") are now detected and merged, with the correct orientation determined by source text similarity.

- ## Disable Embeddings (Faster, No Extra Dependencies)
+ ## New in v0.4.0: GLiNER2 Integration
+
+ v0.4.0 replaces spaCy with **GLiNER2** (205M params) for entity recognition and relation extraction. GLiNER2 is a unified model that handles NER, text classification, structured data extraction, and relation extraction with CPU-optimized inference.
+
+ ### Why GLiNER2?
+
+ The T5-Gemma model excels at:
+ - **Triple isolation** - identifying that a relationship exists
+ - **Coreference resolution** - resolving pronouns to named entities
+
+ GLiNER2 now handles:
+ - **Entity recognition** - refining subject/object boundaries
+ - **Relation extraction** - when predefined predicates are provided
+ - **Entity scoring** - scoring how "entity-like" subjects/objects are
+
+ ### Two Extraction Modes
+
+ **Mode 1: With Predicate List** (GLiNER2 relation extraction)
+ ```python
+ from statement_extractor import extract_statements, ExtractionOptions
+
+ options = ExtractionOptions(predicates=["works_for", "founded", "acquired", "headquartered_in"])
+ result = extract_statements("John works for Apple Inc. in Cupertino.", options)
+ ```
+
+ Or via CLI:
+ ```bash
+ corp-extractor "John works for Apple Inc." --predicates "works_for,founded,acquired"
+ ```
+
+ **Mode 2: Without Predicate List** (entity-refined extraction)
+ ```python
+ result = extract_statements("Apple announced a new iPhone.")
+ # Uses GLiNER2 for entity extraction to refine boundaries
+ # Extracts predicate from source text using T5-Gemma's hint
+ ```
+
+ ### Three Candidate Extraction Methods
+
+ For each statement, three candidates are generated and the best is selected:
+
+ | Method | Description |
+ |--------|-------------|
+ | `hybrid` | Model subject/object + GLiNER2/extracted predicate |
+ | `gliner` | All components refined by GLiNER2 |
+ | `split` | Source text split around the predicate |
+
+ ```python
+ for stmt in result:
+     print(f"{stmt.subject.text} --[{stmt.predicate}]--> {stmt.object.text}")
+     print(f" Method: {stmt.extraction_method}")  # hybrid, gliner, split, or model
+     print(f" Confidence: {stmt.confidence_score:.2f}")
+ ```
+
+ ### Combined Quality Scoring
+
+ Confidence scores combine **semantic similarity** and **entity recognition**:
+
+ | Component | Weight | Description |
+ |-----------|--------|-------------|
+ | Semantic similarity | 50% | Cosine similarity between source text and reassembled triple |
+ | Subject entity score | 25% | How entity-like the subject is (via GLiNER2) |
+ | Object entity score | 25% | How entity-like the object is (via GLiNER2) |
+
+ **Entity scoring (via GLiNER2):**
+ - Recognized entity with high confidence: 1.0
+ - Recognized entity with moderate confidence: 0.8
+ - Partially recognized: 0.6
+ - Not recognized: 0.2
+
+ ### Extraction Method Tracking
+
+ Each statement includes an `extraction_method` field:
+ - `hybrid` - Model subject/object + GLiNER2 predicate
+ - `gliner` - All components refined by GLiNER2
+ - `split` - Subject/object from splitting source text around predicate
+ - `model` - All components from T5-Gemma model (only when `--no-gliner`)
+
+ ### Best Triple Selection
+
+ By default, only the **highest-scoring triple** is kept for each source sentence.
+
+ To keep all candidate triples:
+ ```python
+ options = ExtractionOptions(all_triples=True)
+ result = extract_statements(text, options)
+ ```
+
+ Or via CLI:
+ ```bash
+ corp-extractor "Your text" --all-triples --verbose
+ ```
+
+ **Disable GLiNER2 extraction** to use only model output:
+ ```python
+ options = ExtractionOptions(use_gliner_extraction=False)
+ result = extract_statements(text, options)
+ ```
+
+ Or via CLI:
+ ```bash
+ corp-extractor "Your text" --no-gliner
+ ```
+
+ ## Disable Embeddings

  ```python
  options = ExtractionOptions(
@@ -317,7 +427,7 @@ dict_output = extract_statements_as_dict(text)
  ```python
  from statement_extractor import StatementExtractor

- extractor = StatementExtractor(device="cuda")  # or "cpu"
+ extractor = StatementExtractor(device="cuda")  # or "mps" (Apple Silicon) or "cpu"

  texts = ["Text 1...", "Text 2...", "Text 3..."]
  for text in texts:
@@ -348,21 +458,23 @@ for text in texts:
  This library uses the T5-Gemma 2 statement extraction model with **Diverse Beam Search** ([Vijayakumar et al., 2016](https://arxiv.org/abs/1610.02424)):

  1. **Diverse Beam Search**: Generates 4+ candidate outputs using beam groups with diversity penalty
- 2. **Quality Scoring** *(v0.2.0)*: Each triple scored for groundedness in source text
- 3. **Beam Merging** *(v0.2.0)*: Top beams combined for better coverage
- 4. **Embedding Dedup** *(v0.2.0)*: Semantic similarity removes near-duplicate predicates
- 5. **Predicate Normalization** *(v0.2.0)*: Optional taxonomy matching via embeddings
- 6. **Contextualized Matching** *(v0.2.2)*: Full statement context used for canonicalization and dedup
- 7. **Entity Type Merging** *(v0.2.3)*: UNKNOWN types merged with specific types during dedup
- 8. **Reversal Detection** *(v0.2.3)*: Subject-object reversals detected and corrected via embedding comparison
+ 2. **Quality Scoring**: Each triple scored for groundedness in source text
+ 3. **Beam Merging**: Top beams combined for better coverage
+ 4. **Embedding Dedup**: Semantic similarity removes near-duplicate predicates
+ 5. **Predicate Normalization**: Optional taxonomy matching via embeddings
+ 6. **Contextualized Matching**: Full statement context used for canonicalization and dedup
+ 7. **Entity Type Merging**: UNKNOWN types merged with specific types during dedup
+ 8. **Reversal Detection**: Subject-object reversals detected and corrected via embedding comparison
+ 9. **GLiNER2 Extraction** *(v0.4.0)*: Entity recognition and relation extraction for improved accuracy

  ## Requirements

  - Python 3.10+
  - PyTorch 2.0+
- - Transformers 4.35+
+ - Transformers 5.0+
  - Pydantic 2.0+
- - sentence-transformers 2.2+ *(optional, for embedding features)*
+ - sentence-transformers 2.2+
+ - GLiNER2 (model downloaded automatically on first use)
  - ~2GB VRAM (GPU) or ~4GB RAM (CPU)

  ## Links
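The combined scoring documented in the README diff above is easy to sanity-check by hand. The sketch below reproduces the documented 50/25/25 weighting and the entity-score tiers; it is purely illustrative — the function and tier names are hypothetical, and only the weights and tier values come from the README:

```python
# Illustrative sketch only: mirrors the documented weighting, not the package's code.
# Tier names are made up; the values (1.0/0.8/0.6/0.2) come from the README.
ENTITY_TIERS = {
    "high": 1.0,      # recognized entity with high confidence
    "moderate": 0.8,  # recognized entity with moderate confidence
    "partial": 0.6,   # partially recognized
    "none": 0.2,      # not recognized
}

def combined_confidence(semantic_sim: float, subject_score: float, object_score: float) -> float:
    """50% semantic similarity + 25% subject entity score + 25% object entity score."""
    return 0.5 * semantic_sim + 0.25 * subject_score + 0.25 * object_score

# A well-grounded triple whose subject is cleanly recognized and whose object
# is recognized with moderate confidence:
score = combined_confidence(0.9, ENTITY_TIERS["high"], ENTITY_TIERS["moderate"])
print(round(score, 2))  # prints 0.9
```

Note how an unrecognized subject or object (tier 0.2) caps the achievable confidence well below 1.0 even with perfect semantic similarity, which is presumably why the "best triple per source" selection favors entity-refined candidates.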
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"

  [project]
  name = "corp-extractor"
- version = "0.2.8"
+ version = "0.4.0"
  description = "Extract structured statements from text using T5-Gemma 2 and Diverse Beam Search"
  readme = "README.md"
  requires-python = ">=3.10"
@@ -49,23 +49,17 @@ dependencies = [
      "transformers>=5.0.0rc3",
      "numpy>=1.24.0",
      "click>=8.0.0",
+     "sentence-transformers>=2.2.0",
+     "gliner2",
  ]

  [project.optional-dependencies]
- # Embedding-based predicate comparison (enabled by default in ExtractionOptions)
- embeddings = [
-     "sentence-transformers>=2.2.0",
- ]
  dev = [
      "pytest>=7.0.0",
      "pytest-cov>=4.0.0",
      "ruff>=0.1.0",
      "mypy>=1.0.0",
  ]
- # Full installation with all optional features
- all = [
-     "sentence-transformers>=2.2.0",
- ]

  [project.scripts]
  statement-extractor = "statement_extractor.cli:main"
@@ -29,12 +29,13 @@ Example:
      >>> data = extract_statements_as_dict("Some text...")
  """

- __version__ = "0.2.5"
+ __version__ = "0.3.0"

  # Core models
  from .models import (
      Entity,
      EntityType,
+     ExtractionMethod,
      ExtractionOptions,
      ExtractionResult,
      Statement,
@@ -73,6 +74,7 @@ __all__ = [
      # Core models
      "Entity",
      "EntityType",
+     "ExtractionMethod",
      "ExtractionOptions",
      "ExtractionResult",
      "Statement",
@@ -7,11 +7,37 @@ Usage:
      cat input.txt | corp-extractor -
  """

+ import logging
  import sys
  from typing import Optional

  import click

+
+ def _configure_logging(verbose: bool) -> None:
+     """Configure logging for the extraction pipeline."""
+     level = logging.DEBUG if verbose else logging.WARNING
+
+     # Configure root logger for statement_extractor package
+     logging.basicConfig(
+         level=level,
+         format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
+         datefmt="%H:%M:%S",
+         stream=sys.stderr,
+         force=True,
+     )
+
+     # Set level for all statement_extractor loggers
+     for logger_name in [
+         "statement_extractor",
+         "statement_extractor.extractor",
+         "statement_extractor.scoring",
+         "statement_extractor.predicate_comparer",
+         "statement_extractor.canonicalization",
+         "statement_extractor.gliner_extraction",
+     ]:
+         logging.getLogger(logger_name).setLevel(level)
+
  from . import __version__
  from .models import (
      ExtractionOptions,
@@ -40,6 +66,9 @@ from .models import (
  @click.option("--no-dedup", is_flag=True, help="Disable deduplication")
  @click.option("--no-embeddings", is_flag=True, help="Disable embedding-based deduplication (faster)")
  @click.option("--no-merge", is_flag=True, help="Disable beam merging (select single best beam)")
+ @click.option("--no-gliner", is_flag=True, help="Disable GLiNER2 extraction (use raw model output)")
+ @click.option("--predicates", type=str, help="Comma-separated list of predicate types for GLiNER2 relation extraction")
+ @click.option("--all-triples", is_flag=True, help="Keep all candidate triples instead of selecting best per source")
  @click.option("--dedup-threshold", type=float, default=0.65, help="Similarity threshold for deduplication (default: 0.65)")
  # Quality options
  @click.option("--min-confidence", type=float, default=0.0, help="Minimum confidence threshold 0-1 (default: 0)")
@@ -64,6 +93,9 @@ def main(
      no_dedup: bool,
      no_embeddings: bool,
      no_merge: bool,
+     no_gliner: bool,
+     predicates: Optional[str],
+     all_triples: bool,
      dedup_threshold: float,
      min_confidence: float,
      taxonomy: Optional[str],
@@ -91,6 +123,9 @@ def main(
          json     JSON with full metadata
          xml      Raw XML from model
      """
+     # Configure logging based on verbose flag
+     _configure_logging(verbose)
+
      # Determine output format
      if output_json:
          output = "json"
@@ -124,6 +159,13 @@ def main(
      # Configure scoring
      scoring_config = ScoringConfig(min_confidence=min_confidence)

+     # Parse predicates if provided
+     predicate_list = None
+     if predicates:
+         predicate_list = [p.strip() for p in predicates.split(",") if p.strip()]
+         if not quiet:
+             click.echo(f"Using predicate list: {predicate_list}", err=True)
+
      # Configure extraction options
      options = ExtractionOptions(
          num_beams=beams,
@@ -132,9 +174,13 @@ def main(
          deduplicate=not no_dedup,
          embedding_dedup=not no_embeddings,
          merge_beams=not no_merge,
+         use_gliner_extraction=not no_gliner,
+         predicates=predicate_list,
+         all_triples=all_triples,
          predicate_taxonomy=predicate_taxonomy,
          predicate_config=predicate_config,
          scoring_config=scoring_config,
+         verbose=verbose,
      )

      # Import here to allow --help without loading torch
@@ -160,6 +206,7 @@ def main(
          result = extractor.extract(input_text, options)
          _print_table(result, verbose)
      except Exception as e:
+         logging.exception("Error extracting statements:")
          raise click.ClickException(f"Extraction failed: {e}")


@@ -195,6 +242,9 @@ def _print_table(result, verbose: bool):
          click.echo(f" {stmt.object.text}{object_type}")

          if verbose:
+             # Always show extraction method
+             click.echo(f" Method: {stmt.extraction_method.value}")
+
              if stmt.confidence_score is not None:
                  click.echo(f" Confidence: {stmt.confidence_score:.2f}")