fast-sentence-segment 1.3.0__tar.gz → 1.4.1__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (34)
  1. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/PKG-INFO +4 -3
  2. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/README.md +2 -1
  3. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/cli.py +6 -1
  4. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/dmo/__init__.py +1 -0
  5. fast_sentence_segment-1.4.1/fast_sentence_segment/dmo/dehyphenator.py +55 -0
  6. fast_sentence_segment-1.4.1/fast_sentence_segment/dmo/unwrap_hard_wrapped_text.py +75 -0
  7. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/svc/perform_sentence_segmentation.py +6 -0
  8. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/pyproject.toml +2 -2
  9. fast_sentence_segment-1.4.1/setup.py +39 -0
  10. fast_sentence_segment-1.3.0/fast_sentence_segment/dmo/unwrap_hard_wrapped_text.py +0 -34
  11. fast_sentence_segment-1.3.0/setup.py +0 -39
  12. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/LICENSE +0 -0
  13. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/__init__.py +0 -0
  14. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/bp/__init__.py +0 -0
  15. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/bp/segmenter.py +0 -0
  16. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/core/__init__.py +0 -0
  17. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/core/base_object.py +0 -0
  18. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/core/stopwatch.py +0 -0
  19. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/dmo/abbreviation_merger.py +0 -0
  20. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/dmo/abbreviation_splitter.py +0 -0
  21. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/dmo/abbreviations.py +0 -0
  22. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/dmo/bullet_point_cleaner.py +0 -0
  23. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/dmo/ellipsis_normalizer.py +0 -0
  24. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/dmo/group_quoted_sentences.py +0 -0
  25. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/dmo/newlines_to_periods.py +0 -0
  26. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/dmo/normalize_quotes.py +0 -0
  27. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/dmo/numbered_list_normalizer.py +0 -0
  28. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/dmo/post_process_sentences.py +0 -0
  29. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/dmo/question_exclamation_splitter.py +0 -0
  30. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/dmo/spacy_doc_segmenter.py +0 -0
  31. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/dmo/strip_trailing_period_after_quote.py +0 -0
  32. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/dmo/title_name_merger.py +0 -0
  33. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/svc/__init__.py +0 -0
  34. {fast_sentence_segment-1.3.0 → fast_sentence_segment-1.4.1}/fast_sentence_segment/svc/perform_paragraph_segmentation.py +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: fast-sentence-segment
- Version: 1.3.0
+ Version: 1.4.1
  Summary: Fast and Efficient Sentence Segmentation
  Home-page: https://github.com/craigtrim/fast-sentence-segment
  License: MIT
@@ -9,7 +9,7 @@ Author: Craig Trim
  Author-email: craigtrim@gmail.com
  Maintainer: Craig Trim
  Maintainer-email: craigtrim@gmail.com
- Requires-Python: >=3.9,<4.0
+ Requires-Python: >=3.9,<3.13
  Classifier: Development Status :: 5 - Production/Stable
  Classifier: Intended Audience :: Developers
  Classifier: Intended Audience :: Science/Research
@@ -33,8 +33,9 @@ Description-Content-Type: text/markdown

  [![PyPI version](https://img.shields.io/pypi/v/fast-sentence-segment.svg)](https://pypi.org/project/fast-sentence-segment/)
  [![Python versions](https://img.shields.io/pypi/pyversions/fast-sentence-segment.svg)](https://pypi.org/project/fast-sentence-segment/)
+ [![CI](https://img.shields.io/github/actions/workflow/status/craigtrim/fast-sentence-segment/ci.yml?branch=master&label=CI)](https://github.com/craigtrim/fast-sentence-segment/actions/workflows/ci.yml)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
- [![spaCy](https://img.shields.io/badge/spaCy-3.8-blue.svg)](https://spacy.io/)
+ [![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
  [![Downloads](https://static.pepy.tech/badge/fast-sentence-segment)](https://pepy.tech/project/fast-sentence-segment)
  [![Downloads/Month](https://static.pepy.tech/badge/fast-sentence-segment/month)](https://pepy.tech/project/fast-sentence-segment)

@@ -2,8 +2,9 @@

  [![PyPI version](https://img.shields.io/pypi/v/fast-sentence-segment.svg)](https://pypi.org/project/fast-sentence-segment/)
  [![Python versions](https://img.shields.io/pypi/pyversions/fast-sentence-segment.svg)](https://pypi.org/project/fast-sentence-segment/)
+ [![CI](https://img.shields.io/github/actions/workflow/status/craigtrim/fast-sentence-segment/ci.yml?branch=master&label=CI)](https://github.com/craigtrim/fast-sentence-segment/actions/workflows/ci.yml)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
- [![spaCy](https://img.shields.io/badge/spaCy-3.8-blue.svg)](https://spacy.io/)
+ [![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
  [![Downloads](https://static.pepy.tech/badge/fast-sentence-segment)](https://pepy.tech/project/fast-sentence-segment)
  [![Downloads/Month](https://static.pepy.tech/badge/fast-sentence-segment/month)](https://pepy.tech/project/fast-sentence-segment)

@@ -62,6 +62,11 @@ def main():
          action="store_true",
          help="Number output lines",
      )
+     parser.add_argument(
+         "--unwrap",
+         action="store_true",
+         help="Unwrap hard-wrapped lines and dehyphenate split words",
+     )
      args = parser.parse_args()

      # Get input text
@@ -77,7 +82,7 @@ def main():
          sys.exit(1)

      # Segment and output
-     sentences = segment_text(text.strip(), flatten=True)
+     sentences = segment_text(text.strip(), flatten=True, unwrap=args.unwrap)
      for i, sentence in enumerate(sentences, 1):
          if args.numbered:
              print(f"{i}. {sentence}")
@@ -2,6 +2,7 @@ from .abbreviation_merger import AbbreviationMerger
  from .abbreviation_splitter import AbbreviationSplitter
  from .title_name_merger import TitleNameMerger
  from .bullet_point_cleaner import BulletPointCleaner
+ from .dehyphenator import Dehyphenator
  from .ellipsis_normalizer import EllipsisNormalizer
  from .newlines_to_periods import NewlinesToPeriods
  from .post_process_sentences import PostProcessStructure
@@ -0,0 +1,55 @@
+ # -*- coding: UTF-8 -*-
+ """Dehyphenate words split across lines.
+
+ Related GitHub Issue:
+     #8 - Add dehyphenation support for words split across lines
+     https://github.com/craigtrim/fast-sentence-segment/issues/8
+
+ When processing ebooks and scanned documents, words are often hyphenated
+ at line breaks for typesetting purposes. This module rejoins those words.
+ """
+
+ import re
+
+ from fast_sentence_segment.core import BaseObject
+
+ # Pattern to match hyphenated word breaks at end of line:
+ # - A single hyphen (not -- em-dash)
+ # - Followed by newline and optional whitespace
+ # - Followed by a lowercase letter (continuation of word)
+ _HYPHEN_LINE_BREAK_PATTERN = re.compile(r'(?<!-)-\n\s*([a-z])')
+
+
+ class Dehyphenator(BaseObject):
+     """Rejoin words that were hyphenated across line breaks."""
+
+     def __init__(self):
+         """Change Log
+
+         Created:
+             3-Feb-2026
+             craigtrim@gmail.com
+             * add dehyphenation support for words split across lines
+               https://github.com/craigtrim/fast-sentence-segment/issues/8
+         """
+         BaseObject.__init__(self, __name__)
+
+     @staticmethod
+     def process(input_text: str) -> str:
+         """Rejoin words that were hyphenated across line breaks.
+
+         Detects the pattern of a word fragment ending with a hyphen
+         at the end of a line, followed by the word continuation
+         starting with a lowercase letter on the next line.
+
+         Examples:
+             "bot-\\ntle" -> "bottle"
+             "cham-\\n bermaid" -> "chambermaid"
+
+         Args:
+             input_text: Text that may contain hyphenated line breaks.
+
+         Returns:
+             Text with hyphenated word breaks rejoined.
+         """
+         return _HYPHEN_LINE_BREAK_PATTERN.sub(r'\1', input_text)
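
Since `process` is a pure static method over the module-level regex, its behavior can be sanity-checked in isolation. A short sketch using only names from the diff above (the sample strings are illustrative):

    from fast_sentence_segment.dmo import Dehyphenator

    # Lowercase continuation after a single hyphen is rejoined
    print(Dehyphenator.process("a bot-\ntle of wine"))  # a bottle of wine

    # The (?<!-) lookbehind leaves double-hyphen (em-dash style) breaks alone
    print(Dehyphenator.process("well--\nknown"))  # unchanged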
@@ -0,0 +1,75 @@
+ # -*- coding: UTF-8 -*-
+ """Unwrap hard-wrapped text (e.g., Project Gutenberg e-texts).
+
+ Joins lines within paragraphs into continuous strings while
+ preserving paragraph boundaries (blank lines). Also dehyphenates
+ words that were split across lines for typesetting.
+
+ Related GitHub Issue:
+     #8 - Add dehyphenation support for words split across lines
+     https://github.com/craigtrim/fast-sentence-segment/issues/8
+ """
+
+ import re
+
+ # Pattern to match hyphenated word breaks at end of line:
+ # - A single hyphen (not -- em-dash)
+ # - Followed by newline and optional whitespace
+ # - Followed by a lowercase letter (continuation of word)
+ _HYPHEN_LINE_BREAK_PATTERN = re.compile(r'(?<!-)-\n\s*([a-z])')
+
+
+ def _dehyphenate_block(block: str) -> str:
+     """Remove hyphens from words split across lines.
+
+     Detects the pattern of a word fragment ending with a hyphen
+     at the end of a line, followed by the word continuation
+     starting with a lowercase letter on the next line.
+
+     Examples:
+         "bot-\\ntle" -> "bottle"
+         "cham-\\n bermaid" -> "chambermaid"
+
+     Args:
+         block: A paragraph block that may contain hyphenated line breaks.
+
+     Returns:
+         The block with hyphenated word breaks rejoined.
+     """
+     return _HYPHEN_LINE_BREAK_PATTERN.sub(r'\1', block)
+
+
+ def unwrap_hard_wrapped_text(text: str) -> str:
+     """Unwrap hard-wrapped paragraphs into continuous lines.
+
+     Splits on blank lines to identify paragraphs, then joins
+     lines within each paragraph into a single string with
+     single spaces. Also dehyphenates words that were split
+     across lines for typesetting purposes.
+
+     Examples:
+         >>> unwrap_hard_wrapped_text("a bot-\\ntle of wine")
+         'a bottle of wine'
+         >>> unwrap_hard_wrapped_text("line one\\nline two")
+         'line one line two'
+
+     Args:
+         text: Raw text with hard-wrapped lines.
+
+     Returns:
+         Text with paragraphs unwrapped into continuous strings,
+         separated by double newlines, with hyphenated words rejoined.
+     """
+     blocks = re.split(r'\n\s*\n', text)
+     unwrapped = []
+
+     for block in blocks:
+         # First, dehyphenate words split across lines
+         block = _dehyphenate_block(block)
+         # Then join remaining lines with spaces
+         lines = block.splitlines()
+         joined = ' '.join(line.strip() for line in lines if line.strip())
+         if joined:
+             unwrapped.append(joined)
+
+     return '\n\n'.join(unwrapped)
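
A quick sketch of the unwrapping behavior. The function is imported by its module path here, since the diff does not show it re-exported from `dmo/__init__.py`; the sample text is illustrative:

    from fast_sentence_segment.dmo.unwrap_hard_wrapped_text import (
        unwrap_hard_wrapped_text,
    )

    # Two hard-wrapped paragraphs; the second contains a hyphen-split word
    text = "It was a dark and\nstormy night.\n\nThe cham-\nbermaid slept."

    # The blank-line paragraph boundary survives; inner breaks are joined
    print(unwrap_hard_wrapped_text(text))
    # It was a dark and stormy night.
    #
    # The chambermaid slept.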
@@ -18,6 +18,7 @@ from fast_sentence_segment.dmo import QuestionExclamationSplitter
  from fast_sentence_segment.dmo import SpacyDocSegmenter
  from fast_sentence_segment.dmo import PostProcessStructure
  from fast_sentence_segment.dmo import StripTrailingPeriodAfterQuote
+ from fast_sentence_segment.dmo import Dehyphenator


  class PerformSentenceSegmentation(BaseObject):
@@ -46,6 +47,7 @@ class PerformSentenceSegmentation(BaseObject):
          if not self.__nlp:
              self.__nlp = spacy.load("en_core_web_sm")

+         self._dehyphenate = Dehyphenator.process
          self._newlines_to_periods = NewlinesToPeriods.process
          self._normalize_numbered_lists = NumberedListNormalizer().process
          self._normalize_ellipses = EllipsisNormalizer().process
@@ -96,6 +98,10 @@ class PerformSentenceSegmentation(BaseObject):
          # Normalize tabs to spaces
          input_text = input_text.replace('\t', ' ')

+         # Dehyphenate words split across lines (issue #8)
+         # Must happen before newlines are converted to periods
+         input_text = self._dehyphenate(input_text)
+
          input_text = self._normalize_numbered_lists(input_text)
          input_text = self._normalize_ellipses(input_text)
@@ -11,7 +11,7 @@ description = "Fast and Efficient Sentence Segmentation"
  license = "MIT"
  name = "fast-sentence-segment"
  readme = "README.md"
- version = "1.3.0"
+ version = "1.4.1"

  keywords = ["nlp", "text", "preprocess", "segment"]
  repository = "https://github.com/craigtrim/fast-sentence-segment"
@@ -37,7 +37,7 @@ classifiers = [
  "Bug Tracker" = "https://github.com/craigtrim/fast-sentence-segment/issues"

  [tool.poetry.dependencies]
- python = "^3.9"
+ python = ">=3.9,<3.13"
  spacy = "^3.8.0"

  [tool.poetry.dev-dependencies]
@@ -0,0 +1,39 @@
+ # -*- coding: utf-8 -*-
+ from setuptools import setup
+
+ packages = \
+ ['fast_sentence_segment',
+  'fast_sentence_segment.bp',
+  'fast_sentence_segment.core',
+  'fast_sentence_segment.dmo',
+  'fast_sentence_segment.svc']
+
+ package_data = \
+ {'': ['*']}
+
+ install_requires = \
+ ['spacy>=3.8.0,<4.0.0']
+
+ entry_points = \
+ {'console_scripts': ['segment = fast_sentence_segment.cli:main',
+                      'segment-file = fast_sentence_segment.cli:file_main']}
+
+ setup_kwargs = {
+     'name': 'fast-sentence-segment',
+     'version': '1.4.1',
+     'description': 'Fast and Efficient Sentence Segmentation',
+     'long_description': '# Fast Sentence Segmentation\n\n[![PyPI version](https://img.shields.io/pypi/v/fast-sentence-segment.svg)](https://pypi.org/project/fast-sentence-segment/)\n[![Python versions](https://img.shields.io/pypi/pyversions/fast-sentence-segment.svg)](https://pypi.org/project/fast-sentence-segment/)\n[![CI](https://img.shields.io/github/actions/workflow/status/craigtrim/fast-sentence-segment/ci.yml?branch=master&label=CI)](https://github.com/craigtrim/fast-sentence-segment/actions/workflows/ci.yml)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)\n[![Downloads](https://static.pepy.tech/badge/fast-sentence-segment)](https://pepy.tech/project/fast-sentence-segment)\n[![Downloads/Month](https://static.pepy.tech/badge/fast-sentence-segment/month)](https://pepy.tech/project/fast-sentence-segment)\n\nFast and efficient sentence segmentation using spaCy with surgical post-processing fixes. Handles complex edge cases like abbreviations (Dr., Mr., etc.), ellipses, quoted text, and multi-paragraph documents.\n\n## Why This Library?\n\n1. **Keep it local**: LLM API calls cost money and send your data to third parties. Run sentence segmentation entirely on your machine.\n2. **spaCy perfected**: spaCy is a great local model, but it makes mistakes. This library fixes most of spaCy\'s shortcomings.\n\n## Features\n\n- **Paragraph-aware segmentation**: Returns sentences grouped by paragraph\n- **Abbreviation handling**: Correctly handles "Dr.", "Mr.", "etc.", "p.m.", "a.m." without false splits\n- **Ellipsis preservation**: Keeps `...` intact while detecting sentence boundaries\n- **Question/exclamation splitting**: Properly splits on `?` and `!` followed by capital letters\n- **Cached processing**: LRU cache for repeated text processing\n- **Flexible output**: Nested lists (by paragraph) or flattened list of sentences\n- **Bullet point & numbered list normalization**: Cleans common list formats\n- **CLI tool**: Command-line interface for quick segmentation\n\n## Installation\n\n```bash\npip install fast-sentence-segment\n```\n\nAfter installation, download the spaCy model:\n\n```bash\npython -m spacy download en_core_web_sm\n```\n\n## Quick Start\n\n```python\nfrom fast_sentence_segment import segment_text\n\ntext = "Do you like Dr. Who? I prefer Dr. Strange! Mr. T is also cool."\n\nresults = segment_text(text, flatten=True)\n```\n\n```json\n[\n "Do you like Dr. Who?",\n "I prefer Dr. Strange!",\n "Mr. T is also cool."\n]\n```\n\nNotice how "Dr. Who?" stays together as a single sentence—the library correctly recognizes that a title followed by a single-word name ending in `?` or `!` is a name reference, not a sentence boundary.\n\n## Usage\n\n### Basic Segmentation\n\nThe `segment_text` function returns a list of lists, where each inner list represents a paragraph containing its sentences:\n\n```python\nfrom fast_sentence_segment import segment_text\n\ntext = """Gandalf spoke softly. "All we have to decide is what to do with the time given us."\n\nFrodo nodded. The weight of the Ring pressed against his chest."""\n\nresults = segment_text(text)\n```\n\n```json\n[\n [\n "Gandalf spoke softly.",\n "\\"All we have to decide is what to do with the time given us.\\"."\n ],\n [\n "Frodo nodded.",\n "The weight of the Ring pressed against his chest."\n ]\n]\n```\n\n### Flattened Output\n\nIf you don\'t need paragraph boundaries, use the `flatten` parameter:\n\n```python\ntext = "At 9 a.m. the hobbits set out. By 3 p.m. they reached Rivendell. Mr. Frodo was exhausted."\n\nresults = segment_text(text, flatten=True)\n```\n\n```json\n[\n "At 9 a.m. the hobbits set out.",\n "By 3 p.m. they reached Rivendell.",\n "Mr. Frodo was exhausted."\n]\n```\n\n### Direct Segmenter Access\n\nFor more control, use the `Segmenter` class directly:\n\n```python\nfrom fast_sentence_segment import Segmenter\n\nsegmenter = Segmenter()\nresults = segmenter.input_text("Your text here.")\n```\n\n### Command Line Interface\n\n```bash\n# Inline text\nsegment "Gandalf paused... You shall not pass! The Balrog roared."\n\n# Pipe from stdin\necho "Have you seen Dr. Who? It\'s brilliant!" | segment\n\n# Numbered output\nsegment -n -f silmarillion.txt\n\n# File-to-file (one sentence per line)\nsegment-file --input-file book.txt --output-file sentences.txt\n\n# Unwrap hard-wrapped e-texts (Project Gutenberg, etc.)\nsegment-file --input-file book.txt --output-file sentences.txt --unwrap\n```\n\n## API Reference\n\n| Function | Parameters | Returns | Description |\n|----------|------------|---------|-------------|\n| `segment_text()` | `input_text: str`, `flatten: bool = False`, `unwrap: bool = False` | `list` | Main entry point for segmentation |\n| `Segmenter.input_text()` | `input_text: str` | `list[list[str]]` | Cached paragraph-aware segmentation |\n\n### CLI Commands\n\n| Command | Description |\n|---------|-------------|\n| `segment [text]` | Segment text from argument, `-f FILE`, or stdin. Use `-n` for numbered output. |\n| `segment-file --input-file IN --output-file OUT [--unwrap]` | Segment a file and write one sentence per line. Use `--unwrap` for hard-wrapped e-texts. |\n\n## Why Nested Lists?\n\nThe segmentation process preserves document structure by segmenting into both paragraphs and sentences. Each outer list represents a paragraph, and each inner list contains that paragraph\'s sentences. This is useful for:\n\n- Document structure analysis\n- Paragraph-level processing\n- Maintaining original text organization\n\nUse `flatten=True` when you only need sentences without paragraph context.\n\n## Requirements\n\n- Python 3.9+\n- spaCy 3.8+\n- en_core_web_sm spaCy model\n\n## How It Works\n\nThis library uses spaCy for initial sentence segmentation, then applies surgical post-processing fixes for cases where spaCy\'s default behavior is incorrect:\n\n1. **Pre-processing**: Normalize numbered lists, preserve ellipses with placeholders\n2. **spaCy segmentation**: Use spaCy\'s sentence boundary detection\n3. **Post-processing**: Split on abbreviation boundaries, handle `?`/`!` + capital patterns\n4. **Denormalization**: Restore placeholders to original text\n\n## License\n\nMIT License - see [LICENSE](LICENSE) for details.\n\n## Contributing\n\nContributions are welcome! Please feel free to submit a Pull Request.\n\n1. Fork the repository\n2. Create your feature branch (`git checkout -b feature/amazing-feature`)\n3. Run tests (`make test`)\n4. Commit your changes\n5. Push to the branch\n6. Open a Pull Request\n',
+     'author': 'Craig Trim',
+     'author_email': 'craigtrim@gmail.com',
+     'maintainer': 'Craig Trim',
+     'maintainer_email': 'craigtrim@gmail.com',
+     'url': 'https://github.com/craigtrim/fast-sentence-segment',
+     'packages': packages,
+     'package_data': package_data,
+     'install_requires': install_requires,
+     'entry_points': entry_points,
+     'python_requires': '>=3.9,<3.13',
+ }
+
+
+ setup(**setup_kwargs)
@@ -1,34 +0,0 @@
- # -*- coding: UTF-8 -*-
- """Unwrap hard-wrapped text (e.g., Project Gutenberg e-texts).
-
- Joins lines within paragraphs into continuous strings while
- preserving paragraph boundaries (blank lines).
- """
-
- import re
-
-
- def unwrap_hard_wrapped_text(text: str) -> str:
-     """Unwrap hard-wrapped paragraphs into continuous lines.
-
-     Splits on blank lines to identify paragraphs, then joins
-     lines within each paragraph into a single string with
-     single spaces.
-
-     Args:
-         text: Raw text with hard-wrapped lines.
-
-     Returns:
-         Text with paragraphs unwrapped into continuous strings,
-         separated by double newlines.
-     """
-     blocks = re.split(r'\n\s*\n', text)
-     unwrapped = []
-
-     for block in blocks:
-         lines = block.splitlines()
-         joined = ' '.join(line.strip() for line in lines if line.strip())
-         if joined:
-             unwrapped.append(joined)
-
-     return '\n\n'.join(unwrapped)
@@ -1,39 +0,0 @@
- # -*- coding: utf-8 -*-
- from setuptools import setup
-
- packages = \
- ['fast_sentence_segment',
-  'fast_sentence_segment.bp',
-  'fast_sentence_segment.core',
-  'fast_sentence_segment.dmo',
-  'fast_sentence_segment.svc']
-
- package_data = \
- {'': ['*']}
-
- install_requires = \
- ['spacy>=3.8.0,<4.0.0']
-
- entry_points = \
- {'console_scripts': ['segment = fast_sentence_segment.cli:main',
-                      'segment-file = fast_sentence_segment.cli:file_main']}
-
- setup_kwargs = {
-     'name': 'fast-sentence-segment',
-     'version': '1.3.0',
-     'description': 'Fast and Efficient Sentence Segmentation',
-     'long_description': '# Fast Sentence Segmentation\n\n[![PyPI version](https://img.shields.io/pypi/v/fast-sentence-segment.svg)](https://pypi.org/project/fast-sentence-segment/)\n[![Python versions](https://img.shields.io/pypi/pyversions/fast-sentence-segment.svg)](https://pypi.org/project/fast-sentence-segment/)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n[![spaCy](https://img.shields.io/badge/spaCy-3.8-blue.svg)](https://spacy.io/)\n[![Downloads](https://static.pepy.tech/badge/fast-sentence-segment)](https://pepy.tech/project/fast-sentence-segment)\n[![Downloads/Month](https://static.pepy.tech/badge/fast-sentence-segment/month)](https://pepy.tech/project/fast-sentence-segment)\n\nFast and efficient sentence segmentation using spaCy with surgical post-processing fixes. Handles complex edge cases like abbreviations (Dr., Mr., etc.), ellipses, quoted text, and multi-paragraph documents.\n\n## Why This Library?\n\n1. **Keep it local**: LLM API calls cost money and send your data to third parties. Run sentence segmentation entirely on your machine.\n2. **spaCy perfected**: spaCy is a great local model, but it makes mistakes. This library fixes most of spaCy\'s shortcomings.\n\n## Features\n\n- **Paragraph-aware segmentation**: Returns sentences grouped by paragraph\n- **Abbreviation handling**: Correctly handles "Dr.", "Mr.", "etc.", "p.m.", "a.m." without false splits\n- **Ellipsis preservation**: Keeps `...` intact while detecting sentence boundaries\n- **Question/exclamation splitting**: Properly splits on `?` and `!` followed by capital letters\n- **Cached processing**: LRU cache for repeated text processing\n- **Flexible output**: Nested lists (by paragraph) or flattened list of sentences\n- **Bullet point & numbered list normalization**: Cleans common list formats\n- **CLI tool**: Command-line interface for quick segmentation\n\n## Installation\n\n```bash\npip install fast-sentence-segment\n```\n\nAfter installation, download the spaCy model:\n\n```bash\npython -m spacy download en_core_web_sm\n```\n\n## Quick Start\n\n```python\nfrom fast_sentence_segment import segment_text\n\ntext = "Do you like Dr. Who? I prefer Dr. Strange! Mr. T is also cool."\n\nresults = segment_text(text, flatten=True)\n```\n\n```json\n[\n "Do you like Dr. Who?",\n "I prefer Dr. Strange!",\n "Mr. T is also cool."\n]\n```\n\nNotice how "Dr. Who?" stays together as a single sentence—the library correctly recognizes that a title followed by a single-word name ending in `?` or `!` is a name reference, not a sentence boundary.\n\n## Usage\n\n### Basic Segmentation\n\nThe `segment_text` function returns a list of lists, where each inner list represents a paragraph containing its sentences:\n\n```python\nfrom fast_sentence_segment import segment_text\n\ntext = """Gandalf spoke softly. "All we have to decide is what to do with the time given us."\n\nFrodo nodded. The weight of the Ring pressed against his chest."""\n\nresults = segment_text(text)\n```\n\n```json\n[\n [\n "Gandalf spoke softly.",\n "\\"All we have to decide is what to do with the time given us.\\"."\n ],\n [\n "Frodo nodded.",\n "The weight of the Ring pressed against his chest."\n ]\n]\n```\n\n### Flattened Output\n\nIf you don\'t need paragraph boundaries, use the `flatten` parameter:\n\n```python\ntext = "At 9 a.m. the hobbits set out. By 3 p.m. they reached Rivendell. Mr. Frodo was exhausted."\n\nresults = segment_text(text, flatten=True)\n```\n\n```json\n[\n "At 9 a.m. the hobbits set out.",\n "By 3 p.m. they reached Rivendell.",\n "Mr. Frodo was exhausted."\n]\n```\n\n### Direct Segmenter Access\n\nFor more control, use the `Segmenter` class directly:\n\n```python\nfrom fast_sentence_segment import Segmenter\n\nsegmenter = Segmenter()\nresults = segmenter.input_text("Your text here.")\n```\n\n### Command Line Interface\n\n```bash\n# Inline text\nsegment "Gandalf paused... You shall not pass! The Balrog roared."\n\n# Pipe from stdin\necho "Have you seen Dr. Who? It\'s brilliant!" | segment\n\n# Numbered output\nsegment -n -f silmarillion.txt\n\n# File-to-file (one sentence per line)\nsegment-file --input-file book.txt --output-file sentences.txt\n\n# Unwrap hard-wrapped e-texts (Project Gutenberg, etc.)\nsegment-file --input-file book.txt --output-file sentences.txt --unwrap\n```\n\n## API Reference\n\n| Function | Parameters | Returns | Description |\n|----------|------------|---------|-------------|\n| `segment_text()` | `input_text: str`, `flatten: bool = False`, `unwrap: bool = False` | `list` | Main entry point for segmentation |\n| `Segmenter.input_text()` | `input_text: str` | `list[list[str]]` | Cached paragraph-aware segmentation |\n\n### CLI Commands\n\n| Command | Description |\n|---------|-------------|\n| `segment [text]` | Segment text from argument, `-f FILE`, or stdin. Use `-n` for numbered output. |\n| `segment-file --input-file IN --output-file OUT [--unwrap]` | Segment a file and write one sentence per line. Use `--unwrap` for hard-wrapped e-texts. |\n\n## Why Nested Lists?\n\nThe segmentation process preserves document structure by segmenting into both paragraphs and sentences. Each outer list represents a paragraph, and each inner list contains that paragraph\'s sentences. This is useful for:\n\n- Document structure analysis\n- Paragraph-level processing\n- Maintaining original text organization\n\nUse `flatten=True` when you only need sentences without paragraph context.\n\n## Requirements\n\n- Python 3.9+\n- spaCy 3.8+\n- en_core_web_sm spaCy model\n\n## How It Works\n\nThis library uses spaCy for initial sentence segmentation, then applies surgical post-processing fixes for cases where spaCy\'s default behavior is incorrect:\n\n1. **Pre-processing**: Normalize numbered lists, preserve ellipses with placeholders\n2. **spaCy segmentation**: Use spaCy\'s sentence boundary detection\n3. **Post-processing**: Split on abbreviation boundaries, handle `?`/`!` + capital patterns\n4. **Denormalization**: Restore placeholders to original text\n\n## License\n\nMIT License - see [LICENSE](LICENSE) for details.\n\n## Contributing\n\nContributions are welcome! Please feel free to submit a Pull Request.\n\n1. Fork the repository\n2. Create your feature branch (`git checkout -b feature/amazing-feature`)\n3. Run tests (`make test`)\n4. Commit your changes\n5. Push to the branch\n6. Open a Pull Request\n',
-     'author': 'Craig Trim',
-     'author_email': 'craigtrim@gmail.com',
-     'maintainer': 'Craig Trim',
-     'maintainer_email': 'craigtrim@gmail.com',
-     'url': 'https://github.com/craigtrim/fast-sentence-segment',
-     'packages': packages,
-     'package_data': package_data,
-     'install_requires': install_requires,
-     'entry_points': entry_points,
-     'python_requires': '>=3.9,<4.0',
- }
-
-
- setup(**setup_kwargs)