swarmauri_tool_lexicaldensity 0.9.0.dev3__tar.gz → 0.9.0.dev22__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,167 @@
1
+ Metadata-Version: 2.4
2
+ Name: swarmauri_tool_lexicaldensity
3
+ Version: 0.9.0.dev22
4
+ Summary: Lexical Density Tool for Swarmauri.
5
+ License-Expression: Apache-2.0
6
+ License-File: LICENSE
7
+ Keywords: swarmauri,tool,lexicaldensity,lexical,density
8
+ Author: Jacob Stewart
9
+ Author-email: jacob@swarmauri.com
10
+ Requires-Python: >=3.10,<3.13
11
+ Classifier: License :: OSI Approved :: Apache Software License
12
+ Classifier: Programming Language :: Python :: 3.10
13
+ Classifier: Programming Language :: Python :: 3.11
14
+ Classifier: Programming Language :: Python :: 3.12
15
+ Classifier: Natural Language :: English
16
+ Classifier: Development Status :: 3 - Alpha
17
+ Classifier: Intended Audience :: Developers
18
+ Classifier: Topic :: Software Development :: Libraries :: Application Frameworks
19
+ Requires-Dist: nltk (>=3.9.1)
20
+ Requires-Dist: swarmauri_base
21
+ Requires-Dist: swarmauri_core
22
+ Requires-Dist: swarmauri_standard
23
+ Requires-Dist: textstat (>=0.7.4)
24
+ Description-Content-Type: text/markdown
25
+
26
+ ![Swarmauri Logo](https://github.com/swarmauri/swarmauri-sdk/blob/3d4d1cfa949399d7019ae9d8f296afba773dfb7f/assets/swarmauri.brand.theme.svg)
27
+
28
+ <p align="center">
29
+ <a href="https://pypi.org/project/swarmauri_tool_lexicaldensity/">
30
+ <img src="https://img.shields.io/pypi/dm/swarmauri_tool_lexicaldensity" alt="PyPI - Downloads"/></a>
31
+ <a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_lexicaldensity/">
32
+ <img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_lexicaldensity.svg"/></a>
33
+ <a href="https://pypi.org/project/swarmauri_tool_lexicaldensity/">
34
+ <img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_lexicaldensity" alt="PyPI - Python Version"/></a>
35
+ <a href="https://pypi.org/project/swarmauri_tool_lexicaldensity/">
36
+ <img src="https://img.shields.io/pypi/l/swarmauri_tool_lexicaldensity" alt="PyPI - License"/></a>
37
+ <a href="https://pypi.org/project/swarmauri_tool_lexicaldensity/">
38
+ <img src="https://img.shields.io/pypi/v/swarmauri_tool_lexicaldensity?label=swarmauri_tool_lexicaldensity&color=green" alt="PyPI - swarmauri_tool_lexicaldensity"/></a>
39
+ </p>
40
+
41
+ ---
42
+
43
+ # Swarmauri Tool · Lexical Density
44
+
45
+ A Swarmauri-compatible NLP utility that measures the lexical density of text: the ratio of content words (nouns, verbs, adjectives, and adverbs) to the total token count. Use it to monitor writing complexity, automate readability checks, or surface style signals inside agent conversations.
46
+
47
+ - Tokenizes input with NLTK and tags parts of speech to isolate lexical items.
48
+ - Returns a percentage score so changes in density are easy to compare between drafts.
49
+ - Ships as a Swarmauri tool, ready for registration inside agents or pipelines.
50
+
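Under the hood, the score is simply the share of content-word POS tags among all tokens, scaled to a percentage. A minimal sketch of that ratio, using a hand-tagged token list in place of NLTK's tokenizer and tagger (the Penn Treebank tags below are illustrative, not output of the tool):

```python
# Hand-tagged (token, POS) pairs for illustration; the real tool obtains
# tags at runtime via NLTK's tokenizer and part-of-speech tagger.
tagged = [
    ("This", "DT"), ("report", "NN"), ("summarizes", "VBZ"),
    ("quarterly", "JJ"), ("revenue", "NN"), ("growth", "NN"),
]

# Penn Treebank tag prefixes for content words:
# nouns (NN*), verbs (VB*), adjectives (JJ*), adverbs (RB*).
CONTENT_PREFIXES = ("NN", "VB", "JJ", "RB")

content = sum(1 for _, tag in tagged if tag.startswith(CONTENT_PREFIXES))
density = content / len(tagged) * 100
print(f"{density:.1f}")  # → 83.3 (5 of 6 tokens are content words)
```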
51
+ ## Requirements
52
+
53
+ - Python 3.10 – 3.12 (the package declares `Requires-Python: >=3.10,<3.13`).
54
+ - `nltk` with the `punkt_tab` and `averaged_perceptron_tagger_eng` resources available (downloaded at runtime).
55
+ - `textstat` for robust lexicon counting.
56
+ - Core Swarmauri dependencies (`swarmauri_core`, `swarmauri_base`, `swarmauri_standard`).
57
+
58
+ ## Installation
59
+
60
+ Pick the installer that matches your project; each command resolves the transitive requirements.
61
+
62
+ **pip**
63
+
64
+ ```bash
65
+ pip install swarmauri_tool_lexicaldensity
66
+ ```
67
+
68
+ **Poetry**
69
+
70
+ ```bash
71
+ poetry add swarmauri_tool_lexicaldensity
72
+ ```
73
+
74
+ **uv**
75
+
76
+ ```bash
77
+ # Add to the current project and update uv.lock
78
+ uv add swarmauri_tool_lexicaldensity
79
+
80
+ # or install into the running environment without touching pyproject.toml
81
+ uv pip install swarmauri_tool_lexicaldensity
82
+ ```
83
+
84
+ > Tip: If you deploy to an offline environment, download the required NLTK resources at build time (`python -m nltk.downloader punkt_tab averaged_perceptron_tagger_eng`).
85
+
86
+ ## Quick Start
87
+
88
+ ```python
89
+ from swarmauri_tool_lexicaldensity import LexicalDensityTool
90
+
91
+ text = "This report summarizes quarterly revenue growth across all segments."
92
+
93
+ lexical_density_tool = LexicalDensityTool()
94
+ result = lexical_density_tool(text)
95
+
96
+ print(result)
97
+ # {'lexical_density': 58.333333333333336}
98
+ ```
99
+
100
+ The tool returns a floating-point percentage. Use the same instance to score multiple passages.
101
+
102
+ ## Usage Scenarios
103
+
104
+ ### Enforce Style Guidelines in Content Pipelines
105
+
106
+ ```python
107
+ from swarmauri_tool_lexicaldensity import LexicalDensityTool
108
+
109
+ product_copy = "Introducing our new AI-powered workstation with modular expansion."
110
+
111
+ checker = LexicalDensityTool()
112
+ score = checker(product_copy)["lexical_density"]
113
+
114
+ if score < 40:
115
+ raise ValueError(f"Copy too simple (density={score:.1f}%). Add more substantive language.")
116
+ ```
117
+
118
+ Gate marketing copy or documentation PRs based on desired complexity thresholds.
119
+
120
+ ### Analyze Conversations in a Swarmauri Agent
121
+
122
+ ```python
123
+ from swarmauri_core.agent.Agent import Agent
124
+ from swarmauri_standard.tools.registry import ToolRegistry
125
+ from swarmauri_tool_lexicaldensity import LexicalDensityTool
126
+
127
+ registry = ToolRegistry()
128
+ registry.register(LexicalDensityTool())
129
+ agent = Agent(tool_registry=registry)
130
+
131
+ utterance = "Could you elaborate on the architectural trade-offs in the data plane?"
132
+ result = agent.tools["LexicalDensityTool"](utterance)
133
+ print(result)
134
+ ```
135
+
136
+ Use lexical density as a signal to adjust agent tone or escalate queries to human operators.
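The escalation idea can be sketched as a simple threshold router; the cutoffs and strategy names here are illustrative, not part of the tool:

```python
def route(density: float, low: float = 35.0, high: float = 65.0) -> str:
    """Map a lexical-density percentage to a handling strategy.

    The cutoffs are hypothetical defaults; tune them on your own traffic.
    """
    if density < low:
        return "simplify-tone"      # casual input: keep replies plain
    if density > high:
        return "escalate-to-human"  # dense, technical query
    return "default"

print(route(72.0))  # → escalate-to-human
```

Feed `route()` the `"lexical_density"` value returned by the tool for each utterance.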
137
+
138
+ ### Batch Score Documents and Track Trends
139
+
140
+ ```python
141
+ from pathlib import Path
142
+ from swarmauri_tool_lexicaldensity import LexicalDensityTool
143
+
144
+ lexical_density = LexicalDensityTool()
145
+ corpus_dir = Path("reports/")
146
+
147
+ scores = []
148
+ for doc in corpus_dir.glob("*.txt"):
149
+ text = doc.read_text(encoding="utf-8")
150
+ scores.append((doc.name, lexical_density(text)["lexical_density"]))
151
+
152
+ for name, score in sorted(scores, key=lambda item: item[1], reverse=True):
153
+ print(f"{name}: {score:.2f}% lexical words")
154
+ ```
155
+
156
+ Monitor writing complexity across a corpus of articles or support responses.
157
+
158
+ ## Troubleshooting
159
+
160
+ - **`LookupError` for NLTK resources** – Ensure `punkt_tab` and `averaged_perceptron_tagger_eng` are downloaded prior to calling the tool (see `nltk.download(...)`).
161
+ - **Low density on short texts** – Very short messages yield coarse percentages. Aggregate multiple utterances or relax thresholds for brief content.
162
+ - **Non-English text** – POS tagging models target English. Swap in language-specific models before using the tool with multilingual corpora.
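For the first issue, a defensive pattern is to probe for each resource and download it only when missing. A sketch, assuming the standard NLTK data layout (`tokenizers/...`, `taggers/...`):

```python
import nltk

def ensure_nltk_resources() -> None:
    """Download the tool's NLTK resources once; no-op when already cached."""
    needed = [
        ("tokenizers/punkt_tab", "punkt_tab"),
        ("taggers/averaged_perceptron_tagger_eng", "averaged_perceptron_tagger_eng"),
    ]
    for lookup_path, resource in needed:
        try:
            nltk.data.find(lookup_path)  # raises LookupError when absent
        except LookupError:
            nltk.download(resource, quiet=True)
```

Call `ensure_nltk_resources()` once at application startup, before the first scoring call.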
163
+
164
+ ## License
165
+
166
+ `swarmauri_tool_lexicaldensity` is released under the Apache 2.0 License. See `LICENSE` for full details.
167
+
@@ -0,0 +1,141 @@
1
+ ![Swarmauri Logo](https://github.com/swarmauri/swarmauri-sdk/blob/3d4d1cfa949399d7019ae9d8f296afba773dfb7f/assets/swarmauri.brand.theme.svg)
2
+
3
+ <p align="center">
4
+ <a href="https://pypi.org/project/swarmauri_tool_lexicaldensity/">
5
+ <img src="https://img.shields.io/pypi/dm/swarmauri_tool_lexicaldensity" alt="PyPI - Downloads"/></a>
6
+ <a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_lexicaldensity/">
7
+ <img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_lexicaldensity.svg"/></a>
8
+ <a href="https://pypi.org/project/swarmauri_tool_lexicaldensity/">
9
+ <img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_lexicaldensity" alt="PyPI - Python Version"/></a>
10
+ <a href="https://pypi.org/project/swarmauri_tool_lexicaldensity/">
11
+ <img src="https://img.shields.io/pypi/l/swarmauri_tool_lexicaldensity" alt="PyPI - License"/></a>
12
+ <a href="https://pypi.org/project/swarmauri_tool_lexicaldensity/">
13
+ <img src="https://img.shields.io/pypi/v/swarmauri_tool_lexicaldensity?label=swarmauri_tool_lexicaldensity&color=green" alt="PyPI - swarmauri_tool_lexicaldensity"/></a>
14
+ </p>
15
+
16
+ ---
17
+
18
+ # Swarmauri Tool · Lexical Density
19
+
20
+ A Swarmauri-compatible NLP utility that measures the lexical density of text: the ratio of content words (nouns, verbs, adjectives, and adverbs) to the total token count. Use it to monitor writing complexity, automate readability checks, or surface style signals inside agent conversations.
21
+
22
+ - Tokenizes input with NLTK and tags parts of speech to isolate lexical items.
23
+ - Returns a percentage score so changes in density are easy to compare between drafts.
24
+ - Ships as a Swarmauri tool, ready for registration inside agents or pipelines.
25
+
26
+ ## Requirements
27
+
28
+ - Python 3.10 – 3.12 (the package declares `Requires-Python: >=3.10,<3.13`).
29
+ - `nltk` with the `punkt_tab` and `averaged_perceptron_tagger_eng` resources available (downloaded at runtime).
30
+ - `textstat` for robust lexicon counting.
31
+ - Core Swarmauri dependencies (`swarmauri_core`, `swarmauri_base`, `swarmauri_standard`).
32
+
33
+ ## Installation
34
+
35
+ Pick the installer that matches your project; each command resolves the transitive requirements.
36
+
37
+ **pip**
38
+
39
+ ```bash
40
+ pip install swarmauri_tool_lexicaldensity
41
+ ```
42
+
43
+ **Poetry**
44
+
45
+ ```bash
46
+ poetry add swarmauri_tool_lexicaldensity
47
+ ```
48
+
49
+ **uv**
50
+
51
+ ```bash
52
+ # Add to the current project and update uv.lock
53
+ uv add swarmauri_tool_lexicaldensity
54
+
55
+ # or install into the running environment without touching pyproject.toml
56
+ uv pip install swarmauri_tool_lexicaldensity
57
+ ```
58
+
59
+ > Tip: If you deploy to an offline environment, download the required NLTK resources at build time (`python -m nltk.downloader punkt_tab averaged_perceptron_tagger_eng`).
60
+
61
+ ## Quick Start
62
+
63
+ ```python
64
+ from swarmauri_tool_lexicaldensity import LexicalDensityTool
65
+
66
+ text = "This report summarizes quarterly revenue growth across all segments."
67
+
68
+ lexical_density_tool = LexicalDensityTool()
69
+ result = lexical_density_tool(text)
70
+
71
+ print(result)
72
+ # {'lexical_density': 58.333333333333336}
73
+ ```
74
+
75
+ The tool returns a floating-point percentage. Use the same instance to score multiple passages.
76
+
77
+ ## Usage Scenarios
78
+
79
+ ### Enforce Style Guidelines in Content Pipelines
80
+
81
+ ```python
82
+ from swarmauri_tool_lexicaldensity import LexicalDensityTool
83
+
84
+ product_copy = "Introducing our new AI-powered workstation with modular expansion."
85
+
86
+ checker = LexicalDensityTool()
87
+ score = checker(product_copy)["lexical_density"]
88
+
89
+ if score < 40:
90
+ raise ValueError(f"Copy too simple (density={score:.1f}%). Add more substantive language.")
91
+ ```
92
+
93
+ Gate marketing copy or documentation PRs based on desired complexity thresholds.
94
+
95
+ ### Analyze Conversations in a Swarmauri Agent
96
+
97
+ ```python
98
+ from swarmauri_core.agent.Agent import Agent
99
+ from swarmauri_standard.tools.registry import ToolRegistry
100
+ from swarmauri_tool_lexicaldensity import LexicalDensityTool
101
+
102
+ registry = ToolRegistry()
103
+ registry.register(LexicalDensityTool())
104
+ agent = Agent(tool_registry=registry)
105
+
106
+ utterance = "Could you elaborate on the architectural trade-offs in the data plane?"
107
+ result = agent.tools["LexicalDensityTool"](utterance)
108
+ print(result)
109
+ ```
110
+
111
+ Use lexical density as a signal to adjust agent tone or escalate queries to human operators.
112
+
113
+ ### Batch Score Documents and Track Trends
114
+
115
+ ```python
116
+ from pathlib import Path
117
+ from swarmauri_tool_lexicaldensity import LexicalDensityTool
118
+
119
+ lexical_density = LexicalDensityTool()
120
+ corpus_dir = Path("reports/")
121
+
122
+ scores = []
123
+ for doc in corpus_dir.glob("*.txt"):
124
+ text = doc.read_text(encoding="utf-8")
125
+ scores.append((doc.name, lexical_density(text)["lexical_density"]))
126
+
127
+ for name, score in sorted(scores, key=lambda item: item[1], reverse=True):
128
+ print(f"{name}: {score:.2f}% lexical words")
129
+ ```
130
+
131
+ Monitor writing complexity across a corpus of articles or support responses.
132
+
133
+ ## Troubleshooting
134
+
135
+ - **`LookupError` for NLTK resources** – Ensure `punkt_tab` and `averaged_perceptron_tagger_eng` are downloaded prior to calling the tool (see `nltk.download(...)`).
136
+ - **Low density on short texts** – Very short messages yield coarse percentages. Aggregate multiple utterances or relax thresholds for brief content.
137
+ - **Non-English text** – POS tagging models target English. Swap in language-specific models before using the tool with multilingual corpora.
138
+
139
+ ## License
140
+
141
+ `swarmauri_tool_lexicaldensity` is released under the Apache 2.0 License. See `LICENSE` for full details.
@@ -1,6 +1,6 @@
1
1
  [project]
2
2
  name = "swarmauri_tool_lexicaldensity"
3
- version = "0.9.0.dev3"
3
+ version = "0.9.0.dev22"
4
4
  description = "Lexical Density Tool for Swarmauri."
5
5
  license = "Apache-2.0"
6
6
  readme = "README.md"
@@ -11,6 +11,10 @@ classifiers = [
11
11
  "Programming Language :: Python :: 3.10",
12
12
  "Programming Language :: Python :: 3.11",
13
13
  "Programming Language :: Python :: 3.12",
14
+ "Natural Language :: English",
15
+ "Development Status :: 3 - Alpha",
16
+ "Intended Audience :: Developers",
17
+ "Topic :: Software Development :: Libraries :: Application Frameworks",
14
18
  ]
15
19
  authors = [{ name = "Jacob Stewart", email = "jacob@swarmauri.com" }]
16
20
  dependencies = [
@@ -20,6 +24,13 @@ dependencies = [
20
24
  "swarmauri_base",
21
25
  "swarmauri_standard",
22
26
  ]
27
+ keywords = [
28
+ "swarmauri",
29
+ "tool",
30
+ "lexicaldensity",
31
+ "lexical",
32
+ "density",
33
+ ]
23
34
 
24
35
  [tool.uv.sources]
25
36
  swarmauri_core = { workspace = true }
@@ -1,64 +0,0 @@
1
- Metadata-Version: 2.3
2
- Name: swarmauri_tool_lexicaldensity
3
- Version: 0.9.0.dev3
4
- Summary: Lexical Density Tool for Swarmauri.
5
- License: Apache-2.0
6
- Author: Jacob Stewart
7
- Author-email: jacob@swarmauri.com
8
- Requires-Python: >=3.10,<3.13
9
- Classifier: License :: OSI Approved :: Apache Software License
10
- Classifier: Programming Language :: Python :: 3.10
11
- Classifier: Programming Language :: Python :: 3.11
12
- Classifier: Programming Language :: Python :: 3.12
13
- Requires-Dist: nltk (>=3.9.1)
14
- Requires-Dist: swarmauri_base
15
- Requires-Dist: swarmauri_core
16
- Requires-Dist: swarmauri_standard
17
- Requires-Dist: textstat (>=0.7.4)
18
- Description-Content-Type: text/markdown
19
-
20
-
21
- ![Swamauri Logo](https://res.cloudinary.com/dbjmpekvl/image/upload/v1730099724/Swarmauri-logo-lockup-2048x757_hww01w.png)
22
-
23
- <p align="center">
24
- <a href="https://pypi.org/project/swarmauri_tool_lexicaldensity/">
25
- <img src="https://img.shields.io/pypi/dm/swarmauri_tool_lexicaldensity" alt="PyPI - Downloads"/></a>
26
- <a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_lexicaldensity/">
27
- <img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_lexicaldensity.svg"/></a>
28
- <a href="https://pypi.org/project/swarmauri_tool_lexicaldensity/">
29
- <img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_lexicaldensity" alt="PyPI - Python Version"/></a>
30
- <a href="https://pypi.org/project/swarmauri_tool_lexicaldensity/">
31
- <img src="https://img.shields.io/pypi/l/swarmauri_tool_lexicaldensity" alt="PyPI - License"/></a>
32
- <a href="https://pypi.org/project/swarmauri_tool_lexicaldensity/">
33
- <img src="https://img.shields.io/pypi/v/swarmauri_tool_lexicaldensity?label=swarmauri_tool_lexicaldensity&color=green" alt="PyPI - swarmauri_tool_lexicaldensity"/></a>
34
- </p>
35
-
36
- ---
37
-
38
- # Swarmauri Tool Lexical Density
39
-
40
- A tool for calculating the lexical density of text, indicating the proportion of content words (nouns, verbs, adjectives, and adverbs) relative to the total number of words.
41
-
42
- ## Installation
43
-
44
- ```bash
45
- pip install swarmauri_tool_lexicaldensity
46
- ```
47
-
48
- ## Usage
49
- ```python
50
- from swarmauri.tools.LexicalDensityTool import LexicalDensityTool
51
-
52
- # Initialize the tool
53
- tool = LexicalDensityTool()
54
-
55
- # Calculate lexical density
56
- text = "This is a test sentence."
57
- result = tool(text)
58
- print(result) # Returns: {'lexical_density': <score>}
59
- ```
60
-
61
- ## Want to help?
62
-
63
- If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
64
-
@@ -1,44 +0,0 @@
1
-
2
- ![Swamauri Logo](https://res.cloudinary.com/dbjmpekvl/image/upload/v1730099724/Swarmauri-logo-lockup-2048x757_hww01w.png)
3
-
4
- <p align="center">
5
- <a href="https://pypi.org/project/swarmauri_tool_lexicaldensity/">
6
- <img src="https://img.shields.io/pypi/dm/swarmauri_tool_lexicaldensity" alt="PyPI - Downloads"/></a>
7
- <a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_lexicaldensity/">
8
- <img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_lexicaldensity.svg"/></a>
9
- <a href="https://pypi.org/project/swarmauri_tool_lexicaldensity/">
10
- <img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_lexicaldensity" alt="PyPI - Python Version"/></a>
11
- <a href="https://pypi.org/project/swarmauri_tool_lexicaldensity/">
12
- <img src="https://img.shields.io/pypi/l/swarmauri_tool_lexicaldensity" alt="PyPI - License"/></a>
13
- <a href="https://pypi.org/project/swarmauri_tool_lexicaldensity/">
14
- <img src="https://img.shields.io/pypi/v/swarmauri_tool_lexicaldensity?label=swarmauri_tool_lexicaldensity&color=green" alt="PyPI - swarmauri_tool_lexicaldensity"/></a>
15
- </p>
16
-
17
- ---
18
-
19
- # Swarmauri Tool Lexical Density
20
-
21
- A tool for calculating the lexical density of text, indicating the proportion of content words (nouns, verbs, adjectives, and adverbs) relative to the total number of words.
22
-
23
- ## Installation
24
-
25
- ```bash
26
- pip install swarmauri_tool_lexicaldensity
27
- ```
28
-
29
- ## Usage
30
- ```python
31
- from swarmauri.tools.LexicalDensityTool import LexicalDensityTool
32
-
33
- # Initialize the tool
34
- tool = LexicalDensityTool()
35
-
36
- # Calculate lexical density
37
- text = "This is a test sentence."
38
- result = tool(text)
39
- print(result) # Returns: {'lexical_density': <score>}
40
- ```
41
-
42
- ## Want to help?
43
-
44
- If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.