hamtaa-texttools 1.3.2__py3-none-any.whl → 2.1.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: hamtaa-texttools
- Version: 1.3.2
+ Version: 2.1.0
  Summary: A high-level NLP toolkit built on top of modern LLMs.
  Author-email: Tohidi <the.mohammad.tohidi@gmail.com>, Erfan Moosavi <erfanmoosavi84@gmail.com>, Montazer <montazerh82@gmail.com>, Givechi <mohamad.m.givechi@gmail.com>, Zareshahi <a.zareshahi1377@gmail.com>
  Maintainer-email: Erfan Moosavi <erfanmoosavi84@gmail.com>, Tohidi <the.mohammad.tohidi@gmail.com>
@@ -11,9 +11,10 @@ Classifier: License :: OSI Approved :: MIT License
  Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
  Classifier: Topic :: Text Processing
  Classifier: Operating System :: OS Independent
- Requires-Python: >=3.9
+ Requires-Python: >=3.11
  Description-Content-Type: text/markdown
  License-File: LICENSE
+ Requires-Dist: dotenv>=0.9.9
  Requires-Dist: openai>=1.97.1
  Requires-Dist: pydantic>=2.0.0
  Requires-Dist: pyyaml>=6.0
@@ -30,30 +31,27 @@ Dynamic: license-file

  It provides both **sync (`TheTool`)** and **async (`AsyncTheTool`)** APIs for maximum flexibility.

- It provides ready-to-use utilities for **translation, question detection, keyword extraction, categorization, NER extraction, and more** - designed to help you integrate AI-powered text processing into your applications with minimal effort.
-
- **Note:** Most features of `texttools` are reliable when you use `google/gemma-3n-e4b-it` model.
+ It provides ready-to-use utilities for **translation, question detection, categorization, NER extraction, and more** - designed to help you integrate AI-powered text processing into your applications with minimal effort.

  ---

  ## ✨ Features

- TextTools provides a rich collection of high-level NLP utilities,
+ TextTools provides a collection of high-level NLP utilities.
  Each tool is designed to work with structured outputs.

- - **`categorize()`** - Classifies text into given categories
- - **`extract_keywords()`** - Extracts keywords from the text
- - **`extract_entities()`** - Named Entity Recognition (NER) system
- - **`is_question()`** - Binary question detection
- - **`text_to_question()`** - Generates questions from text
- - **`merge_questions()`** - Merges multiple questions into one
- - **`rewrite()`** - Rewrites text in a different way
- - **`subject_to_question()`** - Generates questions about a given subject
- - **`summarize()`** - Text summarization
- - **`translate()`** - Text translation
- - **`propositionize()`** - Convert text to atomic independent meaningful sentences
- - **`check_fact()`** - Check whether a statement is relevant to the source text
- - **`run_custom()`** - Allows users to define a custom tool with an arbitrary BaseModel
+ - **`categorize()`** - Classify text into given categories
+ - **`extract_keywords()`** - Extract keywords from the text
+ - **`extract_entities()`** - Perform Named Entity Recognition (NER)
+ - **`is_question()`** - Detect if the input is phrased as a question
+ - **`to_question()`** - Generate questions from the given text / subject
+ - **`merge_questions()`** - Merge multiple questions into one
+ - **`augment()`** - Rewrite text in different augmentations
+ - **`summarize()`** - Summarize the given text
+ - **`translate()`** - Translate text between languages
+ - **`propositionize()`** - Convert a text into atomic, independent, meaningful sentences
+ - **`is_fact()`** - Check whether a statement is a fact based on the source text
+ - **`run_custom()`** - Custom tool that can do almost anything

  ---

@@ -71,14 +69,14 @@ pip install -U hamtaa-texttools

  | Status | Meaning | Tools | Safe for Production? |
  |--------|---------|----------|-------------------|
- | **✅ Production** | Evaluated, tested, stable. | `categorize()` (list mode), `extract_keywords()`, `extract_entities()`, `is_question()`, `text_to_question()`, `merge_questions()`, `rewrite()`, `subject_to_question()`, `summarize()`, `run_custom()` | **Yes** - ready for reliable use. |
- | **🧪 Experimental** | Added to the package but **not fully evaluated**. Functional, but quality may vary. | `categorize()` (tree mode), `translate()`, `propositionize()`, `check_fact()` | **Use with caution** - outputs not yet validated. |
+ | **✅ Production** | Evaluated and tested. | `categorize()`, `extract_keywords()`, `extract_entities()`, `is_question()`, `to_question()`, `merge_questions()`, `augment()`, `summarize()`, `run_custom()` | **Yes** - ready for reliable use. |
+ | **🧪 Experimental** | Added to the package but **not fully evaluated**. | `translate()`, `propositionize()`, `is_fact()` | **Use with caution** |

  ---

- ## ⚙️ `with_analysis`, `logprobs`, `output_lang`, `user_prompt`, `temperature`, `validator`, `priority` and `timeout` parameters
+ ## ⚙️ Additional Parameters

- TextTools provides several optional flags to customize LLM behavior:
+ - **`raise_on_error: bool`** (`TheTool/AsyncTheTool` parameter) Raise errors (True) or return them in output (False). Default is True.

  - **`with_analysis: bool`** → Adds a reasoning step before generating the final output.
  **Note:** This doubles token usage per call.
@@ -88,17 +86,17 @@ TextTools provides several optional flags to customize LLM behavior:

  - **`output_lang: str`** → Forces the model to respond in a specific language.

- - **`user_prompt: str`** → Allows you to inject a custom instruction or into the model alongside the main template. This gives you fine-grained control over how the model interprets or modifies the input text.
+ - **`user_prompt: str`** → Allows you to inject a custom instruction into the model alongside the main template.

- - **`temperature: float`** → Determines how creative the model should respond. Takes a float number from `0.0` to `2.0`.
+ - **`temperature: float`** → Determines how creative the model should respond. Takes a float number between `0.0` and `2.0`.

- - **`validator: Callable (Experimental)`** → Forces TheTool to validate the output result based on your custom validator. Validator should return a boolean. If the validator fails, TheTool will retry to get another output by modifying `temperature`. You can also specify `max_validation_retries=<N>`.
+ - **`validator: Callable (Experimental)`** → Forces the tool to validate the output result based on your validator function. Validator should return a boolean. If the validator fails, TheTool will retry to get another output by modifying `temperature`. You can also specify `max_validation_retries=<N>`.

- - **`priority: int (Experimental)`** → Task execution priority level. Affects processing order in queues.
+ - **`priority: int (Experimental)`** → Affects processing order in queues.
  **Note:** This feature works if it's supported by the model and vLLM.

- - **`timeout: float`** → Maximum time in seconds to wait for the response before raising a timeout error
- **Note:** This feature only exists in `AsyncTheTool`.
+ - **`timeout: float`** → Maximum time in seconds to wait for the response before raising a timeout error.
+ **Note:** This feature is only available in `AsyncTheTool`.


  ---
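
The "Additional Parameters" text above lists the optional flags; the sketch below shows one way they might be passed on a call. It is an illustrative, hedged example only: the parameter names come from the README text in this diff, but the exact combination accepted by `summarize()` (and passing `raise_on_error` to the constructor) is an assumption, not verified API.

```python
# Hedged sketch: names taken from the README above; the exact keyword arguments
# summarize() accepts, and raise_on_error as a constructor argument, are assumptions.
from openai import OpenAI
from texttools import TheTool

client = OpenAI(base_url="your_url", api_key="your_api_key")
the_tool = TheTool(client=client, model="model_name", raise_on_error=False)

def non_empty(result: str) -> bool:
    # Validator must return a boolean; False triggers a retry with a
    # modified temperature, up to max_validation_retries attempts.
    return bool(result.strip())

summary = the_tool.summarize(
    "TextTools wraps modern LLMs behind structured, typed outputs.",
    with_analysis=True,    # adds a reasoning step (roughly doubles token usage)
    output_lang="English", # force the response language
    temperature=0.2,       # 0.0 to 2.0; lower is more deterministic
    validator=non_empty,   # experimental: output is re-requested if this fails
    max_validation_retries=2,
)
print(summary.to_json())
```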
@@ -110,12 +108,14 @@ Every tool of `TextTools` returns a `ToolOutput` object which is a BaseModel wit
  - **`analysis: str`**
  - **`logprobs: list`**
  - **`errors: list[str]`**
- - **`ToolOutputMetadata`**
+ - **`ToolOutputMetadata`**
  - **`tool_name: str`**
  - **`processed_at: datetime`**
  - **`execution_time: float`**

- **Note:** You can use `repr(ToolOutput)` to print your output with all the details.
+ - Serialize output to JSON using the `to_json()` method.
+ - Verify operation success with the `is_successful()` method.
+ - Convert output to a dictionary with the `to_dict()` method.

  ---

@@ -133,13 +133,13 @@ Every tool of `TextTools` returns a `ToolOutput` object which is a BaseModel wit
  from openai import OpenAI
  from texttools import TheTool

- client = OpenAI(base_url = "your_url", API_KEY = "your_api_key")
+ client = OpenAI(base_url="your_url", API_KEY="your_api_key")
  model = "model_name"

  the_tool = TheTool(client=client, model=model)

  detection = the_tool.is_question("Is this project open source?")
- print(repr(detection))
+ print(detection.to_json())
  ```

  ---
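
The Quick Start above only prints the JSON form of the result. Building on the same `the_tool` instance, the sketch below walks the `ToolOutput` fields and helper methods listed in the README; the attribute name for the nested `ToolOutputMetadata` (`metadata` here) is an assumption, since the diff only names the model.

```python
# Sketch of inspecting a ToolOutput, based on the fields and methods listed above.
# Assumption: the nested ToolOutputMetadata model is exposed as `output.metadata`.
output = the_tool.is_question("Is this project open source?")

if output.is_successful():                  # no errors recorded for this call
    print(output.result)                    # the parsed result
    print(output.analysis)                  # filled when with_analysis=True
    print(output.metadata.execution_time)   # seconds spent (assumed attribute name)
else:
    print(output.errors)                    # errors collected instead of raised

print(output.to_dict())  # plain-dict form
print(output.to_json())  # JSON string form
```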
@@ -157,30 +157,23 @@ async def main():

  async_the_tool = AsyncTheTool(client=async_client, model=model)

- translation_task = async_the_tool.translate("سلام، حالت چطوره؟", target_language="English")
- keywords_task = async_the_tool.extract_keywords("Tomorrow, we will be dead by the car crash")
+ translation_task = async_the_tool.translate("سلام، حالت چطوره؟", target_lang="English")
+ keywords_task = async_the_tool.extract_keywords("This open source project is great for processing large datasets!")

  (translation, keywords) = await asyncio.gather(translation_task, keywords_task)
- print(repr(translation))
- print(repr(keywords))
+
+ print(translation.to_json())
+ print(keywords.to_json())

  asyncio.run(main())
  ```

  ---

- ## 👍 Use Cases
+ ## Use Cases

  Use **TextTools** when you need to:

- - 🔍 **Classify** large datasets quickly without model training
- - 🌍 **Translate** and process multilingual corpora with ease
+ - 🔍 **Classify** large datasets quickly without model training
  - 🧩 **Integrate** LLMs into production pipelines (structured outputs)
  - 📊 **Analyze** large text collections using embeddings and categorization
-
- ---
-
- ## 🤝 Contributing
-
- Contributions are welcome!
- Feel free to **open issues, suggest new features, or submit pull requests**.
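
The async Quick Start above does not exercise `timeout`, which the README notes is only available on `AsyncTheTool`. A minimal sketch, assuming `timeout` is passed per call like the other optional parameters:

```python
# Illustrative sketch, not part of the package diff: per the README, timeout is
# only available on AsyncTheTool; passing it per call is an assumption.
import asyncio

from openai import AsyncOpenAI
from texttools import AsyncTheTool

async def main():
    async_client = AsyncOpenAI(base_url="your_url", api_key="your_api_key")
    async_the_tool = AsyncTheTool(client=async_client, model="model_name")

    # Give up after 10 seconds instead of waiting indefinitely for the model.
    summary = await async_the_tool.summarize(
        "TextTools provides sync and async APIs for LLM-backed text processing.",
        timeout=10.0,
    )
    print(summary.to_json())

asyncio.run(main())
```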
@@ -0,0 +1,30 @@
+ hamtaa_texttools-2.1.0.dist-info/licenses/LICENSE,sha256=gqxbR8wqI3utd__l3Yn6_dQ3Pou1a17W4KmydbvZGok,1084
+ texttools/__init__.py,sha256=AHpTq1BbL3sWCaFiIjlSkqNfNqweq-qm2EIOSmUZRJ0,175
+ texttools/models.py,sha256=CQnO1zkKHFyqeMWrYGA4IyXQ7YYLVc3Xz1WaXbXzDLw,4634
+ texttools/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ texttools/core/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ texttools/core/exceptions.py,sha256=6SDjUL1rmd3ngzD3ytF4LyTRj3bQMSFR9ECrLoqXXHw,395
+ texttools/core/internal_models.py,sha256=CmRtXGZRn5fZ18lVb42N8LrZXvJb6WwdjIhgiotWJdA,1952
+ texttools/core/utils.py,sha256=jqXHXU1DWDKWhK0HHSjnjq4_TLg3FMcnRzrwTF1eqqc,9744
+ texttools/core/operators/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ texttools/core/operators/async_operator.py,sha256=HOi9gUwIffJUtyp8WLNbMpxI8jnafNDrbtLl6vyPcUs,6221
+ texttools/core/operators/sync_operator.py,sha256=yM14fsku-4Nf60lPUVePaB9Lu8HbGKb4ubwoizVWuYQ,6126
+ texttools/prompts/augment.yaml,sha256=O-LMVyrihr0GQ8hp2Lx6uIR8Jh83bUDS9UZ-dvYOP7k,5453
+ texttools/prompts/categorize.yaml,sha256=kN4uRPOC7q6A13bdCIox60vZZ8sgRiTtquv-kqIvTsk,1133
+ texttools/prompts/extract_entities.yaml,sha256=-qe1eEvN-8nJ2_GLjeoFAPVORCPYUzsIt7UGXD485bE,648
+ texttools/prompts/extract_keywords.yaml,sha256=jP74HFa4Dka01d1COStEBbdzW5onqwocwyyVsmNpECs,3276
+ texttools/prompts/is_fact.yaml,sha256=kqF527DEdnlL3MG5tF1Z3ci_sRxmGv7dgNR2SuElq4Y,719
+ texttools/prompts/is_question.yaml,sha256=C-ynlt0qHpUM4BAIh0oI7UJ5BxCNU9-GR9T5864jeto,496
+ texttools/prompts/merge_questions.yaml,sha256=zgZs8BcwseZy1GsD_DvVGtw0yuCCc6xsK8VDmuHI2V0,1844
+ texttools/prompts/propositionize.yaml,sha256=xTw3HQrxtxoMpkf8a9is0uZZ0AG4IDNfh7XE0aVlNso,1441
+ texttools/prompts/run_custom.yaml,sha256=hSfR4BMJNUo9nP_AodPU7YTnhR-X_G-W7Pz0ROQzoI0,133
+ texttools/prompts/summarize.yaml,sha256=0aKYFRDxODqOOEhSexi-hn3twLwkMFVmi7rtAifnCuA,464
+ texttools/prompts/to_question.yaml,sha256=n8Bn28QjvSHwPHQLwRYpZ2IsaaBsq4pK9Dp_i0xk8eg,2210
+ texttools/prompts/translate.yaml,sha256=omtC-TlFYMidy8WqRe7idUtKNiK4g3IhEl-iyufOwjk,649
+ texttools/tools/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ texttools/tools/async_tools.py,sha256=2ZJ8K1-SSRSyyQ5VfDBZof0HDeRjEuakZJyHAlswrLw,46089
+ texttools/tools/sync_tools.py,sha256=WqHaUQscOd6RbMCGjhFbC4muw1VZxu-W5qCOA9JIwVc,41835
+ hamtaa_texttools-2.1.0.dist-info/METADATA,sha256=Sq4pywPSrBvHxp6sundpF2LFblcJqYgkhONx8V3XNyU,6958
+ hamtaa_texttools-2.1.0.dist-info/WHEEL,sha256=wUyA8OaulRlbfwMtmQsvNngGrxQHAvkKcvRmdizlJi0,92
+ hamtaa_texttools-2.1.0.dist-info/top_level.txt,sha256=5Mh0jIxxZ5rOXHGJ6Mp-JPKviywwN0MYuH0xk5bEWqE,10
+ hamtaa_texttools-2.1.0.dist-info/RECORD,,
@@ -1,5 +1,5 @@
  Wheel-Version: 1.0
- Generator: setuptools (80.9.0)
+ Generator: setuptools (80.10.2)
  Root-Is-Purelib: true
  Tag: py3-none-any

@@ -18,4 +18,4 @@ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- SOFTWARE.
+ SOFTWARE.
texttools/__init__.py CHANGED
@@ -2,4 +2,4 @@ from .models import CategoryTree
  from .tools.async_tools import AsyncTheTool
  from .tools.sync_tools import TheTool

- __all__ = ["TheTool", "AsyncTheTool", "CategoryTree"]
+ __all__ = ["CategoryTree", "AsyncTheTool", "TheTool"]
@@ -1,4 +1,4 @@
- from typing import Any, Literal, Type
+ from typing import Any, Literal

  from pydantic import BaseModel, Field, create_model

@@ -10,12 +10,16 @@ class OperatorOutput(BaseModel):


  class Str(BaseModel):
- result: str = Field(..., description="The output string", example="text")
+ result: str = Field(
+ ..., description="The output string", json_schema_extra={"example": "text"}
+ )


  class Bool(BaseModel):
  result: bool = Field(
- ..., description="Boolean indicating the output state", example=True
+ ...,
+ description="Boolean indicating the output state",
+ json_schema_extra={"example": True},
  )

@@ -23,7 +27,7 @@ class ListStr(BaseModel):
  result: list[str] = Field(
  ...,
  description="The output list of strings",
- example=["text_1", "text_2", "text_3"],
+ json_schema_extra={"example": ["text_1", "text_2", "text_3"]},
  )

@@ -31,7 +35,12 @@ class ListDictStrStr(BaseModel):
  result: list[dict[str, str]] = Field(
  ...,
  description="List of dictionaries containing string key-value pairs",
- example=[{"text": "Mohammad", "type": "PER"}, {"text": "Iran", "type": "LOC"}],
+ json_schema_extra={
+ "example": [
+ {"text": "Mohammad", "type": "PER"},
+ {"text": "Iran", "type": "LOC"},
+ ]
+ },
  )

@@ -40,12 +49,12 @@ class ReasonListStr(BaseModel):
  result: list[str] = Field(
  ...,
  description="The output list of strings",
- example=["text_1", "text_2", "text_3"],
+ json_schema_extra={"example": ["text_1", "text_2", "text_3"]},
  )


  # Create CategorizerOutput with dynamic categories
- def create_dynamic_model(allowed_values: list[str]) -> Type[BaseModel]:
+ def create_dynamic_model(allowed_values: list[str]) -> type[BaseModel]:
  literal_type = Literal[*allowed_values]

  CategorizerOutput = create_model(
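
Context for the `Field` changes above: Pydantic v2 deprecates arbitrary extra keyword arguments such as `example=` on `Field` and instead merges `json_schema_extra` into the generated JSON schema. A standalone sketch (not code from the package) showing the effect:

```python
# Standalone sketch: json_schema_extra contents are merged into the field's
# generated JSON schema, replacing the deprecated Field(example=...) kwarg.
from pydantic import BaseModel, Field

class Str(BaseModel):
    result: str = Field(
        ...,
        description="The output string",
        json_schema_extra={"example": "text"},
    )

schema = Str.model_json_schema()
# The "result" property now carries "example": "text" alongside its type and description.
print(schema["properties"]["result"])
```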
@@ -1,15 +1,12 @@
  from collections.abc import Callable
- from typing import Any, Type, TypeVar
+ from typing import Any

  from openai import AsyncOpenAI
  from pydantic import BaseModel

- from ..engine import OperatorUtils, PromptLoader
  from ..exceptions import LLMError, PromptError, TextToolsError, ValidationError
  from ..internal_models import OperatorOutput
-
- # Base Model type for output models
- T = TypeVar("T", bound=BaseModel)
+ from ..utils import OperatorUtils


  class AsyncOperator:
@@ -46,12 +43,12 @@ class AsyncOperator:
  async def _parse_completion(
  self,
  main_message: list[dict[str, str]],
- output_model: Type[T],
+ output_model: type[BaseModel],
  temperature: float,
  logprobs: bool,
  top_logprobs: int,
  priority: int | None,
- ) -> tuple[T, Any]:
+ ) -> tuple[BaseModel, Any]:
  """
  Parses a chat completion using OpenAI's structured output format.
  Returns both the parsed and the completion for logprobs.
@@ -103,7 +100,7 @@ class AsyncOperator:
  max_validation_retries: int | None,
  priority: int | None,
  tool_name: str,
- output_model: Type[T],
+ output_model: type[BaseModel],
  mode: str | None,
  **extra_kwargs,
  ) -> OperatorOutput:
@@ -111,8 +108,7 @@ class AsyncOperator:
  Execute the LLM pipeline with the given input text.
  """
  try:
- prompt_loader = PromptLoader()
- prompt_configs = prompt_loader.load(
+ prompt_configs = OperatorUtils.load_prompt(
  prompt_file=tool_name + ".yaml",
  text=text.strip(),
  mode=mode,
@@ -127,11 +123,10 @@ class AsyncOperator:
  )
  analysis = await self._analyze_completion(analyze_message)

- main_message = OperatorUtils.build_message(
- OperatorUtils.build_main_prompt(
- prompt_configs["main_template"], analysis, output_lang, user_prompt
- )
+ main_prompt = OperatorUtils.build_main_prompt(
+ prompt_configs["main_template"], analysis, output_lang, user_prompt
  )
+ main_message = OperatorUtils.build_message(main_prompt)

  parsed, completion = await self._parse_completion(
  main_message,
@@ -142,7 +137,7 @@ class AsyncOperator:
  priority,
  )

- # Retry logic if validation fails
+ # Retry logic in case output validation fails
  if validator and not validator(parsed.result):
  if (
  not isinstance(max_validation_retries, int)
@@ -152,7 +147,6 @@ class AsyncOperator:

  succeeded = False
  for _ in range(max_validation_retries):
- # Generate a new temperature to retry
  retry_temperature = OperatorUtils.get_retry_temp(temperature)

  try:
@@ -1,15 +1,12 @@
  from collections.abc import Callable
- from typing import Any, Type, TypeVar
+ from typing import Any

  from openai import OpenAI
  from pydantic import BaseModel

- from ..engine import OperatorUtils, PromptLoader
  from ..exceptions import LLMError, PromptError, TextToolsError, ValidationError
  from ..internal_models import OperatorOutput
-
- # Base Model type for output models
- T = TypeVar("T", bound=BaseModel)
+ from ..utils import OperatorUtils


  class Operator:
@@ -46,12 +43,12 @@ class Operator:
  def _parse_completion(
  self,
  main_message: list[dict[str, str]],
- output_model: Type[T],
+ output_model: type[BaseModel],
  temperature: float,
  logprobs: bool,
  top_logprobs: int,
  priority: int | None,
- ) -> tuple[T, Any]:
+ ) -> tuple[BaseModel, Any]:
  """
  Parses a chat completion using OpenAI's structured output format.
  Returns both the parsed and the completion for logprobs.
@@ -101,7 +98,7 @@ class Operator:
  max_validation_retries: int | None,
  priority: int | None,
  tool_name: str,
- output_model: Type[T],
+ output_model: type[BaseModel],
  mode: str | None,
  **extra_kwargs,
  ) -> OperatorOutput:
@@ -109,8 +106,7 @@ class Operator:
  Execute the LLM pipeline with the given input text.
  """
  try:
- prompt_loader = PromptLoader()
- prompt_configs = prompt_loader.load(
+ prompt_configs = OperatorUtils.load_prompt(
  prompt_file=tool_name + ".yaml",
  text=text.strip(),
  mode=mode,
@@ -125,11 +121,10 @@ class Operator:
  )
  analysis = self._analyze_completion(analyze_message)

- main_message = OperatorUtils.build_message(
- OperatorUtils.build_main_prompt(
- prompt_configs["main_template"], analysis, output_lang, user_prompt
- )
+ main_prompt = OperatorUtils.build_main_prompt(
+ prompt_configs["main_template"], analysis, output_lang, user_prompt
  )
+ main_message = OperatorUtils.build_message(main_prompt)

  parsed, completion = self._parse_completion(
  main_message,
@@ -140,7 +135,7 @@ class Operator:
  priority,
  )

- # Retry logic if validation fails
+ # Retry logic in case output validation fails
  if validator and not validator(parsed.result):
  if (
  not isinstance(max_validation_retries, int)
@@ -150,7 +145,6 @@ class Operator:

  succeeded = False
  for _ in range(max_validation_retries):
- # Generate a new temperature to retry
  retry_temperature = OperatorUtils.get_retry_temp(temperature)

  try: