hamtaa-texttools 1.1.2__py3-none-any.whl → 1.1.9__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {hamtaa_texttools-1.1.2.dist-info → hamtaa_texttools-1.1.9.dist-info}/METADATA +77 -12
- hamtaa_texttools-1.1.9.dist-info/RECORD +30 -0
- texttools/__init__.py +2 -7
- texttools/batch/__init__.py +2 -3
- texttools/batch/batch_manager.py +14 -14
- texttools/batch/batch_runner.py +53 -61
- texttools/prompts/README.md +4 -4
- texttools/tools/__init__.py +2 -2
- texttools/tools/{async_the_tool.py → async_tools.py} +22 -1
- texttools/tools/internals/async_operator.py +57 -8
- texttools/tools/internals/base_operator.py +19 -11
- texttools/tools/internals/operator.py +59 -12
- texttools/tools/internals/output_models.py +7 -4
- texttools/tools/internals/prompt_loader.py +2 -7
- texttools/tools/{the_tool.py → sync_tools.py} +22 -1
- hamtaa_texttools-1.1.2.dist-info/RECORD +0 -30
- {hamtaa_texttools-1.1.2.dist-info → hamtaa_texttools-1.1.9.dist-info}/WHEEL +0 -0
- {hamtaa_texttools-1.1.2.dist-info → hamtaa_texttools-1.1.9.dist-info}/licenses/LICENSE +0 -0
- {hamtaa_texttools-1.1.2.dist-info → hamtaa_texttools-1.1.9.dist-info}/top_level.txt +0 -0
{hamtaa_texttools-1.1.2.dist-info → hamtaa_texttools-1.1.9.dist-info}/METADATA
CHANGED
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: hamtaa-texttools
-Version: 1.1.2
+Version: 1.1.9
 Summary: A high-level NLP toolkit built on top of modern LLMs.
 Author-email: Tohidi <the.mohammad.tohidi@gmail.com>, Montazer <montazerh82@gmail.com>, Givechi <mohamad.m.givechi@gmail.com>, MoosaviNejad <erfanmoosavi84@gmail.com>
 License: MIT License
@@ -40,14 +40,14 @@ Dynamic: license-file
 
 It provides both **sync (`TheTool`)** and **async (`AsyncTheTool`)** APIs for maximum flexibility.
 
-It provides ready-to-use utilities for **translation, question detection, keyword extraction, categorization, NER
+It provides ready-to-use utilities for **translation, question detection, keyword extraction, categorization, NER extraction, and more** — designed to help you integrate AI-powered text processing into your applications with minimal effort.
 
 ---
 
 ## ✨ Features
 
 TextTools provides a rich collection of high-level NLP utilities built on top of LLMs.
-Each tool is designed to work
+Each tool is designed to work out-of-the-box with structured outputs (JSON / Pydantic).
 
 - **`categorize()`** - Classifies text into Islamic studies categories
 - **`is_question()`** - Binary detection of whether input is a question
@@ -63,7 +63,7 @@ Each tool is designed to work out-of-the-box with structured outputs (JSON / Pydantic).
 
 ---
 
-## ⚙️ `with_analysis`, `logprobs`, `output_lang`, `user_prompt` and `
+## ⚙️ `with_analysis`, `logprobs`, `output_lang`, `user_prompt`, `temperature` and `validator` parameters
 
 TextTools provides several optional flags to customize LLM behavior:
 
@@ -78,12 +78,26 @@ Note: This doubles token usage per call because it triggers an additional LLM request.
 
 - **`temperature=0.0`** → Determines how creatively the model should respond. Takes a float from `0.0` to `1.0`.
 
+- **`validator=validation_function`** → Forces TheTool to validate the output against your custom validator. The validator should return a bool: `True` if the output is valid, `False` if validation failed. On failure, TheTool retries for another output with a modified `temperature`.
+
 All these parameters can be used individually or together to tailor the behavior of any tool in **TextTools**.
 
 **Note:** Some tools may not support all of the parameters above.
 
 ---
 
+## 🧩 ToolOutput
+
+Every tool in `TextTools` returns a `ToolOutput` object, a pydantic BaseModel with the following attributes:
+- **`result`** → The LLM's output (`type=Any`)
+- **`analysis`** → The reasoning step before generating the final output (`type=str`)
+- **`logprobs`** → Token-level probabilities for the generated output (`type=list`)
+- **`errors`** → Any errors that occurred while calling the LLM (`type=list`)
+
+**Note:** You can use `repr(ToolOutput)` to see the details of an output.
+
+---
+
 ## 🚀 Installation
 
 Install the latest release via PyPI:
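To make the `validator` contract above concrete, here is a minimal sketch; the `non_empty_str` check and the `summarize` call are illustrative, not taken from the package's own examples:

```python
from openai import OpenAI
from texttools import TheTool

# Hypothetical validator: reject empty or whitespace-only results.
def non_empty_str(result) -> bool:
    return isinstance(result, str) and bool(result.strip())

client = OpenAI(api_key="...")
the_tool = TheTool(client=client, model="gpt-4o-mini")

# If validation fails, TheTool retries with a perturbed temperature.
summary = the_tool.summarize("Some long passage...", validator=non_empty_str)
print(summary.result)
print(summary.errors)  # non-empty if validation failed after all retries
```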
@@ -121,13 +135,13 @@ the_tool = TheTool(client=client, model=model)
 detection = the_tool.is_question("Is this project open source?", logprobs=True, top_logprobs=2)
 print(detection.result)
 print(detection.logprobs)
-# Output: True
+# Output: True + logprobs
 
 # Example: Translation
 translation = the_tool.translate("سلام، حالت چطوره؟", target_language="English", with_analysis=True)
 print(translation.result)
 print(translation.analysis)
-# Output: "Hi! How are you?"
+# Output: "Hi! How are you?" + analysis
 ```
 
 ---
@@ -147,19 +161,22 @@ async def main():
     model = "gpt-4o-mini"
 
     # Create an instance of AsyncTheTool
-
+    async_the_tool = AsyncTheTool(client=async_client, model=model)
+
+    # Example: Async Translation and Keyword Extraction
+    translation_task = async_the_tool.translate("سلام، حالت چطوره؟", target_language="English")
+    keywords_task = async_the_tool.extract_keywords("Tomorrow, we will be dead by the car crash")
 
-
-    translation = await the_tool.translate("سلام، حالت چطوره؟", target_language="English")
+    (translation, keywords) = await asyncio.gather(translation_task, keywords_task)
     print(translation.result)
-
+    print(keywords.result)
 
 asyncio.run(main())
 ```
 
 ---
 
-## 
+## 👍 Use Cases
 
 Use **TextTools** when you need to:
 
@@ -167,7 +184,55 @@ Use **TextTools** when you need to:
 - 🌍 **Translate** and process multilingual corpora with ease
 - 🧩 **Integrate** LLMs into production pipelines (structured outputs)
 - 📊 **Analyze** large text collections using embeddings and categorization
-
+
+---
+
+## 🔍 Logging
+
+TextTools uses Python's standard `logging` module. The library's default logger level is `WARNING`; to change it, configure logging as follows:
+
+
+```python
+import logging
+
+# Default: warnings and errors only
+logging.basicConfig(level=logging.WARNING)
+
+# Debug everything (verbose)
+logging.basicConfig(level=logging.DEBUG)
+
+# Complete silence
+logging.basicConfig(level=logging.CRITICAL)
+```
+
+---
+
+## 📚 Batch Processing
+
+Process large datasets efficiently using OpenAI's batch API.
+
+## Quick Start
+
+```python
+from pydantic import BaseModel
+from texttools import BatchJobRunner, BatchConfig
+
+# Configure your batch job
+config = BatchConfig(
+    system_prompt="Extract entities from the text",
+    job_name="entity_extraction",
+    input_data_path="data.json",
+    output_data_filename="results.json",
+    model="gpt-4o-mini"
+)
+
+# Define your output schema
+class Output(BaseModel):
+    entities: list[str]
+
+# Run the batch job
+runner = BatchJobRunner(config, output_model=Output)
+runner.run()
+```
 
 ---
 
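Since the 1.1.9 modules name their loggers under the `texttools.` prefix (for example `texttools.operator` and `texttools.batch_manager`, as the code diffs below show), verbosity can also be tuned for this library alone rather than only via `basicConfig`; a small sketch:

```python
import logging

logging.basicConfig(level=logging.WARNING)  # app-wide default

# Child loggers such as "texttools.operator" inherit this level:
logging.getLogger("texttools").setLevel(logging.DEBUG)
```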
hamtaa_texttools-1.1.9.dist-info/RECORD
ADDED
@@ -0,0 +1,30 @@
+hamtaa_texttools-1.1.9.dist-info/licenses/LICENSE,sha256=Hb2YOBKy2MJQLnyLrX37B4ZVuac8eaIcE71SvVIMOLg,1082
+texttools/__init__.py,sha256=lFYe1jdssHC1h8qcPpV3whANxiDi8aiiFdY-7L0Ck10,164
+texttools/batch/__init__.py,sha256=DJGJTfR6F3Yv4_alsj9g1tesGzdcSV27Zw74DonhW_s,102
+texttools/batch/batch_manager.py,sha256=ZgLiO9maCHnx2cJbUjsYXFnlUsMLI2TP3Vc9uKU0BLg,8706
+texttools/batch/batch_runner.py,sha256=X0YQmaowO_jUSAFWBHdxOLoRrX_gvmrJDgp9qPlOSEw,10254
+texttools/prompts/README.md,sha256=-5YO93CN93QLifqZpUeUnCOCBbDiOTV-cFQeJ7Gg0I4,1377
+texttools/prompts/categorizer.yaml,sha256=GMqIIzQFhgnlpkgU1qi3FAD3mD4A2jiWD5TilQ2XnnE,1204
+texttools/prompts/extract_entities.yaml,sha256=KiKjeDpHaeh3JVtZ6q1pa3k4DYucUIU9WnEcRTCA-SE,651
+texttools/prompts/extract_keywords.yaml,sha256=0O7ypL_OsEOxtvlQ2CZjnsv9637DJwAKprZsf9Vo2_s,769
+texttools/prompts/is_question.yaml,sha256=d0-vKRbXWkxvO64ikvxRjEmpAXGpCYIPGhgexvPPjws,471
+texttools/prompts/merge_questions.yaml,sha256=0J85GvTirZB4ELwH3sk8ub_WcqqpYf6PrMKr3djlZeo,1792
+texttools/prompts/rewrite.yaml,sha256=LO7He_IA3MZKz8a-LxH9DHJpOjpYwaYN1pbjp1Y0tFo,5392
+texttools/prompts/run_custom.yaml,sha256=38OkCoVITbuuS9c08UZSP1jZW4WjSmRIi8fR0RAiPu4,108
+texttools/prompts/subject_to_question.yaml,sha256=C7x7rNNm6U_ZG9HOn6zuzYOtvJUZ2skuWbL1-aYdd3E,1147
+texttools/prompts/summarize.yaml,sha256=o6rxGPfWtZd61Duvm8NVvCJqfq73b-wAuMSKR6UYUqY,459
+texttools/prompts/text_to_question.yaml,sha256=UheKYpDn6iyKI8NxunHZtFpNyfCLZZe5cvkuXpurUJY,783
+texttools/prompts/translate.yaml,sha256=mGT2uBCei6uucWqVbs4silk-UV060v3G0jnt0P6sr50,634
+texttools/tools/__init__.py,sha256=3fPoeB-E5wGxWgv7axztHkeolR7ZDUJudd0xmpPFjao,113
+texttools/tools/async_tools.py,sha256=2ZY7Lo6Jj9xoTF8bfdh_g8VOXZ7ljMMesd1_QHXyf4s,15395
+texttools/tools/sync_tools.py,sha256=XKgZuzriFnk8B-YihJfs6BKivxjGCgOFfe7hnCpEiXs,15161
+texttools/tools/internals/async_operator.py,sha256=egBsrcpGBmkDY5YzUvGHh1TjPmsH9IOVXDGmYMWjzMs,8960
+texttools/tools/internals/base_operator.py,sha256=qV9LlVo_DzSCzQnjYTFi-6mlHN4gE0edPE2y_9WwQFw,3292
+texttools/tools/internals/formatters.py,sha256=tACNLP6PeoqaRpNudVxBaHA25zyWqWYPZQuYysIu88g,941
+texttools/tools/internals/operator.py,sha256=xgbt1Mm67SEC-KD9jwXjXGTCcaCsaVLhG6iCYOqLDcc,8709
+texttools/tools/internals/output_models.py,sha256=ekpbyocmXj_dee7ieOT1zOkMo9cPHT7xcUFCZoUaXA0,1886
+texttools/tools/internals/prompt_loader.py,sha256=8uD7JUatKXSLXhGwWs46iQpcjWdhF9p32SFDLMndy1o,1940
+hamtaa_texttools-1.1.9.dist-info/METADATA,sha256=nQFuGr_7aVHlO7nRsTbubEtO0QVUofcdUKwMATzHhUU,9129
+hamtaa_texttools-1.1.9.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+hamtaa_texttools-1.1.9.dist-info/top_level.txt,sha256=5Mh0jIxxZ5rOXHGJ6Mp-JPKviywwN0MYuH0xk5bEWqE,10
+hamtaa_texttools-1.1.9.dist-info/RECORD,,
texttools/__init__.py
CHANGED
@@ -1,9 +1,4 @@
-from .batch import BatchJobRunner,
+from .batch import BatchJobRunner, BatchConfig
 from .tools import AsyncTheTool, TheTool
 
-__all__ = [
-    "TheTool",
-    "AsyncTheTool",
-    "SimpleBatchManager",
-    "BatchJobRunner",
-]
+__all__ = ["TheTool", "AsyncTheTool", "BatchJobRunner", "BatchConfig"]
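After this change, the public names can be imported directly from the package root, e.g.:

```python
from texttools import TheTool, AsyncTheTool, BatchJobRunner, BatchConfig
```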
texttools/batch/__init__.py
CHANGED
texttools/batch/batch_manager.py
CHANGED
@@ -1,18 +1,20 @@
 import json
 import uuid
 from pathlib import Path
-from typing import Any, Type
+from typing import Any, Type, TypeVar
 import logging
 
 from pydantic import BaseModel
 from openai import OpenAI
 from openai.lib._pydantic import to_strict_json_schema
 
-
-
+# Base Model type for output models
+T = TypeVar("T", bound=BaseModel)
 
+logger = logging.getLogger("texttools.batch_manager")
 
-class SimpleBatchManager:
+
+class BatchManager:
     """
     Manages batch processing jobs for OpenAI's chat completions with structured outputs.
 
@@ -25,9 +27,8 @@ class SimpleBatchManager:
         self,
         client: OpenAI,
         model: str,
-        output_model: Type[
+        output_model: Type[T],
         prompt_template: str,
-        handlers: list[Any] | None = None,
         state_dir: Path = Path(".batch_jobs"),
         custom_json_schema_obj_str: dict | None = None,
         **client_kwargs: Any,
@@ -36,16 +37,16 @@ class SimpleBatchManager:
         self.model = model
         self.output_model = output_model
         self.prompt_template = prompt_template
-        self.handlers = handlers or []
         self.state_dir = state_dir
         self.state_dir.mkdir(parents=True, exist_ok=True)
         self.custom_json_schema_obj_str = custom_json_schema_obj_str
         self.client_kwargs = client_kwargs
         self.dict_input = False
 
-        if
-
-
+        if custom_json_schema_obj_str and not isinstance(
+            custom_json_schema_obj_str, dict
+        ):
+            raise ValueError("Schema should be a dict")
 
     def _state_file(self, job_name: str) -> Path:
         return self.state_dir / f"{job_name}.json"
@@ -126,7 +127,7 @@ class SimpleBatchManager:
 
         else:
             raise TypeError(
-                "The input must be either a list of texts or a dictionary in the form {'id': str, 'text': str}
+                "The input must be either a list of texts or a dictionary in the form {'id': str, 'text': str}"
             )
 
         file_path = self.state_dir / f"batch_{uuid.uuid4().hex}.jsonl"
@@ -142,6 +143,7 @@ class SimpleBatchManager:
         """
         if self._load_state(job_name):
             return
+
         path = self._prepare_file(payload)
         upload = self.client.files.create(file=open(path, "rb"), purpose="batch")
         job = self.client.batches.create(
@@ -186,7 +188,7 @@ class SimpleBatchManager:
             err_content = (
                 self.client.files.content(error_file_id).read().decode("utf-8")
             )
-            logger.
+            logger.error("Error file content:", err_content)
             return {}
 
         content = self.client.files.content(out_file_id).read().decode("utf-8")
@@ -220,8 +222,6 @@ class SimpleBatchManager:
                 error_d = {custom_id: results[custom_id]}
                 log.append(error_d)
 
-        for handler in self.handlers:
-            handler.handle(results)
         if remove_cache:
             self._clear_state(job_name)
 
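For reference, instantiating the renamed `BatchManager` directly might look like the sketch below; the schema model and prompt are illustrative, while the constructor parameters are those shown in the hunks above. Note that the `handlers` parameter and its handler loop were removed in 1.1.9, so any post-processing now has to happen outside the manager.

```python
from openai import OpenAI
from pydantic import BaseModel
from texttools.batch.batch_manager import BatchManager

class Entities(BaseModel):
    entities: list[str]

manager = BatchManager(
    client=OpenAI(api_key="..."),
    model="gpt-4o-mini",
    output_model=Entities,          # Type[T], a pydantic model
    prompt_template="Extract entities from the text",
)
```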
texttools/batch/batch_runner.py
CHANGED
@@ -3,24 +3,23 @@ import os
 import time
 from dataclasses import dataclass
 from pathlib import Path
-from typing import Any, Callable
+from typing import Any, Callable, Type, TypeVar
 import logging
 
 from dotenv import load_dotenv
 from openai import OpenAI
 from pydantic import BaseModel
 
-from texttools.batch import
+from texttools.batch.batch_manager import BatchManager
+from texttools.tools.internals.output_models import StrOutput
 
-
-
+# Base Model type for output models
+T = TypeVar("T", bound=BaseModel)
 
+logger = logging.getLogger("texttools.batch_runner")
 
-class OutputModel(BaseModel):
-    desired_output: str
 
-
-def export_data(data):
+def export_data(data) -> list[dict[str, str]]:
     """
     Produces a structure of the following form from an initial data structure:
     [{"id": str, "text": str},...]
@@ -28,7 +27,7 @@ def export_data(data):
     return data
 
 
-def import_data(data):
+def import_data(data) -> Any:
     """
     Takes the output and adds and aggregates it to the original structure.
     """
@@ -47,9 +46,9 @@ class BatchConfig:
     output_data_filename: str = ""
     model: str = "gpt-4.1-mini"
     MAX_BATCH_SIZE: int = 100
-    MAX_TOTAL_TOKENS: int =
+    MAX_TOTAL_TOKENS: int = 2_000_000
     CHARS_PER_TOKEN: float = 2.7
-    PROMPT_TOKEN_MULTIPLIER: int =
+    PROMPT_TOKEN_MULTIPLIER: int = 1_000
     BASE_OUTPUT_DIR: str = "Data/batch_entity_result"
     import_function: Callable = import_data
     export_function: Callable = export_data
@@ -63,7 +62,7 @@ class BatchJobRunner:
     """
 
    def __init__(
-        self, config: BatchConfig = BatchConfig(), output_model:
+        self, config: BatchConfig = BatchConfig(), output_model: Type[T] = StrOutput
    ):
        self.config = config
        self.system_prompt = config.system_prompt
@@ -82,11 +81,11 @@ class BatchJobRunner:
        # Track retry attempts per part
        self.part_attempts: dict[int, int] = {}
 
-    def _init_manager(self) ->
+    def _init_manager(self) -> BatchManager:
        load_dotenv()
        api_key = os.getenv("OPENAI_API_KEY")
        client = OpenAI(api_key=api_key)
-        return
+        return BatchManager(
            client=client,
            model=self.model,
            prompt_template=self.system_prompt,
@@ -101,12 +100,12 @@ class BatchJobRunner:
        # Ensure data is a list of dicts with 'id' and 'content' as strings
        if not isinstance(data, list):
            raise ValueError(
-
+                "Exported data must be a list of dicts with 'id' and 'content' keys"
            )
        for item in data:
            if not (isinstance(item, dict) and "id" in item and "content" in item):
                raise ValueError(
-                "
+                    f"Item must be a dict with 'id' and 'content' keys. Got: {type(item)}"
                )
            if not (isinstance(item["id"], str) and isinstance(item["content"], str)):
                raise ValueError("'id' and 'content' must be strings.")
@@ -161,7 +160,45 @@ class BatchJobRunner:
            logger.info("Uploading...")
            time.sleep(30)
 
+    def _save_results(
+        self,
+        output_data: list[dict[str, Any]] | dict[str, Any],
+        log: list[Any],
+        part_idx: int,
+    ):
+        part_suffix = f"_part_{part_idx + 1}" if len(self.parts) > 1 else ""
+        result_path = (
+            Path(self.config.BASE_OUTPUT_DIR)
+            / f"{Path(self.output_data_filename).stem}{part_suffix}.json"
+        )
+        if not output_data:
+            logger.info("No output data to save. Skipping this part.")
+            return
+        else:
+            with open(result_path, "w", encoding="utf-8") as f:
+                json.dump(output_data, f, ensure_ascii=False, indent=4)
+        if log:
+            log_path = (
+                Path(self.config.BASE_OUTPUT_DIR)
+                / f"{Path(self.output_data_filename).stem}{part_suffix}_log.json"
+            )
+            with open(log_path, "w", encoding="utf-8") as f:
+                json.dump(log, f, ensure_ascii=False, indent=4)
+
+    def _result_exists(self, part_idx: int) -> bool:
+        part_suffix = f"_part_{part_idx + 1}" if len(self.parts) > 1 else ""
+        result_path = (
+            Path(self.config.BASE_OUTPUT_DIR)
+            / f"{Path(self.output_data_filename).stem}{part_suffix}.json"
+        )
+        return result_path.exists()
+
     def run(self):
+        """
+        Execute the batch job processing pipeline.
+
+        Submits jobs, monitors progress, handles retries, and saves results.
+        """
        # Submit all jobs up-front for concurrent execution
        self._submit_all_jobs()
        pending_parts: set[int] = set(self.part_idx_to_job_name.keys())
@@ -215,48 +252,3 @@ class BatchJobRunner:
                    f"Waiting {self.config.poll_interval_seconds}s before next status check for parts: {sorted(pending_parts)}"
                )
                time.sleep(self.config.poll_interval_seconds)
-
-    def _save_results(
-        self,
-        output_data: list[dict[str, Any]] | dict[str, Any],
-        log: list[Any],
-        part_idx: int,
-    ):
-        part_suffix = f"_part_{part_idx + 1}" if len(self.parts) > 1 else ""
-        result_path = (
-            Path(self.config.BASE_OUTPUT_DIR)
-            / f"{Path(self.output_data_filename).stem}{part_suffix}.json"
-        )
-        if not output_data:
-            logger.info("No output data to save. Skipping this part.")
-            return
-        else:
-            with open(result_path, "w", encoding="utf-8") as f:
-                json.dump(output_data, f, ensure_ascii=False, indent=4)
-        if log:
-            log_path = (
-                Path(self.config.BASE_OUTPUT_DIR)
-                / f"{Path(self.output_data_filename).stem}{part_suffix}_log.json"
-            )
-            with open(log_path, "w", encoding="utf-8") as f:
-                json.dump(log, f, ensure_ascii=False, indent=4)
-
-    def _result_exists(self, part_idx: int) -> bool:
-        part_suffix = f"_part_{part_idx + 1}" if len(self.parts) > 1 else ""
-        result_path = (
-            Path(self.config.BASE_OUTPUT_DIR)
-            / f"{Path(self.output_data_filename).stem}{part_suffix}.json"
-        )
-        return result_path.exists()
-
-
-if __name__ == "__main__":
-    logger.info("=== Batch Job Runner ===")
-    config = BatchConfig(
-        system_prompt="",
-        job_name="job_name",
-        input_data_path="Data.json",
-        output_data_filename="output",
-    )
-    runner = BatchJobRunner(config)
-    runner.run()
texttools/prompts/README.md
CHANGED
@@ -14,15 +14,15 @@ This folder contains YAML files for all prompts used in the project. Each file r
 ### Example YAML Structure
 ```yaml
 main_template:
-
+  mode_1: |
     Your main instructions here with placeholders like {input}.
-
+  mode_2: |
     Optional reasoning instructions here.
 
 analyze_template:
-
+  mode_1: |
     Analyze and summarize the input.
-
+  mode_2: |
     Optional detailed analysis template.
 ```
 
texttools/tools/__init__.py
CHANGED
texttools/tools/{async_the_tool.py → async_tools.py}
RENAMED
@@ -1,4 +1,4 @@
-from typing import Literal, Any
+from typing import Literal, Any, Callable
 
 from openai import AsyncOpenAI
 
@@ -34,6 +34,7 @@ class AsyncTheTool:
         temperature: float | None = 0.0,
         logprobs: bool = False,
         top_logprobs: int | None = None,
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Categorize a text into a single Islamic studies domain category.
@@ -52,6 +53,7 @@ class AsyncTheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="categorizer.yaml",
             output_model=OutputModels.CategorizerOutput,
@@ -69,6 +71,7 @@ class AsyncTheTool:
         temperature: float | None = 0.0,
         logprobs: bool = False,
         top_logprobs: int | None = None,
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Extract salient keywords from text.
@@ -88,6 +91,7 @@ class AsyncTheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="extract_keywords.yaml",
             output_model=OutputModels.ListStrOutput,
@@ -104,6 +108,7 @@ class AsyncTheTool:
         temperature: float | None = 0.0,
         logprobs: bool = False,
         top_logprobs: int | None = None,
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Perform Named Entity Recognition (NER) over the input text.
@@ -123,6 +128,7 @@ class AsyncTheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="extract_entities.yaml",
             output_model=OutputModels.ListDictStrStrOutput,
@@ -138,6 +144,7 @@ class AsyncTheTool:
         temperature: float | None = 0.0,
         logprobs: bool = False,
         top_logprobs: int | None = None,
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Detect if the input is phrased as a question.
@@ -156,6 +163,7 @@ class AsyncTheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="is_question.yaml",
             output_model=OutputModels.BoolOutput,
@@ -173,6 +181,7 @@ class AsyncTheTool:
         temperature: float | None = 0.0,
         logprobs: bool = False,
         top_logprobs: int | None = None,
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Generate a single question from the given text.
@@ -192,6 +201,7 @@ class AsyncTheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="text_to_question.yaml",
             output_model=OutputModels.StrOutput,
@@ -209,6 +219,7 @@ class AsyncTheTool:
         logprobs: bool = False,
         top_logprobs: int | None = None,
         mode: Literal["default", "reason"] = "default",
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Merge multiple questions into a single unified question.
@@ -229,6 +240,7 @@ class AsyncTheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="merge_questions.yaml",
             output_model=OutputModels.StrOutput,
@@ -246,6 +258,7 @@ class AsyncTheTool:
         logprobs: bool = False,
         top_logprobs: int | None = None,
         mode: Literal["positive", "negative", "hard_negative"] = "positive",
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Rewrite a text with different modes.
@@ -265,6 +278,7 @@ class AsyncTheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="rewrite.yaml",
             output_model=OutputModels.StrOutput,
@@ -282,6 +296,7 @@ class AsyncTheTool:
         temperature: float | None = 0.0,
         logprobs: bool = False,
         top_logprobs: int | None = None,
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Generate a list of questions about a subject.
@@ -302,6 +317,7 @@ class AsyncTheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="subject_to_question.yaml",
             output_model=OutputModels.ReasonListStrOutput,
@@ -318,6 +334,7 @@ class AsyncTheTool:
         temperature: float | None = 0.0,
         logprobs: bool = False,
         top_logprobs: int | None = None,
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Summarize the given subject text.
@@ -337,6 +354,7 @@ class AsyncTheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="summarize.yaml",
             output_model=OutputModels.StrOutput,
@@ -353,6 +371,7 @@ class AsyncTheTool:
         temperature: float | None = 0.0,
         logprobs: bool = False,
         top_logprobs: int | None = None,
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Translate text between languages.
@@ -372,6 +391,7 @@ class AsyncTheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="translate.yaml",
             output_model=OutputModels.StrOutput,
@@ -411,4 +431,5 @@ class AsyncTheTool:
             user_prompt=None,
             with_analysis=False,
             mode=None,
+            validator=None,
         )
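The same `validator` hook now threads through every async tool; for instance (a sketch with an illustrative check):

```python
import asyncio
from openai import AsyncOpenAI
from texttools import AsyncTheTool

def has_keywords(result) -> bool:
    # Illustrative: require at least one extracted keyword.
    return isinstance(result, list) and len(result) > 0

async def main():
    tool = AsyncTheTool(client=AsyncOpenAI(api_key="..."), model="gpt-4o-mini")
    keywords = await tool.extract_keywords(
        "LLM toolkits simplify NLP pipelines.", validator=has_keywords
    )
    print(keywords.result)

asyncio.run(main())
```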
texttools/tools/internals/async_operator.py
CHANGED
@@ -1,4 +1,4 @@
-from typing import Any, TypeVar, Type, Literal
+from typing import Any, TypeVar, Type, Literal, Callable
 import logging
 
 from openai import AsyncOpenAI
@@ -12,8 +12,7 @@ from texttools.tools.internals.prompt_loader import PromptLoader
 # Base Model type for output models
 T = TypeVar("T", bound=BaseModel)
 
-logger = logging.getLogger("async_operator")
-logger.setLevel(logging.INFO)
+logger = logging.getLogger("texttools.async_operator")
 
 
 class AsyncOperator(BaseOperator):
@@ -52,7 +51,7 @@ class AsyncOperator(BaseOperator):
         temperature: float,
         logprobs: bool = False,
         top_logprobs: int = 3,
-    ) -> tuple[
+    ) -> tuple[T, Any]:
         """
         Parses a chat completion using OpenAI's structured output format.
         Returns both the parsed object and the raw completion for logging.
@@ -79,7 +78,7 @@ class AsyncOperator(BaseOperator):
         temperature: float,
         logprobs: bool = False,
         top_logprobs: int = 3,
-    ) -> tuple[
+    ) -> tuple[T, Any]:
         """
         Generates a completion using vLLM with JSON schema guidance.
         Returns the parsed output model and raw completion.
@@ -115,6 +114,7 @@ class AsyncOperator(BaseOperator):
         temperature: float,
         logprobs: bool,
         top_logprobs: int | None,
+        validator: Callable[[Any], bool] | None,
         # Internal parameters
         prompt_file: str,
         output_model: Type[T],
@@ -127,7 +127,7 @@ class AsyncOperator(BaseOperator):
         """
         prompt_loader = PromptLoader()
         formatter = Formatter()
-        output = ToolOutput(
+        output = ToolOutput()
 
         try:
             # Prompt configs contain two keys: main_template and analyze template, both are string
@@ -138,7 +138,7 @@ class AsyncOperator(BaseOperator):
                 **extra_kwargs,
             )
 
-            messages
+            messages = []
 
             if with_analysis:
                 analysis = await self._analyze(prompt_configs, temperature)
@@ -179,6 +179,54 @@ class AsyncOperator(BaseOperator):
 
             output.result = parsed.result
 
+            # Retry logic if validation fails
+            if validator and not validator(output.result):
+                for attempt in range(self.MAX_RETRIES):
+                    logger.warning(
+                        f"Validation failed, retrying for the {attempt + 1} time."
+                    )
+
+                    # Generate new temperature for retry
+                    retry_temperature = self._get_retry_temp(temperature)
+                    try:
+                        if resp_format == "vllm":
+                            parsed, completion = await self._vllm_completion(
+                                messages,
+                                output_model,
+                                retry_temperature,
+                                logprobs,
+                                top_logprobs,
+                            )
+                        elif resp_format == "parse":
+                            parsed, completion = await self._parse_completion(
+                                messages,
+                                output_model,
+                                retry_temperature,
+                                logprobs,
+                                top_logprobs,
+                            )
+
+                        output.result = parsed.result
+
+                        # Check if retry was successful
+                        if validator(output.result):
+                            logger.info(
+                                f"Validation passed on retry attempt {attempt + 1}"
+                            )
+                            break
+                        else:
+                            logger.warning(
+                                f"Validation still failing after retry attempt {attempt + 1}"
+                            )
+
+                    except Exception as e:
+                        logger.error(f"Retry attempt {attempt + 1} failed: {e}")
+                        # Continue to next retry attempt if this one fails
+
+            # Final check after all retries
+            if validator and not validator(output.result):
+                output.errors.append("Validation failed after all retry attempts")
+
             if logprobs:
                 output.logprobs = self._extract_logprobs(completion)
 
@@ -189,4 +237,5 @@ class AsyncOperator(BaseOperator):
 
         except Exception as e:
             logger.error(f"AsyncTheTool failed: {e}")
-
+            output.errors.append(str(e))
+            return output
texttools/tools/internals/base_operator.py
CHANGED
@@ -1,8 +1,9 @@
-from typing import TypeVar, Type, Any
+from typing import TypeVar, Type, Any, Union
 import json
 import re
 import math
 import logging
+import random
 
 from pydantic import BaseModel
 from openai import OpenAI, AsyncOpenAI
@@ -10,12 +11,16 @@ from openai import OpenAI, AsyncOpenAI
 # Base Model type for output models
 T = TypeVar("T", bound=BaseModel)
 
-
-
+ClientType = Union[OpenAI, AsyncOpenAI]
+
+logger = logging.getLogger("texttools.base_operator")
 
 
 class BaseOperator:
-
+    # Max retry in case of failed output validation
+    MAX_RETRIES = 3
+
+    def __init__(self, client: ClientType, model: str):
         self.client = client
         self.model = model
@@ -40,16 +45,10 @@ class BaseOperator:
         """
         Convert a JSON response string to output model.
         """
-        # Clean the response string
         cleaned_json = self._clean_json_response(response_string)
-
-        # Fix Python-style booleans
         cleaned_json = cleaned_json.replace("False", "false").replace("True", "true")
-
-        # Convert string to Python dictionary
         response_dict = json.loads(cleaned_json)
 
-        # Convert dictionary to output model
         return output_model(**response_dict)
 
     def _extract_logprobs(self, completion: dict) -> list[dict[str, Any]]:
@@ -63,7 +62,7 @@ class BaseOperator:
 
         for choice in completion.choices:
             if not getattr(choice, "logprobs", None):
-                logger.error("logprobs is not
+                logger.error("logprobs is not available for the chosen model.")
                 return []
 
             for logprob_item in choice.logprobs.content:
@@ -86,3 +85,12 @@ class BaseOperator:
                 logprobs_data.append(token_entry)
 
         return logprobs_data
+
+    def _get_retry_temp(self, base_temp: float) -> float:
+        """
+        Calculate temperature for retry attempts.
+        """
+        delta_temp = random.choice([-1, 1]) * random.uniform(0.1, 0.9)
+        new_temp = base_temp + delta_temp
+
+        return max(0.0, min(new_temp, 1.5))
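For intuition, `_get_retry_temp` jitters the base temperature by a random amount between 0.1 and 0.9 in either direction, then clamps the result to the range 0.0 to 1.5; the same logic as a standalone sketch:

```python
import random

def retry_temp(base_temp: float) -> float:
    # Random sign times a random magnitude in [0.1, 0.9], then clamp.
    delta = random.choice([-1, 1]) * random.uniform(0.1, 0.9)
    return max(0.0, min(base_temp + delta, 1.5))

print(retry_temp(0.0))  # negative jitter clamps to 0.0; positive gives up to 0.9
```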
texttools/tools/internals/operator.py
CHANGED
@@ -1,4 +1,4 @@
-from typing import Any, TypeVar, Type, Literal
+from typing import Any, TypeVar, Type, Literal, Callable
 import logging
 
 from openai import OpenAI
@@ -6,14 +6,12 @@ from pydantic import BaseModel
 
 from texttools.tools.internals.output_models import ToolOutput
 from texttools.tools.internals.base_operator import BaseOperator
-from texttools.tools.internals.formatters import Formatter
 from texttools.tools.internals.prompt_loader import PromptLoader
 
 # Base Model type for output models
 T = TypeVar("T", bound=BaseModel)
 
-logger = logging.getLogger("operator")
-logger.setLevel(logging.INFO)
+logger = logging.getLogger("texttools.operator")
 
 
 class Operator(BaseOperator):
@@ -52,7 +50,7 @@ class Operator(BaseOperator):
         temperature: float,
         logprobs: bool = False,
         top_logprobs: int = 3,
-    ) -> tuple[
+    ) -> tuple[T, Any]:
         """
         Parses a chat completion using OpenAI's structured output format.
         Returns both the parsed object and the raw completion for logging.
@@ -79,7 +77,7 @@ class Operator(BaseOperator):
         temperature: float,
         logprobs: bool = False,
         top_logprobs: int = 3,
-    ) -> tuple[
+    ) -> tuple[T, Any]:
         """
         Generates a completion using vLLM with JSON schema guidance.
         Returns the parsed output model and raw completion.
@@ -115,6 +113,7 @@ class Operator(BaseOperator):
         temperature: float,
         logprobs: bool,
         top_logprobs: int | None,
+        validator: Callable[[Any], bool] | None,
         # Internal parameters
         prompt_file: str,
         output_model: Type[T],
@@ -126,8 +125,7 @@ class Operator(BaseOperator):
         Execute the LLM pipeline with the given input text.
         """
         prompt_loader = PromptLoader()
-
-        output = ToolOutput(result="", analysis="", logprobs=[], errors=[])
+        output = ToolOutput()
 
         try:
             # Prompt configs contain two keys: main_template and analyze template, both are string
@@ -138,7 +136,7 @@ class Operator(BaseOperator):
                 **extra_kwargs,
             )
 
-            messages
+            messages = []
 
             if with_analysis:
                 analysis = self._analyze(prompt_configs, temperature)
@@ -159,7 +157,7 @@ class Operator(BaseOperator):
             )
 
             messages.append(self._build_user_message(prompt_configs["main_template"]))
-            messages
+            messages
 
             if resp_format == "vllm":
                 parsed, completion = self._vllm_completion(
@@ -179,6 +177,54 @@ class Operator(BaseOperator):
 
             output.result = parsed.result
 
+            # Retry logic if validation fails
+            if validator and not validator(output.result):
+                for attempt in range(self.MAX_RETRIES):
+                    logger.warning(
+                        f"Validation failed, retrying for the {attempt + 1} time."
+                    )
+
+                    # Generate new temperature for retry
+                    retry_temperature = self._get_retry_temp(temperature)
+                    try:
+                        if resp_format == "vllm":
+                            parsed, completion = self._vllm_completion(
+                                messages,
+                                output_model,
+                                retry_temperature,
+                                logprobs,
+                                top_logprobs,
+                            )
+                        elif resp_format == "parse":
+                            parsed, completion = self._parse_completion(
+                                messages,
+                                output_model,
+                                retry_temperature,
+                                logprobs,
+                                top_logprobs,
+                            )
+
+                        output.result = parsed.result
+
+                        # Check if retry was successful
+                        if validator(output.result):
+                            logger.info(
+                                f"Validation passed on retry attempt {attempt + 1}"
+                            )
+                            break
+                        else:
+                            logger.warning(
+                                f"Validation still failing after retry attempt {attempt + 1}"
+                            )
+
+                    except Exception as e:
+                        logger.error(f"Retry attempt {attempt + 1} failed: {e}")
+                        # Continue to next retry attempt if this one fails
+
+            # Final check after all retries
+            if validator and not validator(output.result):
+                output.errors.append("Validation failed after all retry attempts")
+
             if logprobs:
                 output.logprobs = self._extract_logprobs(completion)
 
@@ -188,5 +234,6 @@ class Operator(BaseOperator):
             return output
 
         except Exception as e:
-            logger.error(f"
-
+            logger.error(f"TheTool failed: {e}")
+            output.errors.append(str(e))
+            return output
texttools/tools/internals/output_models.py
CHANGED
@@ -4,10 +4,13 @@ from pydantic import BaseModel, Field
 
 
 class ToolOutput(BaseModel):
-    result:
-    analysis: str
-    logprobs: list[dict[str, Any]]
-    errors: list[str]
+    result: Any = None
+    analysis: str = ""
+    logprobs: list[dict[str, Any]] = []
+    errors: list[str] = []
+
+    def __repr__(self) -> str:
+        return f"ToolOutput(result_type='{type(self.result)}', result='{self.result}', analysis='{self.analysis}', logprobs='{self.logprobs}', errors='{self.errors}'"
 
 
 class StrOutput(BaseModel):
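With these defaults, a `ToolOutput` can be constructed empty and filled as the pipeline runs (pydantic copies the mutable list defaults per instance, so they are not shared). Inspecting one might look like this sketch, reusing the `the_tool` instance from the README example:

```python
detection = the_tool.is_question("Is this project open source?")
if not detection.errors:       # errors collects LLM/validation failures
    print(detection.result)    # the parsed result (Any)
print(repr(detection))         # one-line summary via the custom __repr__
```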
texttools/tools/internals/prompt_loader.py
CHANGED
@@ -11,15 +11,10 @@ class PromptLoader:
     - Load and parse YAML prompt definitions.
     - Select the right template (by mode, if applicable).
     - Inject variables (`{input}`, plus any extra kwargs) into the templates.
-    - Return a dict with:
-        {
-            "main_template": "...",
-            "analyze_template": "..." | None
-        }
     """
 
-    MAIN_TEMPLATE
-    ANALYZE_TEMPLATE
+    MAIN_TEMPLATE = "main_template"
+    ANALYZE_TEMPLATE = "analyze_template"
 
     # Use lru_cache to load each file once
     @lru_cache(maxsize=32)
texttools/tools/{the_tool.py → sync_tools.py}
RENAMED
@@ -1,4 +1,4 @@
-from typing import Literal, Any
+from typing import Literal, Any, Callable
 
 from openai import OpenAI
 
@@ -32,6 +32,7 @@ class TheTool:
         temperature: float | None = 0.0,
         logprobs: bool = False,
         top_logprobs: int | None = None,
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Categorize a text into a single Islamic studies domain category.
@@ -50,6 +51,7 @@ class TheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="categorizer.yaml",
             output_model=OutputModels.CategorizerOutput,
@@ -67,6 +69,7 @@ class TheTool:
         temperature: float | None = 0.0,
         logprobs: bool = False,
         top_logprobs: int | None = None,
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Extract salient keywords from text.
@@ -86,6 +89,7 @@ class TheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="extract_keywords.yaml",
             output_model=OutputModels.ListStrOutput,
@@ -102,6 +106,7 @@ class TheTool:
         temperature: float | None = 0.0,
         logprobs: bool = False,
         top_logprobs: int | None = None,
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Perform Named Entity Recognition (NER) over the input text.
@@ -121,6 +126,7 @@ class TheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="extract_entities.yaml",
             output_model=OutputModels.ListDictStrStrOutput,
@@ -136,6 +142,7 @@ class TheTool:
         temperature: float | None = 0.0,
         logprobs: bool = False,
         top_logprobs: int | None = None,
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Detect if the input is phrased as a question.
@@ -154,6 +161,7 @@ class TheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="is_question.yaml",
             output_model=OutputModels.BoolOutput,
@@ -171,6 +179,7 @@ class TheTool:
         temperature: float | None = 0.0,
         logprobs: bool = False,
         top_logprobs: int | None = None,
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Generate a single question from the given text.
@@ -190,6 +199,7 @@ class TheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="text_to_question.yaml",
             output_model=OutputModels.StrOutput,
@@ -207,6 +217,7 @@ class TheTool:
         logprobs: bool = False,
         top_logprobs: int | None = None,
         mode: Literal["default", "reason"] = "default",
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Merge multiple questions into a single unified question.
@@ -227,6 +238,7 @@ class TheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="merge_questions.yaml",
             output_model=OutputModels.StrOutput,
@@ -244,6 +256,7 @@ class TheTool:
         logprobs: bool = False,
         top_logprobs: int | None = None,
         mode: Literal["positive", "negative", "hard_negative"] = "positive",
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Rewrite a text with different modes.
@@ -263,6 +276,7 @@ class TheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="rewrite.yaml",
             output_model=OutputModels.StrOutput,
@@ -280,6 +294,7 @@ class TheTool:
         temperature: float | None = 0.0,
         logprobs: bool = False,
         top_logprobs: int | None = None,
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Generate a list of questions about a subject.
@@ -300,6 +315,7 @@ class TheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="subject_to_question.yaml",
             output_model=OutputModels.ReasonListStrOutput,
@@ -316,6 +332,7 @@ class TheTool:
         temperature: float | None = 0.0,
         logprobs: bool = False,
         top_logprobs: int | None = None,
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Summarize the given subject text.
@@ -335,6 +352,7 @@ class TheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="summarize.yaml",
             output_model=OutputModels.StrOutput,
@@ -351,6 +369,7 @@ class TheTool:
         temperature: float | None = 0.0,
         logprobs: bool = False,
         top_logprobs: int | None = None,
+        validator: Callable[[Any], bool] | None = None,
     ) -> OutputModels.ToolOutput:
         """
         Translate text between languages.
@@ -370,6 +389,7 @@ class TheTool:
             temperature=temperature,
             logprobs=logprobs,
             top_logprobs=top_logprobs,
+            validator=validator,
             # Internal parameters
             prompt_file="translate.yaml",
             output_model=OutputModels.StrOutput,
@@ -409,4 +429,5 @@ class TheTool:
             user_prompt=None,
             with_analysis=False,
             mode=None,
+            validator=None,
         )
hamtaa_texttools-1.1.2.dist-info/RECORD
DELETED
@@ -1,30 +0,0 @@
-hamtaa_texttools-1.1.2.dist-info/licenses/LICENSE,sha256=Hb2YOBKy2MJQLnyLrX37B4ZVuac8eaIcE71SvVIMOLg,1082
-texttools/__init__.py,sha256=v3tQCH_Cjj47fCpuhK6sKSVAqEjNkc-cZbY4OJa4IZw,202
-texttools/batch/__init__.py,sha256=q50JsQsmQGp_8RW0KNasYeYWVV0R4FUNZ-ujXwEJemY,143
-texttools/batch/batch_manager.py,sha256=UCJaOq-tTy6kTZJvFuBDBSmzlDeVlLTeFlR83e6eXkQ,8808
-texttools/batch/batch_runner.py,sha256=JNlwKkCdSDGOxp7Wwl4F2MaXMcmXK9DG0oaNOqUKNpI,10293
-texttools/prompts/README.md,sha256=rclMaCV1N8gT1KcpZu0-ka0dKGNg2f1CEcRMdQkgQOc,1379
-texttools/prompts/categorizer.yaml,sha256=GMqIIzQFhgnlpkgU1qi3FAD3mD4A2jiWD5TilQ2XnnE,1204
-texttools/prompts/extract_entities.yaml,sha256=KiKjeDpHaeh3JVtZ6q1pa3k4DYucUIU9WnEcRTCA-SE,651
-texttools/prompts/extract_keywords.yaml,sha256=0O7ypL_OsEOxtvlQ2CZjnsv9637DJwAKprZsf9Vo2_s,769
-texttools/prompts/is_question.yaml,sha256=d0-vKRbXWkxvO64ikvxRjEmpAXGpCYIPGhgexvPPjws,471
-texttools/prompts/merge_questions.yaml,sha256=0J85GvTirZB4ELwH3sk8ub_WcqqpYf6PrMKr3djlZeo,1792
-texttools/prompts/rewrite.yaml,sha256=LO7He_IA3MZKz8a-LxH9DHJpOjpYwaYN1pbjp1Y0tFo,5392
-texttools/prompts/run_custom.yaml,sha256=38OkCoVITbuuS9c08UZSP1jZW4WjSmRIi8fR0RAiPu4,108
-texttools/prompts/subject_to_question.yaml,sha256=C7x7rNNm6U_ZG9HOn6zuzYOtvJUZ2skuWbL1-aYdd3E,1147
-texttools/prompts/summarize.yaml,sha256=o6rxGPfWtZd61Duvm8NVvCJqfq73b-wAuMSKR6UYUqY,459
-texttools/prompts/text_to_question.yaml,sha256=UheKYpDn6iyKI8NxunHZtFpNyfCLZZe5cvkuXpurUJY,783
-texttools/prompts/translate.yaml,sha256=mGT2uBCei6uucWqVbs4silk-UV060v3G0jnt0P6sr50,634
-texttools/tools/__init__.py,sha256=hG1I28Q7BJ1Dbs95x6QMKXdsAlC5Eh_tqC-EbAibwiU,114
-texttools/tools/async_the_tool.py,sha256=ORn3xk6zJ3HdDNo_zj9cTlY05zsQ1un5H96xLIUUOg4,14446
-texttools/tools/the_tool.py,sha256=q-bvYv38CuBMstYOV_42zv2frHgsaLRzo_XAGNrdNWA,14212
-texttools/tools/internals/async_operator.py,sha256=vZvU5XPdNKz--ZYheEQOIRzVNi-Ni1x9GxgcDBlbYJM,6760
-texttools/tools/internals/base_operator.py,sha256=3GUSEN87Y_1CqkeK_QXkFG8O7GPliGSVUtiPp1taQRM,3051
-texttools/tools/internals/formatters.py,sha256=tACNLP6PeoqaRpNudVxBaHA25zyWqWYPZQuYysIu88g,941
-texttools/tools/internals/operator.py,sha256=M-0-6SmRmW7PX4CWhMAgoSwtk_2w1_YrItK3FhdObCU,6659
-texttools/tools/internals/output_models.py,sha256=gbVbzBWeyHUVNsCBuawdgz9ZEzsC7wfygGgZJsAaexY,1662
-texttools/tools/internals/prompt_loader.py,sha256=1khayXcRC5w0Vf2SufpNaN1IUIhbKzS5ATiKheoBcGE,2082
-hamtaa_texttools-1.1.2.dist-info/METADATA,sha256=J2YfhGNN2SpOq5P3iOsniaUzpuUaP20dTlPBiQX6Gss,7144
-hamtaa_texttools-1.1.2.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
-hamtaa_texttools-1.1.2.dist-info/top_level.txt,sha256=5Mh0jIxxZ5rOXHGJ6Mp-JPKviywwN0MYuH0xk5bEWqE,10
-hamtaa_texttools-1.1.2.dist-info/RECORD,,
{hamtaa_texttools-1.1.2.dist-info → hamtaa_texttools-1.1.9.dist-info}/WHEEL
File without changes
{hamtaa_texttools-1.1.2.dist-info → hamtaa_texttools-1.1.9.dist-info}/licenses/LICENSE
File without changes
{hamtaa_texttools-1.1.2.dist-info → hamtaa_texttools-1.1.9.dist-info}/top_level.txt
File without changes