hamtaa-texttools 1.1.13__py3-none-any.whl → 1.1.14__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: hamtaa-texttools
- Version: 1.1.13
+ Version: 1.1.14
  Summary: A high-level NLP toolkit built on top of modern LLMs.
  Author-email: Tohidi <the.mohammad.tohidi@gmail.com>, Montazer <montazerh82@gmail.com>, Givechi <mohamad.m.givechi@gmail.com>, MoosaviNejad <erfanmoosavi84@gmail.com>
  License: MIT License
@@ -50,7 +50,7 @@ It provides ready-to-use utilities for **translation, question detection, keywor
  TextTools provides a rich collection of high-level NLP utilities,
  Each tool is designed to work with structured outputs (JSON / Pydantic).

- - **`categorize()`** - Classifies text into Islamic studies categories
+ - **`categorize()`** - Classifies text into given categories (You have to create a category tree)
  - **`extract_keywords()`** - Extracts keywords from text
  - **`extract_entities()`** - Named Entity Recognition (NER) system
  - **`is_question()`** - Binary detection of whether input is a question
@@ -64,7 +64,7 @@ Each tool is designed to work with structured outputs (JSON / Pydantic).

  ---

- ## ⚙️ `with_analysis`, `logprobs`, `output_lang`, `user_prompt`, `temperature` and `validator` parameters
+ ## ⚙️ `with_analysis`, `logprobs`, `output_lang`, `user_prompt`, `temperature`, `validator` and `priority` parameters

  TextTools provides several optional flags to customize LLM behavior:

@@ -72,6 +72,7 @@ TextTools provides several optional flags to customize LLM behavior:
  **Note:** This doubles token usage per call because it triggers an additional LLM request.

  - **`logprobs (bool)`** → Returns token-level probabilities for the generated output. You can also specify `top_logprobs=<N>` to get the top N alternative tokens and their probabilities.
+ **Note:** This feature works if it's supported by the model.

  - **`output_lang (str)`** → Forces the model to respond in a specific language. The model will ignore other instructions about language and respond strictly in the requested language.

@@ -79,9 +80,10 @@ TextTools provides several optional flags to customize LLM behavior:

  - **`temperature (float)`** → Determines how creative the model should respond. Takes a float number from `0.0` to `2.0`.

- - **`validator (Callable)`** → Forces TheTool to validate the output result based on your custom validator. Validator should return bool (True if there were no problem, False if the validation failed.) If validator failed, TheTool will retry to get another output by modifying `temperature`. You can specify `max_validation_retries=<N>` to change the number of retries.
+ - **`validator (Callable)`** → Forces TheTool to validate the output result based on your custom validator. Validator should return a bool (True if there were no problem, False if the validation fails.) If the validator fails, TheTool will retry to get another output by modifying `temperature`. You can specify `max_validation_retries=<N>` to change the number of retries.

- All these parameters can be used individually or together to tailor the behavior of any tool in **TextTools**.
+ - **`priority (int)`** Task execution priority level. Higher values = higher priority. Affects processing order in queues.
+ **Note:** This feature works if it's supported by the model and vLLM.

  **Note:** There might be some tools that don't support some of the parameters above.

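The retry-on-validation-failure behavior that the `validator` bullet describes can be sketched in isolation. This is a hypothetical, self-contained mock based only on the README wording, not the package's actual implementation; the temperature increment is an assumption:

```python
from collections.abc import Callable

def call_with_validation(
    generate: Callable[[float], str],
    validator: Callable[[str], bool],
    temperature: float = 0.0,
    max_validation_retries: int = 3,
) -> str:
    """Re-run generation with a nudged temperature until the validator passes."""
    result = generate(temperature)
    for _ in range(max_validation_retries):
        if validator(result):
            return result
        # Raise temperature (capped at 2.0) to encourage a different output on retry.
        temperature = min(temperature + 0.3, 2.0)
        result = generate(temperature)
    return result  # last attempt, returned even if validation still fails

# Toy generator: produces empty answers before a usable one.
outputs = iter(["", "", "a real answer"])
result = call_with_validation(
    generate=lambda t: next(outputs),
    validator=lambda s: len(s) > 0,
)
```

The real `TheTool` presumably wires this loop around its LLM call; the sketch only illustrates the retry contract (boolean validator, bounded retries, modified `temperature`).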
@@ -95,7 +97,7 @@ Every tool of `TextTools` returns a `ToolOutput` object which is a BaseModel wit
  - **`logprobs (list)`** → Token-level probabilities for the generated output
  - **`errors (list[str])`** → Any error that have occured during calling LLM

- **None:** You can use `repr(ToolOutput)` to see details of an output.
+ **Note:** You can use `repr(ToolOutput)` to see details of an output.

  ---

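The `ToolOutput` fields listed in this hunk can be mimicked with a plain dataclass to show the `repr(...)` inspection tip. This mock is an assumption for illustration only; the real class is a Pydantic BaseModel with these field names:

```python
from dataclasses import dataclass, field

@dataclass
class ToolOutputMock:
    """Stand-in for the ToolOutput fields the README lists (result, logprobs, errors)."""
    result: object = None
    logprobs: list = field(default_factory=list)
    errors: list = field(default_factory=list)

out = ToolOutputMock(result=["keyword1", "keyword2"])
details = repr(out)  # dataclass repr shows all fields, similar in spirit to the README tip
```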
@@ -1,13 +1,14 @@
- hamtaa_texttools-1.1.13.dist-info/licenses/LICENSE,sha256=Hb2YOBKy2MJQLnyLrX37B4ZVuac8eaIcE71SvVIMOLg,1082
- texttools/__init__.py,sha256=EZPPNPafVGvBaxjG9anP0piqH3gAC0DdjdAckQeAgNU,251
- texttools/batch/batch_config.py,sha256=FCDXy9TfH7xjd1PHvn_CtdwEQSq-YO5sktiaMZEId58,740
- texttools/batch/batch_runner.py,sha256=zzzVIXedmaq-8fqsFtGRR64F7CtYRLlhQeBu8uMwJQg,9385
+ hamtaa_texttools-1.1.14.dist-info/licenses/LICENSE,sha256=Hb2YOBKy2MJQLnyLrX37B4ZVuac8eaIcE71SvVIMOLg,1082
+ texttools/__init__.py,sha256=dc81lXGWP29k7oVvq2BMoMotz6lgiwX4PO2jHHBe2S8,317
+ texttools/batch/batch_config.py,sha256=m1UgILVKjNdWE6laNbfbG4vgi4o2fEegGZbeoam6pnY,749
+ texttools/batch/batch_runner.py,sha256=9e4SPLlvLHHs3U7bHkuuMVw8TFNwsGUzRjkAMKN4_ik,9378
  texttools/batch/internals/batch_manager.py,sha256=UoBe76vmFG72qrSaGKDZf4HzkykFBkkkbL9TLfV8TuQ,8730
  texttools/batch/internals/utils.py,sha256=F1_7YlVFKhjUROAFX4m0SaP8KiZVZyHRMIIB87VUGQc,373
  texttools/prompts/README.md,sha256=-5YO93CN93QLifqZpUeUnCOCBbDiOTV-cFQeJ7Gg0I4,1377
- texttools/prompts/categorizer.yaml,sha256=GMqIIzQFhgnlpkgU1qi3FAD3mD4A2jiWD5TilQ2XnnE,1204
+ texttools/prompts/categorize.yaml,sha256=F7VezB25B_sT5yoC25ezODBddkuDD5lUHKetSpx9FKI,2743
+ texttools/prompts/detect_entity.yaml,sha256=1rhMkJOjxSQcT4j_c5SRcIm77AUdeG-rUmeidb6VOFc,981
  texttools/prompts/extract_entities.yaml,sha256=KiKjeDpHaeh3JVtZ6q1pa3k4DYucUIU9WnEcRTCA-SE,651
- texttools/prompts/extract_keywords.yaml,sha256=0O7ypL_OsEOxtvlQ2CZjnsv9637DJwAKprZsf9Vo2_s,769
+ texttools/prompts/extract_keywords.yaml,sha256=Vj4Tt3vT6LtpOo_iBZPo9oWI50oVdPGXe5i8yDR8ex4,3177
  texttools/prompts/is_question.yaml,sha256=d0-vKRbXWkxvO64ikvxRjEmpAXGpCYIPGhgexvPPjws,471
  texttools/prompts/merge_questions.yaml,sha256=0J85GvTirZB4ELwH3sk8ub_WcqqpYf6PrMKr3djlZeo,1792
  texttools/prompts/rewrite.yaml,sha256=LO7He_IA3MZKz8a-LxH9DHJpOjpYwaYN1pbjp1Y0tFo,5392
@@ -16,15 +17,15 @@ texttools/prompts/subject_to_question.yaml,sha256=C7x7rNNm6U_ZG9HOn6zuzYOtvJUZ2s
  texttools/prompts/summarize.yaml,sha256=o6rxGPfWtZd61Duvm8NVvCJqfq73b-wAuMSKR6UYUqY,459
  texttools/prompts/text_to_question.yaml,sha256=UheKYpDn6iyKI8NxunHZtFpNyfCLZZe5cvkuXpurUJY,783
  texttools/prompts/translate.yaml,sha256=mGT2uBCei6uucWqVbs4silk-UV060v3G0jnt0P6sr50,634
- texttools/tools/async_tools.py,sha256=60VAAZyVRxI2rKVFFiCnbY--F4kNtVxYQticE0RyhOs,24677
- texttools/tools/sync_tools.py,sha256=F5TN3KQ_vlF7AC9J0vm2NzjIZC19Ox11tpc9K1SMRwQ,24448
- texttools/tools/internals/async_operator.py,sha256=9OzF5FFXYrXX1C6ZDbad1zw9A6BZsDQ65jQVrpqTlPw,6961
+ texttools/tools/async_tools.py,sha256=2bWwHd5eaUZwEt4u0rsl6K7QTpE0r83sd8nGaMqStyA,31994
+ texttools/tools/sync_tools.py,sha256=DduJMkHtP0pLiEVbGtfuVq__kVhyMy8Hwi528OusyGc,31747
+ texttools/tools/internals/async_operator.py,sha256=kVYClefPlcIhaMwcfh2rNX0GROaStjXdVe6bh_XTGKM,7149
  texttools/tools/internals/formatters.py,sha256=tACNLP6PeoqaRpNudVxBaHA25zyWqWYPZQuYysIu88g,941
+ texttools/tools/internals/models.py,sha256=VH9l4d_ER8skBCTj5w9ISdRnsUXb3H2OVehz9r7VgdM,5630
  texttools/tools/internals/operator_utils.py,sha256=w1k0RJ_W_CRbVc_J2w337VuL-opHpHiCxfhEOwtyuOo,1856
- texttools/tools/internals/output_models.py,sha256=ekpbyocmXj_dee7ieOT1zOkMo9cPHT7xcUFCZoUaXA0,1886
  texttools/tools/internals/prompt_loader.py,sha256=4g6-U8kqrGN7VpNaRcrBcnF-h03PXjUDBP0lL0_4EZY,1953
- texttools/tools/internals/sync_operator.py,sha256=zbFLbFvaT9hAdIgpbDv17ljuqqu6ZeIOwCquM4gHTI8,6867
- hamtaa_texttools-1.1.13.dist-info/METADATA,sha256=4qkKZKb9DEb1vx2FMD-xRBDj_LCnDeytl-ea3zMVKdc,9179
- hamtaa_texttools-1.1.13.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- hamtaa_texttools-1.1.13.dist-info/top_level.txt,sha256=5Mh0jIxxZ5rOXHGJ6Mp-JPKviywwN0MYuH0xk5bEWqE,10
- hamtaa_texttools-1.1.13.dist-info/RECORD,,
+ texttools/tools/internals/sync_operator.py,sha256=5kZWO8CGgoXx1FGFOWU1wnj2RWW8_KNEMGnQCkmbYwA,7057
+ hamtaa_texttools-1.1.14.dist-info/METADATA,sha256=Smh1J0R1JjMkeNOwc-bIEsAVDexdERwRM3T6ZpBcDkk,9370
+ hamtaa_texttools-1.1.14.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ hamtaa_texttools-1.1.14.dist-info/top_level.txt,sha256=5Mh0jIxxZ5rOXHGJ6Mp-JPKviywwN0MYuH0xk5bEWqE,10
+ hamtaa_texttools-1.1.14.dist-info/RECORD,,
texttools/__init__.py CHANGED
@@ -2,5 +2,6 @@ from .batch.batch_runner import BatchJobRunner
  from .batch.batch_config import BatchConfig
  from .tools.sync_tools import TheTool
  from .tools.async_tools import AsyncTheTool
+ from .tools.internals.models import CategoryTree

- __all__ = ["TheTool", "AsyncTheTool", "BatchJobRunner", "BatchConfig"]
+ __all__ = ["TheTool", "AsyncTheTool", "BatchJobRunner", "BatchConfig", "CategoryTree"]
@@ -1,5 +1,5 @@
  from dataclasses import dataclass
- from typing import Callable
+ from collections.abc import Callable

  from texttools.batch.internals.utils import import_data, export_data

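The import swap above follows standard Python guidance: since Python 3.9 the `typing.Callable` alias is deprecated in favor of `collections.abc.Callable`, which serves both as an annotation and as a runtime `isinstance` target. A minimal illustration:

```python
from collections.abc import Callable

def apply(fn: Callable[[int], int], x: int) -> int:
    """Annotate with abc.Callable exactly as typing.Callable was used before."""
    return fn(x)

# Unlike the typing alias, the abc class also works in runtime isinstance checks.
is_callable = isinstance(len, Callable)
value = apply(lambda n: n * 2, 21)
```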
@@ -11,7 +11,7 @@ from pydantic import BaseModel

  from texttools.batch.internals.batch_manager import BatchManager
  from texttools.batch.batch_config import BatchConfig
- from texttools.tools.internals.output_models import StrOutput
+ from texttools.tools.internals.models import StrOutput

  # Base Model type for output models
  T = TypeVar("T", bound=BaseModel)
texttools/prompts/categorize.yaml ADDED
@@ -0,0 +1,77 @@
+ main_template:
+
+   category_list: |
+     You are an expert classification agent.
+     You receive a list of categories.
+
+     Your task:
+     - Read all provided categories carefully.
+     - Consider the user query, intent, and task explanation.
+     - Select exactly one category name from the list that best matches the user’s intent.
+     - Return only the category name, nothing else.
+
+     Rules:
+     - Never invent categories that are not in the list.
+     - If multiple categories seem possible, choose the closest match based on the description and user intent.
+     - If descriptions are missing or empty, rely on the category name.
+     - If the correct answer cannot be determined with certainty, choose the most likely one.
+
+     Output format:
+     {{
+       "reason": "Explanation of why the input belongs to the category"
+       "result": "<category_name_only>"
+     }}
+
+     Available categories with their descriptions:
+     {category_list}
+
+     The text that has to be categorized:
+     {input}
+
+   category_tree: |
+     You are an expert classification agent.
+     You receive a list of categories at the current level of a hierarchical category tree.
+
+     Your task:
+     - Read all provided categories carefully.
+     - Consider the user query, intent, and task explanation.
+     - Select exactly one category name from the list that best matches the user’s intent.
+     - Return only the category name, nothing else.
+
+     Rules:
+     - Never invent categories that are not in the list.
+     - If multiple categories seem possible, choose the closest match based on the description and user intent.
+     - If descriptions are missing or empty, rely on the category name.
+     - If the correct answer cannot be determined with certainty, choose the most likely one.
+
+     Output format:
+     {{
+       "reason": "Explanation of why the input belongs to the category"
+       "result": "<category_name_only>"
+     }}
+
+     Available categories with their descriptions at this level:
+     {category_list}
+
+     Do not include category descriptions at all. Only write the raw category.
+
+     The text that has to be categorized:
+     {input}
+
+ analyze_template:
+
+   category_list: |
+     We want to categorize the given text.
+     To improve categorization, we need an analysis of the text.
+     Analyze the given text and write its main idea and a short analysis of that.
+     Analysis should be very short.
+     Text:
+     {input}
+
+   category_tree: |
+     We want to categorize the given text.
+     To improve categorization, we need an analysis of the text.
+     Analyze the given text and write its main idea and a short analysis of that.
+     Analysis should be very short.
+     Text:
+     {input}
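The doubled braces in the templates above are `str.format` escapes: `{{` and `}}` render as literal JSON braces, while single-brace fields like `{category_list}` and `{input}` remain substitutable. A minimal sketch with an inline, shortened stand-in for one template (assuming the package fills placeholders via `str.format`, which the `{{ }}` escaping implies):

```python
# Shortened inline stand-in for the categorize prompt template.
template = (
    "Output format:\n"
    "{{\n"
    '  "reason": "Explanation of why the input belongs to the category"\n'
    '  "result": "<category_name_only>"\n'
    "}}\n"
    "Available categories with their descriptions:\n"
    "{category_list}\n"
    "The text that has to be categorized:\n"
    "{input}\n"
)

# {{/}} collapse to literal braces; {category_list} and {input} are filled in.
prompt = template.format(
    category_list="- history: historical topics\n- theology: doctrinal topics",
    input="A short passage to classify.",
)
```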
texttools/prompts/detect_entity.yaml ADDED
@@ -0,0 +1,22 @@
+ main_template: |
+   You are an expert Named Entity Recognition (NER) system. Extract entities from the text.
+   The output must strictly follow the provided Pydantic schema.
+
+   Mapping Rule:
+   - Person: شخص
+   - Location: مکان
+   - Time: زمان
+   - Living Beings: موجود زنده
+   - Organization: سازمان
+   - Concept: مفهوم
+
+   CRITICAL:
+   1. The final output structure must be a complete JSON object matching the Pydantic schema (List[Entity]).
+   2. Both the extracted text and the type must be in Persian, using the exact mapping provided above.
+
+   Here is the text: {input}
+
+ analyze_template: |
+   Analyze the following text to identify all potential named entities and their categories (Person, Location, Time, Living Beings, Organization, Concept).
+   Provide a brief summary of the entities identified that will help the main process to extract them accurately and apply the correct Persian type label.
+   Here is the text: {input}
texttools/prompts/extract_keywords.yaml CHANGED
@@ -1,18 +1,68 @@
- main_template: |
-   You are an expert keyword extractor.
-   Extract the most relevant keywords from the given text.
-   Guidelines:
-   - Keywords must represent the main concepts of the text.
-   - If two words have overlapping meanings, choose only one.
-   - Do not include generic or unrelated words.
-   - Keywords must be single, self-contained words (no phrases).
-   - Output between 3 and 7 keywords based on the input length.
-   - Respond only in JSON format:
-     {{"result": ["keyword1", "keyword2", etc.]}}
-   Here is the text:
-   {input}
-
- analyze_template: |
-   Analyze the following text to identify its main topics, concepts, and important terms.
-   Provide a concise summary of your findings that will help in extracting relevant keywords.
-   {input}
+ main_template:
+
+   auto: |
+     You are an expert keyword extractor.
+     Extract the most relevant keywords from the given text.
+     Guidelines:
+     - Keywords must represent the main concepts of the text.
+     - If two words have overlapping meanings, choose only one.
+     - Do not include generic or unrelated words.
+     - Keywords must be single, self-contained words (no phrases).
+     - Output between 3 and 7 keywords based on the input length.
+     - Respond only in JSON format:
+       {{"result": ["keyword1", "keyword2", etc.]}}
+     Here is the text:
+     {input}
+
+   threshold: |
+     You are an expert keyword extractor specialized in fine-grained concept identification.
+     Extract the most specific, content-bearing keywords from the text.
+
+     Requirements:
+     - Choose fine-grained conceptual terms, not general domain labels.
+     - Avoid words that only describe the broad topic (e.g., Islam, religion, philosophy, history).
+     - Prefer specific names, concepts, doctrines, events, arguments, or terminology.
+     - Do not select words only because they appear frequently. A keyword must represent a central conceptual idea, not a repeated surface term.
+     - If multiple words express overlapping meaning, select the more specific one.
+     - Keywords must be single words (no multi-word expressions).
+     - Extract N keywords depending on input length:
+       - Short texts (a few sentences): 3 keywords
+       - Medium texts (1–4 paragraphs): 4–5 keywords
+       - Long texts (more than 4 paragraphs): 6–7 keywords
+     - Respond only in JSON format:
+       {{"result": ["keyword1", "keyword2", etc.]}}
+     Here is the text:
+     {input}
+
+   count: |
+     You are an expert keyword extractor with precise output requirements.
+     Extract exactly {number_of_keywords} keywords from the given text.
+
+     Requirements:
+     - Extract exactly {number_of_keywords} keywords, no more, no less.
+     - Select the {number_of_keywords} most relevant and specific keywords that represent core concepts.
+     - Prefer specific terms, names, and concepts over general topic labels.
+     - If the text doesn't contain enough distinct keywords, include the most relevant ones even if some are less specific.
+     - Keywords must be single words (no multi-word expressions).
+     - Order keywords by relevance (most relevant first).
+     - Respond only in JSON format:
+       {{"result": ["keyword1", "keyword2", "keyword3", ...]}}
+
+     Here is the text:
+     {input}
+
+ analyze_template:
+   auto: |
+     Analyze the following text to identify its main topics, concepts, and important terms.
+     Provide a concise summary of your findings that will help in extracting relevant keywords.
+     {input}
+
+   threshold: |
+     Analyze the following text to identify its main topics, concepts, and important terms.
+     Provide a concise summary of your findings that will help in extracting relevant keywords.
+     {input}
+
+   count: |
+     Analyze the following text to identify its main topics, concepts, and important terms.
+     Provide a concise summary of your findings that will help in extracting relevant keywords.
+     {input}
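The restructured YAML above turns `main_template` from a single prompt into three variants (`auto`, `threshold`, `count`). Selecting one can be sketched as a plain key lookup; this is a hypothetical dispatch over an inline mock of the mapping, not the package's actual prompt loader:

```python
# Mock of the restructured YAML: one (shortened) template per extraction mode.
main_template = {
    "auto": "Extract the most relevant keywords from the given text.\n{input}",
    "threshold": "Extract the most specific, content-bearing keywords.\n{input}",
    "count": "Extract exactly {number_of_keywords} keywords from the given text.\n{input}",
}

def build_prompt(mode, text, number_of_keywords=None):
    """Pick the template for the requested mode; only 'count' needs the extra field."""
    template = main_template[mode]
    if mode == "count":
        return template.format(input=text, number_of_keywords=number_of_keywords)
    return template.format(input=text)

prompt = build_prompt("count", "Some article text", number_of_keywords=5)
```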