hamtaa-texttools 1.0.2__tar.gz → 1.0.3__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.


This version of hamtaa-texttools might be problematic.

Files changed (41)
  1. {hamtaa_texttools-1.0.2/hamtaa_texttools.egg-info → hamtaa_texttools-1.0.3}/PKG-INFO +18 -6
  2. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/README.md +17 -5
  3. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3/hamtaa_texttools.egg-info}/PKG-INFO +18 -6
  4. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/hamtaa_texttools.egg-info/SOURCES.txt +8 -8
  5. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/pyproject.toml +1 -1
  6. hamtaa_texttools-1.0.3/texttools/__init__.py +9 -0
  7. {hamtaa_texttools-1.0.2/texttools/utils/batch_manager → hamtaa_texttools-1.0.3/texttools/batch}/batch_runner.py +1 -1
  8. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/texttools/formatters/user_merge_formatter/user_merge_formatter.py +0 -17
  9. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/texttools/prompts/README.md +5 -5
  10. hamtaa_texttools-1.0.3/texttools/prompts/categorizer.yaml +31 -0
  11. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/texttools/prompts/keyword_extractor.yaml +4 -1
  12. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/texttools/prompts/ner_extractor.yaml +4 -1
  13. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/texttools/prompts/question_detector.yaml +5 -2
  14. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/texttools/prompts/question_generator.yaml +4 -3
  15. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/texttools/prompts/question_merger.yaml +6 -4
  16. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/texttools/prompts/question_rewriter.yaml +6 -4
  17. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/texttools/prompts/subject_question_generator.yaml +3 -4
  18. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/texttools/prompts/summarizer.yaml +1 -0
  19. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/texttools/prompts/translator.yaml +1 -0
  20. hamtaa_texttools-1.0.3/texttools/tools/__init__.py +4 -0
  21. hamtaa_texttools-1.0.3/texttools/tools/async_the_tool.py +263 -0
  22. hamtaa_texttools-1.0.3/texttools/tools/internals/async_operator.py +288 -0
  23. {hamtaa_texttools-1.0.2/texttools/tools → hamtaa_texttools-1.0.3/texttools/tools/internals}/operator.py +133 -63
  24. {hamtaa_texttools-1.0.2/texttools/tools → hamtaa_texttools-1.0.3/texttools/tools/internals}/output_models.py +8 -0
  25. {hamtaa_texttools-1.0.2/texttools/tools → hamtaa_texttools-1.0.3/texttools/tools/internals}/prompt_loader.py +16 -18
  26. hamtaa_texttools-1.0.3/texttools/tools/the_tool.py +400 -0
  27. hamtaa_texttools-1.0.2/tests/test_tools.py +0 -65
  28. hamtaa_texttools-1.0.2/texttools/__init__.py +0 -9
  29. hamtaa_texttools-1.0.2/texttools/prompts/categorizer.yaml +0 -25
  30. hamtaa_texttools-1.0.2/texttools/tools/__init__.py +0 -3
  31. hamtaa_texttools-1.0.2/texttools/tools/the_tool.py +0 -291
  32. hamtaa_texttools-1.0.2/texttools/utils/__init__.py +0 -4
  33. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/LICENSE +0 -0
  34. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/MANIFEST.in +0 -0
  35. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/hamtaa_texttools.egg-info/dependency_links.txt +0 -0
  36. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/hamtaa_texttools.egg-info/requires.txt +0 -0
  37. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/hamtaa_texttools.egg-info/top_level.txt +0 -0
  38. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/setup.cfg +0 -0
  39. {hamtaa_texttools-1.0.2/texttools/utils/batch_manager → hamtaa_texttools-1.0.3/texttools/batch}/__init__.py +0 -0
  40. {hamtaa_texttools-1.0.2/texttools/utils/batch_manager → hamtaa_texttools-1.0.3/texttools/batch}/batch_manager.py +0 -0
  41. {hamtaa_texttools-1.0.2 → hamtaa_texttools-1.0.3}/texttools/formatters/base_formatter.py +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: hamtaa-texttools
- Version: 1.0.2
+ Version: 1.0.3
  Summary: TextTools is a high-level NLP toolkit built on top of modern LLMs.
  Author-email: Tohidi <the.mohammad.tohidi@gmail.com>, Montazer <montazerh82@gmail.com>, Givechi <mohamad.m.givechi@gmail.com>, MoosaviNejad <erfanmoosavi84@gmail.com>
  License: MIT License
@@ -37,8 +37,11 @@ Dynamic: license-file
  ## 📌 Overview

  **TextTools** is a high-level **NLP toolkit** built on top of modern **LLMs**.
+
  It provides ready-to-use utilities for **translation, question detection, keyword extraction, categorization, NER extractor, and more** — designed to help you integrate AI-powered text processing into your applications with minimal effort.

+ **Thread Safety:** All methods in TheTool are thread-safe, allowing concurrent usage across multiple threads without conflicts.
+
  ---

  ## ✨ Features
@@ -59,11 +62,20 @@ Each tool is designed to work out-of-the-box with structured outputs (JSON / Pyd

  ---

- ## 🔍 `with_analysis` Mode
+ ## ⚙️ with_analysis, logprobs, output_lang, and user_prompt parameters
+
+ TextTools provides several optional flags to customize LLM behavior:
+
+ - **`with_analysis=True`** → Adds a reasoning step before generating the final output. Useful for debugging, improving prompts, or understanding model behavior.
+   ⚠️ Note: This doubles token usage per call because it triggers an additional LLM request.
+
+ - **`logprobs=True`** → Returns token-level probabilities for the generated output. You can also specify `top_logprobs=<N>` to get the top N alternative tokens and their probabilities.
+
+ - **`output_lang="en"`** → Forces the model to respond in a specific language. The model will ignore other instructions about language and respond strictly in the requested language.

- The `with_analysis=True` flag enhances the tool's output by providing a detailed reasoning chain behind its result. This is valuable for debugging, improving prompts, or understanding model behavior.
+ - **`user_prompt="..."`** Allows you to inject a custom instruction or prompt into the model alongside the main template. This gives you fine-grained control over how the model interprets or modifies the input text.

- **Please be aware:** This feature works by making an additional LLM API call for each tool invocation, which will **effectively double your token usage** for that operation.
+ All these flags can be used individually or together to tailor the behavior of any tool in **TextTools**.

  ---

@@ -99,7 +111,7 @@ print(the_tool.detect_question("Is this project open source?")["result"])
  # Output: True

  # Example: Translation
- print(the_tool.translate("سلام، حالت چطوره؟")["result"])
+ print(the_tool.translate("سلام، حالت چطوره؟", target_language="English")["result"])
  # Output: "Hi! How are you?"
  ```

@@ -113,7 +125,7 @@ Use **TextTools** when you need to:
  - 🌍 **Translate** and process multilingual corpora with ease
  - 🧩 **Integrate** LLMs into production pipelines (structured outputs)
  - 📊 **Analyze** large text collections using embeddings and categorization
- - ⚙️ **Automate** common text-processing tasks without reinventing the wheel
+ - 👍 **Automate** common text-processing tasks without reinventing the wheel

  ---

@@ -3,8 +3,11 @@
  ## 📌 Overview

  **TextTools** is a high-level **NLP toolkit** built on top of modern **LLMs**.
+
  It provides ready-to-use utilities for **translation, question detection, keyword extraction, categorization, NER extractor, and more** — designed to help you integrate AI-powered text processing into your applications with minimal effort.

+ **Thread Safety:** All methods in TheTool are thread-safe, allowing concurrent usage across multiple threads without conflicts.
+
  ---

  ## ✨ Features
@@ -25,11 +28,20 @@ Each tool is designed to work out-of-the-box with structured outputs (JSON / Pyd

  ---

- ## 🔍 `with_analysis` Mode
+ ## ⚙️ with_analysis, logprobs, output_lang, and user_prompt parameters
+
+ TextTools provides several optional flags to customize LLM behavior:
+
+ - **`with_analysis=True`** → Adds a reasoning step before generating the final output. Useful for debugging, improving prompts, or understanding model behavior.
+   ⚠️ Note: This doubles token usage per call because it triggers an additional LLM request.
+
+ - **`logprobs=True`** → Returns token-level probabilities for the generated output. You can also specify `top_logprobs=<N>` to get the top N alternative tokens and their probabilities.
+
+ - **`output_lang="en"`** → Forces the model to respond in a specific language. The model will ignore other instructions about language and respond strictly in the requested language.

- The `with_analysis=True` flag enhances the tool's output by providing a detailed reasoning chain behind its result. This is valuable for debugging, improving prompts, or understanding model behavior.
+ - **`user_prompt="..."`** Allows you to inject a custom instruction or prompt into the model alongside the main template. This gives you fine-grained control over how the model interprets or modifies the input text.

- **Please be aware:** This feature works by making an additional LLM API call for each tool invocation, which will **effectively double your token usage** for that operation.
+ All these flags can be used individually or together to tailor the behavior of any tool in **TextTools**.

  ---

@@ -65,7 +77,7 @@ print(the_tool.detect_question("Is this project open source?")["result"])
  # Output: True

  # Example: Translation
- print(the_tool.translate("سلام، حالت چطوره؟")["result"])
+ print(the_tool.translate("سلام، حالت چطوره؟", target_language="English")["result"])
  # Output: "Hi! How are you?"
  ```

@@ -79,7 +91,7 @@ Use **TextTools** when you need to:
  - 🌍 **Translate** and process multilingual corpora with ease
  - 🧩 **Integrate** LLMs into production pipelines (structured outputs)
  - 📊 **Analyze** large text collections using embeddings and categorization
- - ⚙️ **Automate** common text-processing tasks without reinventing the wheel
+ - 👍 **Automate** common text-processing tasks without reinventing the wheel

  ---

@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: hamtaa-texttools
- Version: 1.0.2
+ Version: 1.0.3
  Summary: TextTools is a high-level NLP toolkit built on top of modern LLMs.
  Author-email: Tohidi <the.mohammad.tohidi@gmail.com>, Montazer <montazerh82@gmail.com>, Givechi <mohamad.m.givechi@gmail.com>, MoosaviNejad <erfanmoosavi84@gmail.com>
  License: MIT License
@@ -37,8 +37,11 @@ Dynamic: license-file
  ## 📌 Overview

  **TextTools** is a high-level **NLP toolkit** built on top of modern **LLMs**.
+
  It provides ready-to-use utilities for **translation, question detection, keyword extraction, categorization, NER extractor, and more** — designed to help you integrate AI-powered text processing into your applications with minimal effort.

+ **Thread Safety:** All methods in TheTool are thread-safe, allowing concurrent usage across multiple threads without conflicts.
+
  ---

  ## ✨ Features
@@ -59,11 +62,20 @@ Each tool is designed to work out-of-the-box with structured outputs (JSON / Pyd

  ---

- ## 🔍 `with_analysis` Mode
+ ## ⚙️ with_analysis, logprobs, output_lang, and user_prompt parameters
+
+ TextTools provides several optional flags to customize LLM behavior:
+
+ - **`with_analysis=True`** → Adds a reasoning step before generating the final output. Useful for debugging, improving prompts, or understanding model behavior.
+   ⚠️ Note: This doubles token usage per call because it triggers an additional LLM request.
+
+ - **`logprobs=True`** → Returns token-level probabilities for the generated output. You can also specify `top_logprobs=<N>` to get the top N alternative tokens and their probabilities.
+
+ - **`output_lang="en"`** → Forces the model to respond in a specific language. The model will ignore other instructions about language and respond strictly in the requested language.

- The `with_analysis=True` flag enhances the tool's output by providing a detailed reasoning chain behind its result. This is valuable for debugging, improving prompts, or understanding model behavior.
+ - **`user_prompt="..."`** Allows you to inject a custom instruction or prompt into the model alongside the main template. This gives you fine-grained control over how the model interprets or modifies the input text.

- **Please be aware:** This feature works by making an additional LLM API call for each tool invocation, which will **effectively double your token usage** for that operation.
+ All these flags can be used individually or together to tailor the behavior of any tool in **TextTools**.

  ---

@@ -99,7 +111,7 @@ print(the_tool.detect_question("Is this project open source?")["result"])
  # Output: True

  # Example: Translation
- print(the_tool.translate("سلام، حالت چطوره؟")["result"])
+ print(the_tool.translate("سلام، حالت چطوره؟", target_language="English")["result"])
  # Output: "Hi! How are you?"
  ```

@@ -113,7 +125,7 @@ Use **TextTools** when you need to:
  - 🌍 **Translate** and process multilingual corpora with ease
  - 🧩 **Integrate** LLMs into production pipelines (structured outputs)
  - 📊 **Analyze** large text collections using embeddings and categorization
- - ⚙️ **Automate** common text-processing tasks without reinventing the wheel
+ - 👍 **Automate** common text-processing tasks without reinventing the wheel

  ---

@@ -7,8 +7,10 @@ hamtaa_texttools.egg-info/SOURCES.txt
  hamtaa_texttools.egg-info/dependency_links.txt
  hamtaa_texttools.egg-info/requires.txt
  hamtaa_texttools.egg-info/top_level.txt
- tests/test_tools.py
  texttools/__init__.py
+ texttools/batch/__init__.py
+ texttools/batch/batch_manager.py
+ texttools/batch/batch_runner.py
  texttools/formatters/base_formatter.py
  texttools/formatters/user_merge_formatter/user_merge_formatter.py
  texttools/prompts/README.md
@@ -23,11 +25,9 @@ texttools/prompts/subject_question_generator.yaml
  texttools/prompts/summarizer.yaml
  texttools/prompts/translator.yaml
  texttools/tools/__init__.py
- texttools/tools/operator.py
- texttools/tools/output_models.py
- texttools/tools/prompt_loader.py
+ texttools/tools/async_the_tool.py
  texttools/tools/the_tool.py
- texttools/utils/__init__.py
- texttools/utils/batch_manager/__init__.py
- texttools/utils/batch_manager/batch_manager.py
- texttools/utils/batch_manager/batch_runner.py
+ texttools/tools/internals/async_operator.py
+ texttools/tools/internals/operator.py
+ texttools/tools/internals/output_models.py
+ texttools/tools/internals/prompt_loader.py
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

  [project]
  name = "hamtaa-texttools"
- version = "1.0.2"
+ version = "1.0.3"
  authors = [
      { name = "Tohidi", email = "the.mohammad.tohidi@gmail.com" },
      { name = "Montazer", email = "montazerh82@gmail.com" },
@@ -0,0 +1,9 @@
+ from .batch import BatchJobRunner, SimpleBatchManager
+ from .tools import AsyncTheTool, TheTool
+
+ __all__ = [
+     "TheTool",
+     "AsyncTheTool",
+     "SimpleBatchManager",
+     "BatchJobRunner",
+ ]
@@ -8,7 +8,7 @@ from typing import Any, Callable
  from openai import OpenAI
  from pydantic import BaseModel

- from texttools.utils.batch_manager import SimpleBatchManager
+ from texttools.batch.batch_manager import SimpleBatchManager


  class Output(BaseModel):
@@ -13,24 +13,7 @@ class UserMergeFormatter(BaseFormatter):
          ValueError: If the input messages have invalid structure or roles.
      """

-     def _validate_input(self, messages: list[dict[str, str]]):
-         valid_keys = {"role", "content"}
-         valid_roles = {"user", "assistant"}
-
-         for message in messages:
-             # Validate keys
-             if set(message.keys()) != valid_keys:
-                 raise ValueError(
-                     f"Message dict keys must be exactly {valid_keys}, got {set(message.keys())}"
-                 )
-             # Validate roles
-             role = message["role"]
-             if role != "system" and role not in valid_roles:
-                 raise ValueError(f"Unexpected role: {role}")
-
      def format(self, messages: list[dict[str, str]]) -> list[dict[str, str]]:
-         self._validate_input(messages)
-
          merged: list[dict[str, str]] = []

          for message in messages:
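The hunk above removes input validation but keeps the merge loop. As a standalone illustration of what a consecutive-user-message merge of this shape does, here is a minimal sketch; the `"\n"` join separator and the helper name are assumptions, not the package's actual implementation.

```python
# Minimal sketch of merging consecutive "user" messages into one,
# in the spirit of UserMergeFormatter.format shown in the diff above.
# The "\n" separator is an assumption.
def merge_user_messages(messages: list[dict[str, str]]) -> list[dict[str, str]]:
    merged: list[dict[str, str]] = []
    for message in messages:
        if merged and message["role"] == "user" and merged[-1]["role"] == "user":
            # Fold this user message into the previous one.
            merged[-1] = {
                "role": "user",
                "content": merged[-1]["content"] + "\n" + message["content"],
            }
        else:
            merged.append(dict(message))
    return merged
```

For example, two adjacent user turns collapse into a single turn while assistant turns are left untouched.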
@@ -7,20 +7,20 @@ This folder contains YAML files for all prompts used in the project. Each file r
  - **prompt_file.yaml**: Each YAML file represents a single prompt template.
  - **main_template**: The main instruction template for the model.
  - **analyze_template** (optional): A secondary reasoning template used before generating the final response.
- - **Modes** (optional): Some prompts may have multiple modes (e.g., `default_mode`, `reason_mode`) to allow different behaviors.
+ - **Modes** (optional): Some prompts may have multiple modes (e.g., `default`, `reason`) to allow different behaviors.

  ### Example YAML Structure
  ```yaml
  main_template:
-   default_mode: |
+   default: |
      Your main instructions here with placeholders like {input}.
-   reason_mode: |
+   reason: |
      Optional reasoning instructions here.

  analyze_template:
-   default_mode: |
+   default: |
      Analyze and summarize the input.
-   reason_mode: |
+   reason: |
      Optional detailed analysis template.
  ```

@@ -0,0 +1,31 @@
+ main_template: |
+   {user_prompt}
+   تو یک متخصص علوم دینی هستی
+   من به عنوان کاربر یک متن به تو میدم و از تو میخوام که
+   اون متن رو در یکی از دسته بندی های زیر طبقه بندی کنی
+
+   "باورهای دینی",
+   "اخلاق اسلامی",
+   "احکام و فقه",
+   "تاریخ اسلام و شخصیت ها",
+   "منابع دینی",
+   "دین و جامعه/سیاست",
+   "عرفان و معنویت",
+   "هیچکدام",
+
+   فقط با این فرمت json پاسخ بده:
+   {{
+     "reason": "<دلیل انتخابت رو به صورت خلاصه بگو>",
+     "result": "<یکی از دسته بندی ها>"
+   }}
+
+   متنی که باید طبقه بندی کنی:
+   {input}
+
+ analyze_template: |
+   هدف ما طبقه بندی متن هست
+   متن رو بخون و ایده اصلی و آنالیزی کوتاه از اون رو ارائه بده
+
+   بسیار خلاصه باشه خروجی تو
+   نهایتا 20 کلمه
+   {input}
@@ -1,9 +1,12 @@
  main_template: |
+   {user_prompt}
    Extract the most relevant keywords from the following text. Provide them as a list of strings.
-   {input}
    Respond only in JSON format:
    {{"result": ["keyword1", "keyword2", ...]}}.
+   No addition, No comments, No explanation.
    Respond in the language of the input text.
+   Here is the text:
+   {input}

  analyze_template: |
    Analyze the following text to identify its main topics, concepts, and important terms.
@@ -1,7 +1,7 @@
  main_template: |
+   {user_prompt}
    Identify and extract all named entities (e.g., PER, ORG, LOC, DAT, etc.) from the following text. For each entity, provide its text and a clear type.
    Respond as a JSON array of objects.
-   {input}
    Respond only in JSON format:
    {{
      "result": [
@@ -11,6 +11,9 @@ main_template: |
        }}
      ]
    }}
+   No addition, No comments, No explanation.
+   Here is the text:
+   {input}

  analyze_template: |
    Read the following text and identify any proper nouns, key concepts, or specific mentions that might represent named entities.
@@ -1,7 +1,10 @@
  main_template: |
+   {user_prompt}
    Determine that if the following text contains any question or request of some kind or not.
-   Respond only in JSON format:
-   {{"result": "true/false"}}
+   Respond only in JSON format (Output should be a bool):
+   {{"result": True/False}}
+   No addition, No comments, No explanation.
+   Here is the text:
    {input}

  analyze_template: |
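One detail worth noting about the changed format line above: strict JSON encodes booleans as lowercase `true`/`false`, so a model that literally emits the Python-style `True/False` shown in the new prompt would not parse with a standard JSON parser. A quick check with the standard library:

```python
import json

# Lowercase booleans are valid JSON.
assert json.loads('{"result": true}') == {"result": True}

# Python-literal capitalization is rejected by a strict parser.
try:
    json.loads('{"result": True}')
    strict_ok = True
except json.JSONDecodeError:
    strict_ok = False
assert strict_ok is False
```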
@@ -1,15 +1,16 @@
  main_template: |
+   {user_prompt}
    Given the following answer, generate a single,
    appropriate question that this answer would directly respond to.
    The generated answer should be independently meaningful,
    and not mentioning any verbs like, this, that, he or she on the question.
    The generated question must be in the language of the user's input.
-   Here is the text:
-   {input}
    Respond only in JSON format:
    {{"result": "string"}}
    Respond only with the new generated question, without any additional information.
-
+   Here is the text:
+   {input}
+
  analyze_template: |
    Analyze the following answer to identify its key facts,
    main subject, and what kind of information it provides.
@@ -1,6 +1,7 @@
  main_template:

-   default_mode: |
+   default: |
+     {user_prompt}
      You are a language expert.
      I will give you a list of questions that are semantically similar.
      Your task is to merge them into one unified question that:
@@ -15,7 +16,8 @@ main_template:
      Respond only with the new generated question, without any additional information.
      The generated question must be in the language of the users input.

-   reason_mode: |
+   reason: |
+     {user_prompt}
      You are an AI assistant helping to unify semantically similar questions.
      First, briefly extract the unique intent or content from each input question.
      Then, write one merged question that combines all their content clearly and naturally, without redundancy.
@@ -29,7 +31,7 @@ main_template:

  analyze_template:

-   default_mode: |
+   default: |
      Analyze the following questions to identify their core intent, key concepts,
      and the specific information they are seeking.
      Provide a brief, summarized understanding of the questions' meaning that
@@ -37,7 +39,7 @@ analyze_template:
      Respond in the language of the question.
      Here is the question: {input}

-   reason_mode: |
+   reason: |
      Analyze the following questions to identify their exact wording, phrasing,
      and the literal meaning it conveys.
      Provide a brief, summarized analysis of their linguistic structure and current meaning,
@@ -1,6 +1,7 @@
  main_template:

-   same_meaning_different_wording_mode: |
+   same_meaning_different_wording: |
+     {user_prompt}
      Rewrite the following question using completely different wording and phrasing,
      ensuring its original meaning is perfectly preserved. The rewritten question
      should be distinct from the original but convey the exact same inquiry.
@@ -11,7 +12,8 @@ main_template:
      Respond only with the new generated question, without any additional information.
      The generated question must be in the language of the users input.

-   different_meaning_similar_wording_mode: |
+   different_meaning_similar_wording: |
+     {user_prompt}
      Rewrite the following question using *very similar wording and phrasing*
      to the original, but ensure the rewritten question has a *completely different meaning*.
      Focus on subtle changes that drastically alter the intent or subject of the question.
@@ -24,7 +26,7 @@ main_template:

  analyze_template:

-   same_meaning_different_wording_mode: |
+   same_meaning_different_wording: |
      Analyze the following question to identify its core intent, key concepts,
      and the specific information it is seeking.
      Provide a brief, summarized understanding of the question's meaning that
@@ -33,7 +35,7 @@ analyze_template:
      Here is the question:
      {input}

-   different_meaning_similar_wording_mode: |
+   different_meaning_similar_wording: |
      Analyze the following question to identify its exact wording, phrasing,
      and the literal meaning it conveys.
      Provide a brief, summarized analysis of its linguistic structure and current meaning,
@@ -1,19 +1,19 @@
  main_template: |
+   {user_prompt}
    Given the following subject, generate {number_of_questions} appropriate questions that this subject would directly respond to.
    The generated subject should be independently meaningful,
    and it must not mention any verbs like, this, that, he or she and etc. in the question.
-   The generated question must be in {language} language.
    Here is the text:
    {input}
    Respond only with the new generated question, without any additional information.
-   The generated question must be in {language} language.
    Generate {number_of_questions} number of questions in the questions list.
    You must return ONLY a single JSON object that matches the schema.
    There is a `reason` key, fill that up with a really summerized version
    of your thoughts.
    The `reason` must be less than 20 words.
+   Don't forget to fill the reason.
    Respond only in JSON format:
-   {{"result": [string, string, ...]}}
+   {{"result": [str, str, ...], "reason": str}}

  analyze_template: |
    Our goal is to generate questions, from the given subject that I've provided.
@@ -24,4 +24,3 @@ analyze_template: |
    What point of views can we see and generate questoins from it? (Questions that real users might have.)
    Here is the subject:
    {input}
-   Respond only in {language} language.
@@ -1,4 +1,5 @@
  main_template: |
+   {user_prompt}
    Provide a concise summary of the following text:
    {input}
    Respond only in JSON format:
@@ -1,4 +1,5 @@
  main_template: |
+   {user_prompt}
    You are a {target_language} translator.
    Output only the translated text. No comments, no explanations, no markdown.
    Translate the following text to {target_language}:
@@ -0,0 +1,4 @@
+ from .async_the_tool import AsyncTheTool
+ from .the_tool import TheTool
+
+ __all__ = ["TheTool", "AsyncTheTool"]