hamtaa-texttools 1.1.1__py3-none-any.whl → 1.2.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (47)
  1. hamtaa_texttools-1.2.0.dist-info/METADATA +212 -0
  2. hamtaa_texttools-1.2.0.dist-info/RECORD +34 -0
  3. texttools/__init__.py +6 -8
  4. texttools/batch/__init__.py +0 -4
  5. texttools/batch/config.py +40 -0
  6. texttools/batch/{batch_manager.py → manager.py} +41 -42
  7. texttools/batch/runner.py +228 -0
  8. texttools/core/__init__.py +0 -0
  9. texttools/core/engine.py +254 -0
  10. texttools/core/exceptions.py +22 -0
  11. texttools/core/internal_models.py +58 -0
  12. texttools/core/operators/async_operator.py +194 -0
  13. texttools/core/operators/sync_operator.py +192 -0
  14. texttools/models.py +88 -0
  15. texttools/prompts/categorize.yaml +36 -0
  16. texttools/prompts/check_fact.yaml +24 -0
  17. texttools/prompts/extract_entities.yaml +7 -3
  18. texttools/prompts/extract_keywords.yaml +80 -18
  19. texttools/prompts/is_question.yaml +6 -2
  20. texttools/prompts/merge_questions.yaml +12 -5
  21. texttools/prompts/propositionize.yaml +24 -0
  22. texttools/prompts/rewrite.yaml +9 -10
  23. texttools/prompts/run_custom.yaml +2 -2
  24. texttools/prompts/subject_to_question.yaml +7 -3
  25. texttools/prompts/summarize.yaml +6 -2
  26. texttools/prompts/text_to_question.yaml +12 -6
  27. texttools/prompts/translate.yaml +7 -2
  28. texttools/py.typed +0 -0
  29. texttools/tools/__init__.py +0 -4
  30. texttools/tools/async_tools.py +1093 -0
  31. texttools/tools/sync_tools.py +1092 -0
  32. hamtaa_texttools-1.1.1.dist-info/METADATA +0 -183
  33. hamtaa_texttools-1.1.1.dist-info/RECORD +0 -30
  34. texttools/batch/batch_runner.py +0 -263
  35. texttools/prompts/README.md +0 -35
  36. texttools/prompts/categorizer.yaml +0 -28
  37. texttools/tools/async_the_tool.py +0 -414
  38. texttools/tools/internals/async_operator.py +0 -179
  39. texttools/tools/internals/base_operator.py +0 -91
  40. texttools/tools/internals/formatters.py +0 -24
  41. texttools/tools/internals/operator.py +0 -179
  42. texttools/tools/internals/output_models.py +0 -59
  43. texttools/tools/internals/prompt_loader.py +0 -57
  44. texttools/tools/the_tool.py +0 -412
  45. {hamtaa_texttools-1.1.1.dist-info → hamtaa_texttools-1.2.0.dist-info}/WHEEL +0 -0
  46. {hamtaa_texttools-1.1.1.dist-info → hamtaa_texttools-1.2.0.dist-info}/licenses/LICENSE +0 -0
  47. {hamtaa_texttools-1.1.1.dist-info → hamtaa_texttools-1.2.0.dist-info}/top_level.txt +0 -0
@@ -1,183 +0,0 @@
- Metadata-Version: 2.4
- Name: hamtaa-texttools
- Version: 1.1.1
- Summary: A high-level NLP toolkit built on top of modern LLMs.
- Author-email: Tohidi <the.mohammad.tohidi@gmail.com>, Montazer <montazerh82@gmail.com>, Givechi <mohamad.m.givechi@gmail.com>, MoosaviNejad <erfanmoosavi84@gmail.com>
- License: MIT License
-
- Copyright (c) 2025 Hamtaa
-
- Permission is hereby granted, free of charge, to any person obtaining a copy
- of this software and associated documentation files (the "Software"), to deal
- in the Software without restriction, including without limitation the rights
- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- copies of the Software, and to permit persons to whom the Software is
- furnished to do so, subject to the following conditions:
-
- The above copyright notice and this permission notice shall be included in all
- copies or substantial portions of the Software.
-
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- SOFTWARE.
- Keywords: nlp,llm,text-processing,openai
- Requires-Python: >=3.8
- Description-Content-Type: text/markdown
- License-File: LICENSE
- Requires-Dist: openai==1.97.1
- Requires-Dist: pyyaml>=6.0
- Dynamic: license-file
-
- # TextTools
-
- ## 📌 Overview
-
- **TextTools** is a high-level **NLP toolkit** built on top of modern **LLMs**.
-
- It provides both **sync (`TheTool`)** and **async (`AsyncTheTool`)** APIs for maximum flexibility.
-
- It offers ready-to-use utilities for **translation, question detection, keyword extraction, categorization, named-entity recognition (NER), and more** — designed to help you integrate AI-powered text processing into your applications with minimal effort.
-
- ---
-
- ## ✨ Features
-
- TextTools provides a rich collection of high-level NLP utilities built on top of LLMs.
- Each tool is designed to work out of the box with structured outputs (JSON / Pydantic).
-
- - **`categorize()`** - Classifies text into Islamic studies categories
- - **`is_question()`** - Binary detection of whether the input is a question
- - **`extract_keywords()`** - Extracts keywords from text
- - **`extract_entities()`** - Named Entity Recognition (NER)
- - **`summarize()`** - Text summarization
- - **`text_to_question()`** - Generates questions from text
- - **`merge_questions()`** - Merges multiple questions with different modes
- - **`rewrite()`** - Rewrites text with different wording/meaning
- - **`subject_to_question()`** - Generates questions about a specific subject
- - **`translate()`** - Translates text between languages
- - **`run_custom()`** - Lets users define a custom tool with an arbitrary BaseModel
-
- ---
-
- ## ⚙️ The `with_analysis`, `logprobs`, `output_lang`, `user_prompt`, and `temperature` parameters
-
- TextTools provides several optional flags to customize LLM behavior:
-
- - **`with_analysis=True`** → Adds a reasoning step before generating the final output. Useful for debugging, improving prompts, or understanding model behavior.
-   Note: this doubles token usage per call because it triggers an additional LLM request.
-
- - **`logprobs=True`** → Returns token-level probabilities for the generated output. You can also specify `top_logprobs=<N>` to get the top N alternative tokens and their probabilities.
-
- - **`output_lang="en"`** → Forces the model to respond in a specific language. The model ignores any other instructions about language and responds strictly in the requested one.
-
- - **`user_prompt="..."`** → Injects a custom instruction alongside the main template, giving you fine-grained control over how the model interprets or modifies the input text.
-
- - **`temperature=0.0`** → Controls how creative the model's responses are. Takes a float from `0.0` to `1.0`.
-
- All these parameters can be used individually or together to tailor the behavior of any tool in **TextTools**.
-
- **Note:** Some tools may not support all of the parameters above.
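Since the flags above are plain keyword arguments, they can be collected once and reused across tools. A minimal sketch, assuming the flag names from this section (the `common_flags` dict and the commented-out calls are illustrative, not part of the package):

```python
# Illustrative only: flag names come from the section above; the dict
# itself and the commented-out tool calls are not part of the package.
common_flags = {
    "with_analysis": True,  # extra reasoning pass (doubles token usage)
    "logprobs": True,       # token-level probabilities for the output
    "top_logprobs": 2,      # top-2 alternative tokens per position
    "output_lang": "en",    # force responses into English
    "temperature": 0.0,     # least creative, most deterministic
}

# the_tool.is_question("Is this open source?", **common_flags)
# the_tool.translate("...", target_language="English", **common_flags)
```

Tools that do not support one of the flags would simply reject the extra keyword, so trim the dict per tool as needed.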
-
- ---
-
- ## 🚀 Installation
-
- Install the latest release from PyPI:
-
- ```bash
- pip install -U hamtaa-texttools
- ```
-
- ---
-
- ## Sync vs Async
- | Tool | Style | Use case |
- |--------------|---------|---------------------------------------------|
- | `TheTool` | Sync | Simple scripts, sequential workflows |
- | `AsyncTheTool` | Async | High-throughput apps, APIs, concurrent tasks |
-
- ---
-
- ## ⚡ Quick Start (Sync)
-
- ```python
- from openai import OpenAI
- from texttools import TheTool
-
- # Create your OpenAI client
- client = OpenAI(base_url="your_url", api_key="your_api_key")
-
- # Specify the model
- model = "gpt-4o-mini"
-
- # Create an instance of TheTool
- the_tool = TheTool(client=client, model=model)
-
- # Example: question detection
- detection = the_tool.is_question("Is this project open source?", logprobs=True, top_logprobs=2)
- print(detection.result)
- print(detection.logprobs)
- # Output: True, followed by the token logprobs
-
- # Example: translation
- translation = the_tool.translate("سلام، حالت چطوره؟", target_language="English", with_analysis=True)
- print(translation.result)
- print(translation.analysis)
- # Output: "Hi! How are you?", followed by the analysis
- ```
-
- ---
-
- ## ⚡ Quick Start (Async)
-
- ```python
- import asyncio
- from openai import AsyncOpenAI
- from texttools import AsyncTheTool
-
- async def main():
-     # Create your AsyncOpenAI client
-     async_client = AsyncOpenAI(base_url="your_url", api_key="your_api_key")
-
-     # Specify the model
-     model = "gpt-4o-mini"
-
-     # Create an instance of AsyncTheTool
-     the_tool = AsyncTheTool(client=async_client, model=model)
-
-     # Example: async translation
-     translation = await the_tool.translate("سلام، حالت چطوره؟", target_language="English")
-     print(translation.result)
-     # Output: "Hi! How are you?"
-
- asyncio.run(main())
- ```
-
- ---
-
- ## 📚 Use Cases
-
- Use **TextTools** when you need to:
-
- - 🔍 **Classify** large datasets quickly without model training
- - 🌍 **Translate** and process multilingual corpora with ease
- - 🧩 **Integrate** LLMs into production pipelines (structured outputs)
- - 📊 **Analyze** large text collections using embeddings and categorization
- - 👍 **Automate** common text-processing tasks without reinventing the wheel
-
- ---
-
- ## 🤝 Contributing
-
- Contributions are welcome!
- Feel free to **open issues, suggest new features, or submit pull requests**.
-
- ---
-
- ## License
-
- This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
@@ -1,30 +0,0 @@
- hamtaa_texttools-1.1.1.dist-info/licenses/LICENSE,sha256=Hb2YOBKy2MJQLnyLrX37B4ZVuac8eaIcE71SvVIMOLg,1082
- texttools/__init__.py,sha256=v3tQCH_Cjj47fCpuhK6sKSVAqEjNkc-cZbY4OJa4IZw,202
- texttools/batch/__init__.py,sha256=q50JsQsmQGp_8RW0KNasYeYWVV0R4FUNZ-ujXwEJemY,143
- texttools/batch/batch_manager.py,sha256=leVIFkR-3HpDkQi_MK3TgFNnHYsCN-wbS4mTWoPmO3c,8828
- texttools/batch/batch_runner.py,sha256=cgiCYLIBQQC0dBWM8_lVP9c5QLJoAmS2ijMtp0p3U2o,10313
- texttools/prompts/README.md,sha256=rclMaCV1N8gT1KcpZu0-ka0dKGNg2f1CEcRMdQkgQOc,1379
- texttools/prompts/categorizer.yaml,sha256=GMqIIzQFhgnlpkgU1qi3FAD3mD4A2jiWD5TilQ2XnnE,1204
- texttools/prompts/extract_entities.yaml,sha256=KiKjeDpHaeh3JVtZ6q1pa3k4DYucUIU9WnEcRTCA-SE,651
- texttools/prompts/extract_keywords.yaml,sha256=0O7ypL_OsEOxtvlQ2CZjnsv9637DJwAKprZsf9Vo2_s,769
- texttools/prompts/is_question.yaml,sha256=d0-vKRbXWkxvO64ikvxRjEmpAXGpCYIPGhgexvPPjws,471
- texttools/prompts/merge_questions.yaml,sha256=0J85GvTirZB4ELwH3sk8ub_WcqqpYf6PrMKr3djlZeo,1792
- texttools/prompts/rewrite.yaml,sha256=LO7He_IA3MZKz8a-LxH9DHJpOjpYwaYN1pbjp1Y0tFo,5392
- texttools/prompts/run_custom.yaml,sha256=38OkCoVITbuuS9c08UZSP1jZW4WjSmRIi8fR0RAiPu4,108
- texttools/prompts/subject_to_question.yaml,sha256=C7x7rNNm6U_ZG9HOn6zuzYOtvJUZ2skuWbL1-aYdd3E,1147
- texttools/prompts/summarize.yaml,sha256=o6rxGPfWtZd61Duvm8NVvCJqfq73b-wAuMSKR6UYUqY,459
- texttools/prompts/text_to_question.yaml,sha256=UheKYpDn6iyKI8NxunHZtFpNyfCLZZe5cvkuXpurUJY,783
- texttools/prompts/translate.yaml,sha256=mGT2uBCei6uucWqVbs4silk-UV060v3G0jnt0P6sr50,634
- texttools/tools/__init__.py,sha256=hG1I28Q7BJ1Dbs95x6QMKXdsAlC5Eh_tqC-EbAibwiU,114
- texttools/tools/async_the_tool.py,sha256=h6-Zkedet-eRUrkV5fANNoh4WmoqhXU5wJEHpd8nyNU,14377
- texttools/tools/the_tool.py,sha256=lKy3_CKcWo2cBLQ7dDgvh7-oos7UOx1NYM26tcMhwaI,14143
- texttools/tools/internals/async_operator.py,sha256=Kj-DLBKcKbZPCJYn4lVo4Iiei11M04pwgWpIl8L69aM,6169
- texttools/tools/internals/base_operator.py,sha256=OWJe8ybA6qmmoc7ysYeB8ccHPneDlEtmFGH1jLWQCeY,3135
- texttools/tools/internals/formatters.py,sha256=tACNLP6PeoqaRpNudVxBaHA25zyWqWYPZQuYysIu88g,941
- texttools/tools/internals/operator.py,sha256=g1E1WkgnKRDgOs6fEFu0-gPCw1Bniwb4VI9Er3Op_gk,6063
- texttools/tools/internals/output_models.py,sha256=gbVbzBWeyHUVNsCBuawdgz9ZEzsC7wfygGgZJsAaexY,1662
- texttools/tools/internals/prompt_loader.py,sha256=rbitJD3e8vAdcooP1Yx6KnSI83g28ho-FegfZ1cJ4j4,1979
- hamtaa_texttools-1.1.1.dist-info/METADATA,sha256=Cc1Rq94QyXgJ8SNhsBgyUfhho3oywzGpx6y16s50b-Q,7144
- hamtaa_texttools-1.1.1.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- hamtaa_texttools-1.1.1.dist-info/top_level.txt,sha256=5Mh0jIxxZ5rOXHGJ6Mp-JPKviywwN0MYuH0xk5bEWqE,10
- hamtaa_texttools-1.1.1.dist-info/RECORD,,
@@ -1,263 +0,0 @@
- import json
- import logging
- import os
- import time
- from dataclasses import dataclass
- from pathlib import Path
- from typing import Any, Callable
-
- from dotenv import load_dotenv
- from openai import OpenAI
- from pydantic import BaseModel
-
- from texttools.batch import SimpleBatchManager
-
- # Configure logger
- logger = logging.getLogger("batch_runner")
- logger.setLevel(logging.INFO)
-
-
- class OutputModel(BaseModel):
-     desired_output: str
-
-
- def export_data(data):
-     """
-     Produces a structure of the following form from the initial data structure:
-     [{"id": str, "content": str}, ...]
-     """
-     return data
-
-
- def import_data(data):
-     """
-     Merges the model output back into the original structure.
-     """
-     return data
-
-
- @dataclass
- class BatchConfig:
-     """
-     Configuration for the batch job runner.
-     """
-
-     system_prompt: str = ""
-     job_name: str = ""
-     input_data_path: str = ""
-     output_data_filename: str = ""
-     model: str = "gpt-4.1-mini"
-     MAX_BATCH_SIZE: int = 100
-     MAX_TOTAL_TOKENS: int = 2000000
-     CHARS_PER_TOKEN: float = 2.7
-     PROMPT_TOKEN_MULTIPLIER: int = 1000
-     BASE_OUTPUT_DIR: str = "Data/batch_entity_result"
-     import_function: Callable = import_data
-     export_function: Callable = export_data
-     poll_interval_seconds: int = 30
-     max_retries: int = 3
-
-
- class BatchJobRunner:
-     """
-     Runs batch jobs using a batch manager and a configuration.
-     """
-
-     def __init__(
-         self, config: BatchConfig = BatchConfig(), output_model: type = OutputModel
-     ):
-         self.config = config
-         self.system_prompt = config.system_prompt
-         self.job_name = config.job_name
-         self.input_data_path = config.input_data_path
-         self.output_data_filename = config.output_data_filename
-         self.model = config.model
-         self.output_model = output_model
-         self.manager = self._init_manager()
-         self.data = self._load_data()
-         self.parts: list[list[dict[str, Any]]] = []
-         self._partition_data()
-         Path(self.config.BASE_OUTPUT_DIR).mkdir(parents=True, exist_ok=True)
-         # Map part index to job name
-         self.part_idx_to_job_name: dict[int, str] = {}
-         # Track retry attempts per part
-         self.part_attempts: dict[int, int] = {}
-
-     def _init_manager(self) -> SimpleBatchManager:
-         load_dotenv()
-         api_key = os.getenv("OPENAI_API_KEY")
-         client = OpenAI(api_key=api_key)
-         return SimpleBatchManager(
-             client=client,
-             model=self.model,
-             prompt_template=self.system_prompt,
-             output_model=self.output_model,
-         )
-
-     def _load_data(self):
-         with open(self.input_data_path, "r", encoding="utf-8") as f:
-             data = json.load(f)
-         data = self.config.export_function(data)
-
-         # Ensure data is a list of dicts with 'id' and 'content' as strings
-         if not isinstance(data, list):
-             raise ValueError(
-                 'Exported data must be a list of this form: [{"id": str, "content": str}, ...]'
-             )
-         for item in data:
-             if not (isinstance(item, dict) and "id" in item and "content" in item):
-                 raise ValueError(
-                     "Each item must be a dict with 'id' and 'content' keys."
-                 )
-             if not (isinstance(item["id"], str) and isinstance(item["content"], str)):
-                 raise ValueError("'id' and 'content' must be strings.")
-         return data
-
-     def _partition_data(self):
-         total_length = sum(len(item["content"]) for item in self.data)
-         prompt_length = len(self.system_prompt)
-         total = total_length + (prompt_length * len(self.data))
-         calculation = total / self.config.CHARS_PER_TOKEN
-         logger.info(
-             f"Total chars: {total_length}, Prompt chars: {prompt_length}, Total: {total}, Tokens: {calculation}"
-         )
-         if calculation < self.config.MAX_TOTAL_TOKENS:
-             self.parts = [self.data]
-         else:
-             # Partition into chunks of MAX_BATCH_SIZE
-             self.parts = [
-                 self.data[i : i + self.config.MAX_BATCH_SIZE]
-                 for i in range(0, len(self.data), self.config.MAX_BATCH_SIZE)
-             ]
-         logger.info(f"Data split into {len(self.parts)} part(s)")
-
-     def _submit_all_jobs(self) -> None:
-         for idx, part in enumerate(self.parts):
-             if self._result_exists(idx):
-                 logger.info(f"Skipping part {idx + 1}: result already exists.")
-                 continue
-             part_job_name = (
-                 f"{self.job_name}_part_{idx + 1}"
-                 if len(self.parts) > 1
-                 else self.job_name
-             )
-             # If a job with this name already exists, register it and skip submitting
-             existing_job = self.manager._load_state(part_job_name)
-             if existing_job:
-                 logger.info(
-                     f"Skipping part {idx + 1}: job already exists ({part_job_name})."
-                 )
-                 self.part_idx_to_job_name[idx] = part_job_name
-                 self.part_attempts.setdefault(idx, 0)
-                 continue
-
-             payload = part
-             logger.info(
-                 f"Submitting job for part {idx + 1}/{len(self.parts)}: {part_job_name}"
-             )
-             self.manager.start(payload, job_name=part_job_name)
-             self.part_idx_to_job_name[idx] = part_job_name
-             self.part_attempts.setdefault(idx, 0)
-             # Give the input file time to upload before starting the next part
-             logger.info("Uploading...")
-             time.sleep(30)
-
-     def run(self):
-         # Submit all jobs up front for concurrent execution
-         self._submit_all_jobs()
-         pending_parts: set[int] = set(self.part_idx_to_job_name.keys())
-         logger.info(f"Pending parts: {sorted(pending_parts)}")
-         # Polling loop
-         while pending_parts:
-             finished_this_round: list[int] = []
-             for part_idx in list(pending_parts):
-                 job_name = self.part_idx_to_job_name[part_idx]
-                 status = self.manager.check_status(job_name=job_name)
-                 logger.info(f"Status for {job_name}: {status}")
-                 if status == "completed":
-                     logger.info(
-                         f"Job completed. Fetching results for part {part_idx + 1}..."
-                     )
-                     output_data, log = self.manager.fetch_results(
-                         job_name=job_name, remove_cache=False
-                     )
-                     output_data = self.config.import_function(output_data)
-                     self._save_results(output_data, log, part_idx)
-                     logger.info(f"Fetched and saved results for part {part_idx + 1}.")
-                     finished_this_round.append(part_idx)
-                 elif status == "failed":
-                     attempt = self.part_attempts.get(part_idx, 0) + 1
-                     self.part_attempts[part_idx] = attempt
-                     if attempt <= self.config.max_retries:
-                         logger.info(
-                             f"Job {job_name} failed (attempt {attempt}). Retrying after a short backoff..."
-                         )
-                         self.manager._clear_state(job_name)
-                         time.sleep(10)
-                         payload = self.parts[part_idx]
-                         new_job_name = (
-                             f"{self.job_name}_part_{part_idx + 1}_retry_{attempt}"
-                         )
-                         self.manager.start(payload, job_name=new_job_name)
-                         self.part_idx_to_job_name[part_idx] = new_job_name
-                     else:
-                         logger.info(
-                             f"Job {job_name} failed after {attempt - 1} retries. Marking as failed."
-                         )
-                         finished_this_round.append(part_idx)
-                 else:
-                     # Still running or queued
-                     continue
-             # Remove finished parts
-             for part_idx in finished_this_round:
-                 pending_parts.discard(part_idx)
-             if pending_parts:
-                 logger.info(
-                     f"Waiting {self.config.poll_interval_seconds}s before the next status check for parts: {sorted(pending_parts)}"
-                 )
-                 time.sleep(self.config.poll_interval_seconds)
-
-     def _save_results(
-         self,
-         output_data: list[dict[str, Any]] | dict[str, Any],
-         log: list[Any],
-         part_idx: int,
-     ):
-         part_suffix = f"_part_{part_idx + 1}" if len(self.parts) > 1 else ""
-         result_path = (
-             Path(self.config.BASE_OUTPUT_DIR)
-             / f"{Path(self.output_data_filename).stem}{part_suffix}.json"
-         )
-         if not output_data:
-             logger.info("No output data to save. Skipping this part.")
-             return
-         with open(result_path, "w", encoding="utf-8") as f:
-             json.dump(output_data, f, ensure_ascii=False, indent=4)
-         if log:
-             log_path = (
-                 Path(self.config.BASE_OUTPUT_DIR)
-                 / f"{Path(self.output_data_filename).stem}{part_suffix}_log.json"
-             )
-             with open(log_path, "w", encoding="utf-8") as f:
-                 json.dump(log, f, ensure_ascii=False, indent=4)
-
-     def _result_exists(self, part_idx: int) -> bool:
-         part_suffix = f"_part_{part_idx + 1}" if len(self.parts) > 1 else ""
-         result_path = (
-             Path(self.config.BASE_OUTPUT_DIR)
-             / f"{Path(self.output_data_filename).stem}{part_suffix}.json"
-         )
-         return result_path.exists()
-
-
- if __name__ == "__main__":
-     logger.info("=== Batch Job Runner ===")
-     config = BatchConfig(
-         system_prompt="",
-         job_name="job_name",
-         input_data_path="Data.json",
-         output_data_filename="output",
-     )
-     runner = BatchJobRunner(config)
-     runner.run()
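The `export_function` / `import_function` hooks are the pieces most callers customize. A minimal sketch, assuming a hypothetical nested source file (the `docs`, `doc_id`, and `body` names are invented for illustration; only the `[{"id": str, "content": str}]` output shape is prescribed by the runner's validation in `_load_data`):

```python
# Hypothetical export/import pair for BatchConfig. Only the output shape
# of export_data is required by the runner; everything else is invented.
def export_data(data):
    # Flatten a nested structure into the validated [{"id", "content"}] list.
    return [
        {"id": str(doc["doc_id"]), "content": doc["body"]}
        for doc in data["docs"]
    ]

def import_data(results):
    # Aggregate model outputs back by id; the exact shape is up to the caller.
    return {item["id"]: item for item in results}

source = {"docs": [{"doc_id": 1, "body": "first"}, {"doc_id": 2, "body": "second"}]}
exported = export_data(source)
# exported[0] == {"id": "1", "content": "first"}
```

Passing such a pair via `BatchConfig(export_function=..., import_function=...)` keeps the runner itself agnostic about the shape of the caller's data.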
@@ -1,35 +0,0 @@
- # Prompts
-
- ## Overview
- This folder contains YAML files for all prompts used in the project. Each file represents a separate prompt template, which can be loaded by tools or scripts that require structured prompts for AI models.
-
- ---
-
- ## Structure
- - **prompt_file.yaml**: Each YAML file represents a single prompt template.
- - **main_template**: The main instruction template for the model.
- - **analyze_template** (optional): A secondary reasoning template used before generating the final response.
- - **Modes** (optional): Some prompts may have multiple modes (e.g., `default`, `reason`) to allow different behaviors.
-
- ### Example YAML Structure
- ```yaml
- main_template:
-   default: |
-     Your main instructions here with placeholders like {input}.
-   reason: |
-     Optional reasoning instructions here.
-
- analyze_template:
-   default: |
-     Analyze and summarize the input.
-   reason: |
-     Optional detailed analysis template.
- ```
-
- ---
-
- ## Guidelines
- 1. **Naming**: Use descriptive names for each YAML file, corresponding to the tool or task it serves.
- 2. **Placeholders**: Use `{input}` or other relevant placeholders to dynamically inject data.
- 3. **Modes**: If using modes, ensure both `main_template` and `analyze_template` contain the corresponding keys.
- 4. **Consistency**: Keep formatting consistent across files for easier parsing by scripts.
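Once parsed, the layout above maps directly onto nested dicts, so template selection and placeholder injection are one lookup and one `format` call. A minimal sketch (illustrative only; the package ships its own `prompt_loader.py` for the real loading). The dict literal stands in for what `yaml.safe_load` would return for the example file:

```python
# Illustrative only: this dict mirrors the parsed example YAML above;
# the real package loads files via its own prompt_loader.py.
prompts = {
    "main_template": {
        "default": "Your main instructions here with placeholders like {input}.",
        "reason": "Optional reasoning instructions here.",
    },
    "analyze_template": {
        "default": "Analyze and summarize the input.",
        "reason": "Optional detailed analysis template.",
    },
}

def render(section: str, mode: str, **values) -> str:
    # Select the template for the requested mode, then inject the data.
    return prompts[section][mode].format(**values)

rendered = render("main_template", "default", input="Some user text")
# -> "Your main instructions here with placeholders like Some user text."
```

Guideline 3 is what makes `render` safe to call with any mode: both `main_template` and `analyze_template` carry the same mode keys.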
@@ -1,28 +0,0 @@
- main_template: |
-   You are an expert in religious studies.
-   I will give you a text, and you must
-   classify it into one of the following categories.
-   Categories:
-   "Religious beliefs",
-   "Islamic ethics",
-   "Rulings and jurisprudence (fiqh)",
-   "Islamic history and figures",
-   "Religious sources",
-   "Religion and society/politics",
-   "Mysticism and spirituality",
-   "None of the above",
-   Answer only in this JSON format:
-   {{
-     "reason": "<briefly state the reason for your choice>",
-     "result": "<one of the categories>"
-   }}
-   The text to classify:
-   {input}
-
- analyze_template: |
-   We want to classify the text that is provided.
-   To improve the classification, we need an analysis of the text.
-   Analyze the provided text and write its main idea and a short analysis of it.
-   The analysis must be very brief,
-   at most 20 words.
-   {input}