llm_batch_helper-0.1.5-py3-none-any.whl → llm_batch_helper-0.1.6-py3-none-any.whl

This diff compares the contents of two publicly released versions of the package, as published to a supported registry. It is provided for informational purposes only and reflects the changes between the versions exactly as they appear in that registry.
--- llm_batch_helper/__init__.py
+++ llm_batch_helper/__init__.py
@@ -3,7 +3,7 @@ from .config import LLMConfig
 from .input_handlers import get_prompts, read_prompt_files, read_prompt_list
 from .providers import process_prompts_batch
 
-__version__ = "0.1.5"
+__version__ = "0.1.6"
 
 __all__ = [
     "LLMCache",
--- llm_batch_helper/providers.py
+++ llm_batch_helper/providers.py
@@ -4,7 +4,6 @@ from typing import Any, Dict, List, Optional, Tuple, Union
 
 import httpx
 import openai
-from dotenv import load_dotenv
 from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential
 from tqdm.asyncio import tqdm_asyncio
 
@@ -12,8 +11,6 @@ from .cache import LLMCache
 from .config import LLMConfig
 from .input_handlers import get_prompts
 
-load_dotenv()
-
 
 @retry(
     stop=stop_after_attempt(5),
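Taken together, these two hunks mean that 0.1.6 no longer imports `python-dotenv` or calls `load_dotenv()` when `providers.py` is imported (the matching `Requires-Dist` removal appears in the METADATA diff below). Code that relied on the package silently loading a `.env` file now has to do that itself: install `python-dotenv` in your own environment and load it before use. A minimal caller-side sketch of that pattern, mirroring what the updated README recommends:

```python
# As of llm_batch_helper 0.1.6, the package no longer calls load_dotenv() for you,
# and python-dotenv is no longer installed as one of its dependencies.
from dotenv import load_dotenv

load_dotenv()  # populate OPENAI_API_KEY / TOGETHER_API_KEY from a local .env file

# Import after the keys are in the environment, as the updated README suggests.
from llm_batch_helper import LLMConfig, process_prompts_batch
```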
--- llm_batch_helper-0.1.5.dist-info/METADATA
+++ llm_batch_helper-0.1.6.dist-info/METADATA
@@ -1,6 +1,6 @@
 Metadata-Version: 2.3
 Name: llm_batch_helper
-Version: 0.1.5
+Version: 0.1.6
 Summary: A Python package that enables batch submission of prompts to LLM APIs, with built-in async capabilities and response caching.
 License: MIT
 Keywords: llm,openai,together,batch,async,ai,nlp,api
@@ -19,7 +19,6 @@ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
 Classifier: Topic :: Software Development :: Libraries :: Python Modules
 Requires-Dist: httpx (>=0.24.0,<2.0.0)
 Requires-Dist: openai (>=1.0.0,<2.0.0)
-Requires-Dist: python-dotenv (>=1.0.0,<2.0.0)
 Requires-Dist: tenacity (>=8.0.0,<9.0.0)
 Requires-Dist: tqdm (>=4.65.0,<5.0.0)
 Project-URL: Homepage, https://github.com/TianyiPeng/LLM_batch_helper
@@ -28,7 +27,29 @@ Description-Content-Type: text/markdown
 
 # LLM Batch Helper
 
-A Python package that enables batch submission of prompts to LLM APIs, with built-in async capabilities and response caching.
+[![PyPI version](https://badge.fury.io/py/llm_batch_helper.svg)](https://badge.fury.io/py/llm_batch_helper)
+[![Downloads](https://pepy.tech/badge/llm_batch_helper)](https://pepy.tech/project/llm_batch_helper)
+[![Downloads/Month](https://pepy.tech/badge/llm_batch_helper/month)](https://pepy.tech/project/llm_batch_helper)
+[![Documentation Status](https://readthedocs.org/projects/llm-batch-helper/badge/?version=latest)](https://llm-batch-helper.readthedocs.io/en/latest/?badge=latest)
+[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+
+A Python package that enables batch submission of prompts to LLM APIs, with built-in async capabilities, response caching, prompt verification, and more. This package is designed to streamline applications like LLM simulation, LLM-as-a-judge, and other batch processing scenarios.
+
+📖 **[Complete Documentation](https://llm-batch-helper.readthedocs.io/)** | 🚀 **[Quick Start Guide](https://llm-batch-helper.readthedocs.io/en/latest/quickstart.html)**
+
+## Why we designed this package
+
+Calling LLM APIs has become increasingly common, but several pain points exist in practice:
+
+1. **Efficient Batch Processing**: How do you run LLM calls in batches efficiently? Our async implementation is 3X-100X faster than multi-thread/multi-process approaches.
+
+2. **API Reliability**: LLM APIs can be unstable, so we need robust retry mechanisms when calls get interrupted.
+
+3. **Long-Running Simulations**: During long-running LLM simulations, computers can crash and APIs can fail. Can we cache LLM API calls to avoid repeating completed work?
+
+4. **Output Validation**: LLM outputs often have format requirements. If the output isn't right, we need to retry with validation.
+
+This package is designed to solve these exact pain points with async processing, intelligent caching, and comprehensive error handling. If there are some additional features you need, please post an issue.
 
 ## Features
 
@@ -67,6 +88,7 @@ poetry shell
 
 ### 1. Set up environment variables
 
+**Option A: Environment Variables**
 ```bash
 # For OpenAI
 export OPENAI_API_KEY="your-openai-api-key"
@@ -75,6 +97,22 @@ export OPENAI_API_KEY="your-openai-api-key"
 export TOGETHER_API_KEY="your-together-api-key"
 ```
 
+**Option B: .env File (Recommended for Development)**
+```python
+# In your script, before importing llm_batch_helper
+from dotenv import load_dotenv
+load_dotenv() # Load from .env file
+
+# Then use the package normally
+from llm_batch_helper import LLMConfig, process_prompts_batch
+```
+
+Create a `.env` file in your project:
+```
+OPENAI_API_KEY=your-openai-api-key
+TOGETHER_API_KEY=your-together-api-key
+```
+
 ### 2. Interactive Tutorial (Recommended)
 
 Check out the comprehensive Jupyter notebook [tutorial](https://github.com/TianyiPeng/LLM_batch_helper/blob/main/tutorials/llm_batch_helper_tutorial.ipynb).
@@ -85,8 +123,12 @@ The tutorial covers all features with interactive examples!
 
 ```python
 import asyncio
+from dotenv import load_dotenv # Optional: for .env file support
 from llm_batch_helper import LLMConfig, process_prompts_batch
 
+# Optional: Load environment variables from .env file
+load_dotenv()
+
 async def main():
     # Create configuration
     config = LLMConfig(
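The README's quick-start example is truncated by this hunk at the `config = LLMConfig(` call, so the diff does not show which arguments it takes. For orientation only, here is a sketch of how the pieces visible in this diff fit together; the `LLMConfig` keyword arguments and the `process_prompts_batch` parameters below are assumptions rather than the package's documented signatures, so check the linked documentation for the real API:

```python
import asyncio

from dotenv import load_dotenv
from llm_batch_helper import LLMConfig, process_prompts_batch

load_dotenv()  # optional: only needed if your API keys live in a .env file


async def main():
    # NOTE: the parameter names below are hypothetical placeholders, not the
    # documented LLMConfig / process_prompts_batch signatures.
    config = LLMConfig(
        model_name="gpt-4o-mini",  # hypothetical
        temperature=0.7,           # hypothetical
    )
    results = await process_prompts_batch(
        prompts=["Summarize the plot of Hamlet in one sentence."],  # hypothetical
        config=config,
        provider="openai",  # hypothetical
    )
    print(results)


if __name__ == "__main__":
    asyncio.run(main())
```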
--- /dev/null
+++ llm_batch_helper-0.1.6.dist-info/RECORD
@@ -0,0 +1,10 @@
+llm_batch_helper/__init__.py,sha256=a9jaWNSp0pqLz3gB_mo9r1aYo9f1i-qtuYhJj8AOPHk,348
+llm_batch_helper/cache.py,sha256=QUODQ1tPCvFThO3yvVOTcorcOrmN2dP5HLF1Y2O1bTQ,1276
+llm_batch_helper/config.py,sha256=WcZKTD-Mtocsx1plS9x3hh6MstVmyxD-tyidGUatkPY,1327
+llm_batch_helper/exceptions.py,sha256=59_f3jINUhKFble6HTp8pmtLSFE2MYLHWGclwaQKs28,296
+llm_batch_helper/input_handlers.py,sha256=IadA732F1Rw0zcBok5hjZr32RUm8eTUOpvLsRuMvaE4,2877
+llm_batch_helper/providers.py,sha256=cgccd_4D7J48ClmkigZ7KXOzTnBmaya8soDYF5IlPJs,10212
+llm_batch_helper-0.1.6.dist-info/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
+llm_batch_helper-0.1.6.dist-info/METADATA,sha256=Msg55neTu6jvxLKW6hicJOp-k7Q6Edp8qdIs_AKfVkM,11262
+llm_batch_helper-0.1.6.dist-info/WHEEL,sha256=b4K_helf-jlQoXBBETfwnf4B04YC67LOev0jo4fX5m8,88
+llm_batch_helper-0.1.6.dist-info/RECORD,,
--- llm_batch_helper-0.1.5.dist-info/RECORD
+++ /dev/null
@@ -1,10 +0,0 @@
-llm_batch_helper/__init__.py,sha256=POB4Fodeltq96NbiaLh7YSEPwEu50Giz46V2qyVZZoY,348
-llm_batch_helper/cache.py,sha256=QUODQ1tPCvFThO3yvVOTcorcOrmN2dP5HLF1Y2O1bTQ,1276
-llm_batch_helper/config.py,sha256=WcZKTD-Mtocsx1plS9x3hh6MstVmyxD-tyidGUatkPY,1327
-llm_batch_helper/exceptions.py,sha256=59_f3jINUhKFble6HTp8pmtLSFE2MYLHWGclwaQKs28,296
-llm_batch_helper/input_handlers.py,sha256=IadA732F1Rw0zcBok5hjZr32RUm8eTUOpvLsRuMvaE4,2877
-llm_batch_helper/providers.py,sha256=aV6IbGfRqFoxCQ90yd3UsCqmyeOBMRC9YW8VVq6ghq8,10258
-llm_batch_helper-0.1.5.dist-info/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
-llm_batch_helper-0.1.5.dist-info/METADATA,sha256=58ray3o9P37IjYcgXfPa_SS5YnQhz3M212zrxa0e3L0,8882
-llm_batch_helper-0.1.5.dist-info/WHEEL,sha256=b4K_helf-jlQoXBBETfwnf4B04YC67LOev0jo4fX5m8,88
-llm_batch_helper-0.1.5.dist-info/RECORD,,