tasks-prompts-chain 0.0.4__tar.gz → 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,407 @@
+ Metadata-Version: 2.4
+ Name: tasks_prompts_chain
+ Version: 0.1.0
+ Summary: A Python library for creating and executing chains of prompts using multiple LLM providers with streaming support and template formatting.
+ Project-URL: Homepage, https://github.com/smirfolio/tasks_prompts_chain
+ Project-URL: Issues, https://github.com/smirfolio/tasks_prompts_chain/issues
+ Author-email: Samir Ben Sghaier <ben.sghaier.samir@gmail.com>
+ License-Expression: Apache-2.0
+ License-File: LICENSE
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Requires-Python: >=3.8
+ Description-Content-Type: text/markdown
+
+ # TasksPromptsChain
+
+ A mini Python library for creating and executing chains of prompts across multiple LLM providers, with streaming support and output template formatting.
+
+ ## Features
+
+ - Sequential prompt chain execution
+ - Streaming responses
+ - Template-based output formatting
+ - System prompt support
+ - Placeholder replacement between prompts
+ - Multiple output formats (JSON, Markdown, CSV, Text)
+ - Async/await support
+ - Support for multiple LLM providers (OpenAI, Anthropic, Cerebras, etc.)
+ - Multi-model support - use different models for different prompts in the chain
+
+ ## Dependencies
+
+ Please install typing-extensions and the SDK for your preferred LLM provider(s):
+
+ For OpenAI:
+ ```bash
+ pip install typing-extensions
+ pip install openai
+ ```
+
+ For Anthropic:
+ ```bash
+ pip install typing-extensions
+ pip install anthropic
+ ```
+
+ For Cerebras:
+ ```bash
+ pip install typing-extensions
+ pip install cerebras
+ ```
+
+ To install the library:
+ ```bash
+ pip install tasks_prompts_chain
+ ```
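+
+ To confirm the installation, a quick import check can be run (a sanity check, not part of the library's documented setup):
+
+ ```bash
+ python -c "from tasks_prompts_chain import TasksPromptsChain; print('ok')"
+ ```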
+
+ ## Installation from source code
+
+ ### For users installing from the GitHub repo
+ ```bash
+ pip install -r requirements/requirements.txt
+ ```
+
+ ### For developers working from the GitHub repo
+ ```bash
+ pip install -r requirements/requirements.txt
+ pip install -r requirements/requirements-dev.txt
+ ```
+
+ ## Quick Start
+
+ ```python
+ import asyncio
+
+ from tasks_prompts_chain import TasksPromptsChain
+ from openai import AsyncOpenAI
+ from anthropic import AsyncAnthropic
+ from cerebras import AsyncCerebras
+
+ async def main():
+     # Initialize the chain with multiple LLM configurations
+     llm_configs = [
+         {
+             "llm_id": "gpt",           # Unique identifier for this LLM
+             "llm_class": AsyncOpenAI,  # LLM SDK class
+             "model_options": {
+                 "model": "gpt-4o",
+                 "api_key": "your-openai-api-key",
+                 "temperature": 0.1,
+                 "max_tokens": 4120,
+             }
+         },
+         {
+             "llm_id": "claude",           # Unique identifier for this LLM
+             "llm_class": AsyncAnthropic,  # LLM SDK class
+             "model_options": {
+                 "model": "claude-3-sonnet-20240229",
+                 "api_key": "your-anthropic-api-key",
+                 "temperature": 0.1,
+                 "max_tokens": 8192,
+             }
+         },
+         {
+             "llm_id": "llama",           # Unique identifier for this LLM
+             "llm_class": AsyncCerebras,  # LLM SDK class
+             "model_options": {
+                 "model": "llama-3.3-70b",
+                 "api_key": "your-cerebras-api-key",
+                 "base_url": "https://api.cerebras.ai/v1",
+                 "temperature": 0.1,
+                 "max_tokens": 4120,
+             }
+         }
+     ]
+
+     chain = TasksPromptsChain(
+         llm_configs,
+         final_result_placeholder="design_result"
+     )
+
+     # Define your prompts - specify which LLM to use for each prompt
+     prompts = [
+         {
+             "prompt": "Create a design concept for a luxury chocolate bar",
+             "output_format": "TEXT",
+             "output_placeholder": "design_concept",
+             "llm_id": "gpt"  # Use the GPT model for this prompt
+         },
+         {
+             "prompt": "Based on this concept: {{design_concept}}, suggest a color palette",
+             "output_format": "JSON",
+             "output_placeholder": "color_palette",
+             "llm_id": "claude"  # Use the Claude model for this prompt
+         },
+         {
+             "prompt": "Based on the design and colors: {{design_concept}} and {{color_palette}}, suggest packaging materials",
+             "output_format": "MARKDOWN",
+             "output_placeholder": "packaging",
+             "llm_id": "llama"  # Use the Cerebras model for this prompt
+         }
+     ]
+
+     # Stream the responses
+     async for chunk in chain.execute_chain(prompts):
+         print(chunk, end="", flush=True)
+
+     # Get specific results
+     design = chain.get_result("design_concept")
+     colors = chain.get_result("color_palette")
+     packaging = chain.get_result("packaging")
+
+ asyncio.run(main())
+ ```
+
+ ## Advanced Usage
+
+ ### Using System Prompts
+
+ ```python
+ chain = TasksPromptsChain(
+     llm_configs=[
+         {
+             "llm_id": "default_model",
+             "llm_class": AsyncOpenAI,
+             "model_options": {
+                 "model": "gpt-4o",
+                 "api_key": "your-openai-api-key",
+                 "temperature": 0.1,
+                 "max_tokens": 4120,
+             }
+         }
+     ],
+     final_result_placeholder="result",
+     system_prompt="You are a professional design expert specialized in luxury products",
+     system_apply_to_all_prompts=True
+ )
+ ```
+
+ ### Using Cerebras Models
+
+ ```python
+ from cerebras import AsyncCerebras
+
+ llm_configs = [
+     {
+         "llm_id": "cerebras",
+         "llm_class": AsyncCerebras,
+         "model_options": {
+             "model": "llama-3.3-70b",
+             "api_key": "your-cerebras-api-key",
+             "base_url": "https://api.cerebras.ai/v1",
+             "temperature": 0.1,
+             "max_tokens": 4120,
+         }
+     }
+ ]
+
+ chain = TasksPromptsChain(
+     llm_configs,
+     final_result_placeholder="result",
+ )
+ ```
+
+ ### Custom API Endpoint
+
+ ```python
+ llm_configs = [
+     {
+         "llm_id": "custom_endpoint",
+         "llm_class": AsyncOpenAI,
+         "model_options": {
+             "model": "your-custom-model",
+             "api_key": "your-api-key",
+             "base_url": "https://your-custom-endpoint.com/v1",
+             "temperature": 0.1,
+             "max_tokens": 4120,
+         }
+     }
+ ]
+
+ chain = TasksPromptsChain(
+     llm_configs,
+     final_result_placeholder="result",
+ )
+ ```
+
+ ### Using Templates
+
+ Call this setter before executing the chain (`chain.execute_chain(prompts)`):
+
+ ```python
+ # Set the output template before execution
+ chain.template_output("""
+ <result>
+     <design>
+     ### Design Concept:
+     {{design_concept}}
+     </design>
+
+     <colors>
+     ### Color Palette:
+     {{color_palette}}
+     </colors>
+ </result>
+ """)
+ ```
+
+ Then retrieve the final result rendered within the template:
+
+ ```python
+ # Print the final result in the formatted template
+ print(chain.get_final_result_within_template())
+ ```
+
+ ## API Reference
+
+ ### TasksPromptsChain Class
+
+ #### Constructor Parameters
+
+ - `llm_configs` (List[Dict]): List of LLM configurations, each containing:
+   - `llm_id` (str): Unique identifier for this LLM configuration
+   - `llm_class`: The LLM class to use (e.g., `AsyncOpenAI`, `AsyncAnthropic`, `AsyncCerebras`)
+   - `model_options` (Dict): Configuration for the LLM:
+     - `model` (str): The model identifier
+     - `api_key` (str): Your API key for the LLM provider
+     - `temperature` (float): Temperature setting for response generation
+     - `max_tokens` (int): Maximum tokens in generated responses
+     - `base_url` (Optional[str]): Custom API endpoint URL
+ - `system_prompt` (Optional[str]): System prompt for context
+ - `final_result_placeholder` (str): Name for the final result placeholder
+ - `system_apply_to_all_prompts` (Optional[bool]): Apply the system prompt to all prompts
+
+ #### Methods
+
+ - `execute_chain(prompts: List[Dict], streamout: bool = True) -> AsyncGenerator[str, None]`
+   - Executes the prompt chain and streams responses
+
+ - `template_output(template: str) -> None`
+   - Sets the output template format
+
+ - `get_final_result_within_template() -> Optional[str]`
+   - Retrieves the final result rendered in the template defined via `template_output()`
+
+ - `get_result(placeholder: str) -> Optional[str]`
+   - Retrieves a specific result by its placeholder
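+
+ As a quick illustration of how these methods fit together (a sketch meant to run inside an async function, reusing the `prompts` list from the Quick Start; note the template must be set before execution):
+
+ ```python
+ # Set the template first, then execute, then read results
+ chain.template_output("<summary>{{design_concept}}</summary>")
+
+ async for chunk in chain.execute_chain(prompts):
+     print(chunk, end="", flush=True)
+
+ print(chain.get_result("design_concept"))        # a single prompt's output
+ print(chain.get_final_result_within_template())  # the results rendered in the template
+ ```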
+
+ ### Prompt Format
+
+ Each prompt in the chain is defined as a dictionary:
+ ```python
+ {
+     "prompt": str,              # The actual prompt text
+     "output_format": str,       # "JSON", "MARKDOWN", "CSV", or "TEXT"
+     "output_placeholder": str,  # Identifier for accessing this result
+     "llm_id": str               # Optional: ID of the LLM to use for this prompt
+ }
+ ```
+
+ ## Supported LLM Providers
+
+ TasksPromptsChain currently supports the following LLM providers:
+
+ 1. **OpenAI** - via `AsyncOpenAI` from the `openai` package
+ 2. **Anthropic** - via `AsyncAnthropic` from the `anthropic` package
+ 3. **Cerebras** - via `AsyncCerebras` from the `cerebras` package
+
+ Each provider has different capabilities and models. The library adapts the API calls to work with each provider's specific requirements.
+
+ ## Error Handling
+
+ The library includes comprehensive error handling:
+ - Template validation
+ - API error handling
+ - Placeholder validation
+ - LLM validation (checks whether the specified LLM ID exists)
+
+ Errors are raised with descriptive messages indicating the specific issue and the prompt number where the error occurred.
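+
+ Because these errors surface as raised exceptions, wrapping the chain execution in a try/except inside your async function is enough to catch them (a minimal sketch; the generic `Exception` below stands in for whatever specific classes the library raises):
+
+ ```python
+ try:
+     async for chunk in chain.execute_chain(prompts):
+         print(chunk, end="", flush=True)
+ except Exception as exc:
+     # The message names the failing prompt number and the specific issue,
+     # e.g. an unknown llm_id or an invalid placeholder
+     print(f"Chain execution failed: {exc}")
+ ```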
+
+ ## Best Practices
+
+ 1. Always set templates before executing the chain
+ 2. Use meaningful placeholder names
+ 3. Handle streaming responses appropriately (see the sketch below)
+ 4. Choose appropriate models for different types of tasks
+ 5. Use system prompts for consistent context
+ 6. Select the best provider for specific tasks:
+    - OpenAI is great for general purpose applications
+    - Anthropic (Claude) excels at longer contexts and complex reasoning
+    - Cerebras is excellent for high-performance, low-latency inference
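+
+ For point 3, streamed chunks can be displayed as they arrive and accumulated for later processing (a sketch to run inside an async function):
+
+ ```python
+ collected = []
+ async for chunk in chain.execute_chain(prompts):
+     print(chunk, end="", flush=True)  # live display
+     collected.append(chunk)           # keep for post-processing
+ full_text = "".join(collected)
+ ```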
+
+ ## How You Can Get Involved
+ ✅ Try out tasks_prompts_chain: Give our software a try in your own setup and let us know how it goes - your experience helps us improve!
+
+ ✅ Find a bug: Found something that doesn't work quite right? We'd appreciate your help in documenting it so we can fix it together.
+
+ ✅ Fix bugs: Even small code contributions make a big difference! Pick an issue that interests you and share your solution with us.
+
+ ✅ Share your thoughts: Have an idea that would make this project more useful? We're excited to hear your thoughts and explore new possibilities together!
+
+ Your contributions, big or small, truly matter to us. We're grateful for any help you can provide and look forward to welcoming you to our community!
+
+ ### Developer Contribution Workflow
+ 1. Fork the Repository: Create your own copy of the project by clicking the "Fork" button on our GitHub repository.
+ 2. Clone Your Fork:
+ ```bash
+ git clone git@github.com:<your-username>/tasks_prompts_chain.git
+ cd tasks_prompts_chain/
+ ```
+ 3. Set Up Development Environment
+ ```bash
+ # Create and activate a virtual environment
+ python3 -m venv .venv
+ source .venv/bin/activate  # On Windows: .venv\Scripts\activate
+
+ # Install development dependencies
+ pip install -r requirements/requirements-dev.txt
+ ```
+ 4. Stay Updated
+ ```bash
+ # Add the upstream repository
+ git remote add upstream https://github.com/smirfolio/tasks_prompts_chain.git
+
+ # Fetch latest changes from upstream
+ git fetch upstream
+ git merge upstream/main
+ ```
+ #### Making Changes
+ 1. Create a Feature Branch
+ ```bash
+ git checkout -b feature/your-feature-name
+ # or
+ git checkout -b bugfix/issue-you-are-fixing
+ ```
+ 2. Implement Your Changes
+    - Write tests for your changes when applicable
+    - Ensure existing tests pass with `pytest`
+    - Follow our code style guidelines
+ 3. Commit Your Changes
+ ```bash
+ git add .
+ git commit -m "Your descriptive commit message"
+ ```
+ 4. Push to Your Fork
+ ```bash
+ git push origin feature/your-feature-name
+ ```
+ 5. Create a Pull Request
+ 6. Code Review Process
+    - Maintainers will review your PR
+    - Address any requested changes
+    - Once approved, your contribution will be merged!
+
+
+ ## Release Notes
+
+ ### 0.1.0 - Breaking Changes
+
+ - **Complete API redesign**: The constructor now requires a list of LLM configurations instead of a single LLM class
+ - **Multi-model support**: Use different models for different prompts in the chain
+ - **Constructor changes**: Replaced `AsyncLLmAi` and `model_options` with `llm_configs`
+ - **New provider support**: Added official support for Cerebras models
+ - **Removed dependencies**: No longer depends directly on the OpenAI SDK
+ - **Prompt configuration**: Added `llm_id` field to prompt dictionaries to specify which LLM to use
+
+ Users upgrading from version 0.0.x will need to modify their code to use the new API structure.
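+
+ A rough before/after sketch of the migration (the 0.0.x signature is inferred from the parameter names listed above, so treat it as illustrative):
+
+ ```python
+ # 0.0.x style (old): a single LLM class plus one set of model options
+ chain = TasksPromptsChain(
+     AsyncLLmAi=AsyncOpenAI,
+     model_options={"model": "gpt-4o", "api_key": "your-openai-api-key"},
+     final_result_placeholder="result",
+ )
+
+ # 0.1.0 style (new): a list of LLM configs, each with its own llm_id
+ chain = TasksPromptsChain(
+     llm_configs=[
+         {
+             "llm_id": "gpt",
+             "llm_class": AsyncOpenAI,
+             "model_options": {"model": "gpt-4o", "api_key": "your-openai-api-key"},
+         }
+     ],
+     final_result_placeholder="result",
+ )
+ ```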
404
+
405
+ ## License
406
+
407
+ MIT License