tasks-prompts-chain 0.0.6__tar.gz → 0.1.1__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,412 @@
+ Metadata-Version: 2.4
+ Name: tasks_prompts_chain
+ Version: 0.1.1
+ Summary: A Python library for creating and executing chains of prompts using multiple LLM providers with streaming support and template formatting.
+ Project-URL: Homepage, https://github.com/smirfolio/tasks_prompts_chain
+ Project-URL: Issues, https://github.com/smirfolio/tasks_prompts_chain/issues
+ Author-email: Samir Ben Sghaier <ben.sghaier.samir@gmail.com>
+ License-Expression: Apache-2.0
+ License-File: LICENSE
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Requires-Python: >=3.8
+ Description-Content-Type: text/markdown
+
+ # TasksPromptsChain
+
+ A mini Python library for creating and executing chains of prompts across multiple LLM providers, with streaming support and output template formatting.
+
+ ## Features
+
+ - Sequential prompt chain execution
+ - Streaming responses
+ - Template-based output formatting
+ - System prompt support
+ - Placeholder replacement between prompts
+ - Configurable stop placeholder string: if the LLM responds with this string, the chain is interrupted, serving as an LLM-level error handler
+ - Multiple output formats (JSON, Markdown, CSV, Text)
+ - Async/await support
+ - Support for multiple LLM providers (OpenAI, Anthropic, Cerebras, etc.)
+ - Multi-model support - use different models for different prompts in the chain
+
+ ## Dependencies
+
+ Please install typing-extensions and the SDK for your preferred LLM providers:
+
+ For OpenAI:
+ ```bash
+ pip install typing-extensions
+ pip install openai
+ ```
+
+ For Anthropic:
+ ```bash
+ pip install typing-extensions
+ pip install anthropic
+ ```
+
+ For Cerebras:
+ ```bash
+ pip install typing-extensions
+ pip install cerebras
+ ```
+
+ To install the library:
+ ```bash
+ pip install tasks_prompts_chain
+ ```
+
+ ## Installation from source code
+
+ ### For users installing from the GitHub repo
+ ```bash
+ pip install -r requirements/requirements.txt
+ ```
+
+ ### For developers installing from the GitHub repo
+ ```bash
+ pip install -r requirements/requirements.txt
+ pip install -r requirements/requirements-dev.txt
+ ```
+
+ ## Quick Start
+
+ ```python
+ import asyncio
+
+ from tasks_prompts_chain import TasksPromptsChain
+ from openai import AsyncOpenAI
+ from anthropic import AsyncAnthropic
+ from cerebras import AsyncCerebras
+
+ async def main():
+     # Initialize the chain with multiple LLM configurations
+     llm_configs = [
+         {
+             "llm_id": "gpt",  # Unique identifier for this LLM
+             "llm_class": AsyncOpenAI,  # LLM SDK class
+             "model_options": {
+                 "model": "gpt-4o",
+                 "api_key": "your-openai-api-key",
+                 "temperature": 0.1,
+                 "max_tokens": 4120,
+             }
+         },
+         {
+             "llm_id": "claude",  # Unique identifier for this LLM
+             "llm_class": AsyncAnthropic,  # LLM SDK class
+             "model_options": {
+                 "model": "claude-3-sonnet-20240229",
+                 "api_key": "your-anthropic-api-key",
+                 "temperature": 0.1,
+                 "max_tokens": 8192,
+             }
+         },
+         {
+             "llm_id": "llama",  # Unique identifier for this LLM
+             "llm_class": AsyncCerebras,  # LLM SDK class
+             "model_options": {
+                 "model": "llama-3.3-70b",
+                 "api_key": "your-cerebras-api-key",
+                 "base_url": "https://api.cerebras.ai/v1",
+                 "temperature": 0.1,
+                 "max_tokens": 4120,
+             }
+         }
+     ]
+
+     chain = TasksPromptsChain(
+         llm_configs,
+         final_result_placeholder="design_result"
+     )
+
+     # Define your prompts - specify which LLM to use for each prompt
+     prompts = [
+         {
+             "prompt": "Create a design concept for a luxury chocolate bar; if not inspired, respond with the string %%cant_be_inspired%% so the prompt chain is stopped",
+             "output_format": "TEXT",
+             "output_placeholder": "design_concept",
+             "llm_id": "gpt",  # Use the GPT model for this prompt
+             "stop_placholder": "%%cant_be_inspired%%"  # Stop string that interrupts the chain if the LLM returns it
+         },
+         {
+             "prompt": "Based on this concept: {{design_concept}}, suggest a color palette",
+             "output_format": "JSON",
+             "output_placeholder": "color_palette",
+             "llm_id": "claude"  # Use the Claude model for this prompt
+         },
+         {
+             "prompt": "Based on the design and colors: {{design_concept}} and {{color_palette}}, suggest packaging materials",
+             "output_format": "MARKDOWN",
+             "output_placeholder": "packaging",
+             "llm_id": "llama"  # Use the Cerebras model for this prompt
+         }
+     ]
+
+     # Stream the responses
+     async for chunk in chain.execute_chain(prompts):
+         print(chunk, end="", flush=True)
+
+     # Get specific results
+     design = chain.get_result("design_concept")
+     colors = chain.get_result("color_palette")
+     packaging = chain.get_result("packaging")
+
+ if __name__ == "__main__":
+     asyncio.run(main())
+ ```
153
+
154
+ ## Advanced Usage
155
+
156
+ ### Using System Prompts
157
+
158
+ ```python
159
+ chain = TasksPromptsChain(
160
+ llm_configs=[
161
+ {
162
+ "llm_id": "default_model",
163
+ "llm_class": AsyncOpenAI,
164
+ "model_options": {
165
+ "model": "gpt-4o",
166
+ "api_key": "your-openai-api-key",
167
+ "temperature": 0.1,
168
+ "max_tokens": 4120,
169
+ }
170
+ }
171
+ ],
172
+ final_result_placeholder="result",
173
+ system_prompt="You are a professional design expert specialized in luxury products",
174
+ system_apply_to_all_prompts=True
175
+ )
176
+ ```
177
+
178
+ ### Using Cerebras Models
179
+
180
+ ```python
181
+ from cerebras import AsyncCerebras
182
+
183
+ llm_configs = [
184
+ {
185
+ "llm_id": "cerebras",
186
+ "llm_class": AsyncCerebras,
187
+ "model_options": {
188
+ "model": "llama-3.3-70b",
189
+ "api_key": "your-cerebras-api-key",
190
+ "base_url": "https://api.cerebras.ai/v1",
191
+ "temperature": 0.1,
192
+ "max_tokens": 4120,
193
+ }
194
+ }
195
+ ]
196
+
197
+ chain = TasksPromptsChain(
198
+ llm_configs,
199
+ final_result_placeholder="result",
200
+ )
201
+ ```
202
+
203
+ ### Custom API Endpoint
204
+
205
+ ```python
206
+ llm_configs = [
207
+ {
208
+ "llm_id": "custom_endpoint",
209
+ "llm_class": AsyncOpenAI,
210
+ "model_options": {
211
+ "model": "your-custom-model",
212
+ "api_key": "your-api-key",
213
+ "base_url": "https://your-custom-endpoint.com/v1",
214
+ "temperature": 0.1,
215
+ "max_tokens": 4120,
216
+ }
217
+ }
218
+ ]
219
+
220
+ chain = TasksPromptsChain(
221
+ llm_configs,
222
+ final_result_placeholder="result",
223
+ )
224
+ ```
225
+
226
+ ### Using Templates
227
+
228
+ You must call this set method before the execution of the prompting query (chain.execute_chain(prompts))
229
+
230
+ ```python
231
+ # Set output template before execution
232
+ chain.template_output("""
233
+ <result>
234
+ <design>
235
+ ### Design Concept:
236
+ {{design_concept}}
237
+ </design>
238
+
239
+ <colors>
240
+ ### Color Palette:
241
+ {{color_palette}}
242
+ </colors>
243
+ </result>
244
+ """)
245
+ ```
246
+ then retrieves the final result within the template :
247
+
248
+ ```python
249
+ # print out the final result in the well formated template
250
+ print(chain.get_final_result_within_template())
251
+ ```
252
+
253
+
+ ## API Reference
+
+ ### TasksPromptsChain Class
+
+ #### Constructor Parameters
+
+ - `llm_configs` (List[Dict]): List of LLM configurations, each containing:
+   - `llm_id` (str): Unique identifier for this LLM configuration
+   - `llm_class`: The LLM class to use (e.g., `AsyncOpenAI`, `AsyncAnthropic`, `AsyncCerebras`)
+   - `model_options` (Dict): Configuration for the LLM:
+     - `model` (str): The model identifier
+     - `api_key` (str): Your API key for the LLM provider
+     - `temperature` (float): Temperature setting for response generation
+     - `max_tokens` (int): Maximum tokens in generated responses
+     - `base_url` (Optional[str]): Custom API endpoint URL
+ - `system_prompt` (Optional[str]): System prompt for context
+ - `final_result_placeholder` (str): Name for the final result placeholder
+ - `system_apply_to_all_prompts` (Optional[bool]): Apply the system prompt to all prompts
+
+ #### Methods
+
+ - `execute_chain(prompts: List[Dict], streamout: bool = True) -> AsyncGenerator[str, None]`
+   - Executes the prompt chain and streams responses
+
+ - `template_output(template: str) -> None`
+   - Sets the output template format
+
+ - `get_final_result_within_template() -> Optional[str]`
+   - Retrieves the final result rendered with the template set via `template_output()`
+
+ - `get_result(placeholder: str) -> Optional[str]`
+   - Retrieves a specific result by placeholder
+
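+ Putting these methods together, here is a minimal sketch (placeholder names are illustrative, and it assumes `streamout=False` suppresses chunk output while the generator still drives execution):
+
+ ```python
+ # Inside an async function, with `chain` and `prompts` set up as in the Quick Start
+ chain.template_output("<summary>{{design_concept}}</summary>")  # must be set before execution
+
+ async for _ in chain.execute_chain(prompts, streamout=False):
+     pass  # drive the chain without printing chunks
+
+ print(chain.get_result("design_concept"))
+ print(chain.get_final_result_within_template())
+ ```
+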
+ ### Prompt Format
+
+ Each prompt in the chain is defined as a dictionary:
+ ```python
+ {
+     "prompt": str,              # The actual prompt text
+     "output_format": str,       # "JSON", "MARKDOWN", "CSV", or "TEXT"
+     "output_placeholder": str,  # Identifier for accessing this result
+     "llm_id": str,              # Optional: ID of the LLM to use for this prompt
+     "stop_placholder": str      # Optional: stop string; if the LLM returns it, the chain is interrupted
+ }
+ ```
+
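+ For example, a minimal prompt that omits the optional fields (assuming the chain then falls back to a default LLM, since `llm_id` is optional):
+
+ ```python
+ summary_prompt = {
+     "prompt": "Summarize these notes: {{design_concept}}",
+     "output_format": "TEXT",
+     "output_placeholder": "summary",
+ }
+ ```
+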
+ ## Supported LLM Providers
+
+ TasksPromptsChain currently supports the following LLM providers:
+
+ 1. **OpenAI** - via `AsyncOpenAI` from the `openai` package
+ 2. **Anthropic** - via `AsyncAnthropic` from the `anthropic` package
+ 3. **Cerebras** - via `AsyncCerebras` from the `cerebras` package
+
+ Each provider has different capabilities and models. The library adapts the API calls to work with each provider's specific requirements.
+
+ ## Error Handling
+
+ The library includes comprehensive error handling:
+ - Template validation
+ - API error handling
+ - Placeholder validation
+ - LLM validation (checks that the specified LLM ID exists)
+ - `stop_placholder` checks that validate the LLM output and stop the chain execution
+
+ Errors are raised with descriptive messages indicating the specific issue and the prompt number where the error occurred.
+
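+ A sketch of catching these errors during execution (the library's exact exception types are not documented here, so a broad `Exception` is used):
+
+ ```python
+ try:
+     async for chunk in chain.execute_chain(prompts):
+         print(chunk, end="", flush=True)
+ except Exception as exc:
+     # Messages include the failing prompt number, per the list above
+     print(f"Chain execution failed: {exc}")
+ ```
+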
+ ## Best Practices
+
+ 1. Always set templates before executing the chain
+ 2. Use meaningful placeholder names
+ 3. Define a `stop_placholder` in each prompt to catch bad LLM responses and stop the subsequent requests
+ 4. Handle streaming responses appropriately
+ 5. Choose appropriate models for different types of tasks
+ 6. Use system prompts for consistent context
+ 7. Select the best provider for specific tasks:
+    - OpenAI is great for general-purpose applications
+    - Anthropic (Claude) excels at longer contexts and complex reasoning
+    - Cerebras is excellent for high-performance AI tasks
+
+ ## How You Can Get Involved
+ ✅ Try out tasks_prompts_chain: Give our software a try in your own setup and let us know how it goes - your experience helps us improve!
+
+ ✅ Find a bug: Found something that doesn't work quite right? We'd appreciate your help in documenting it so we can fix it together.
+
+ ✅ Fix bugs: Even small code contributions make a big difference! Pick an issue that interests you and share your solution with us.
+
+ ✅ Share your thoughts: Have an idea that would make this project more useful? We're excited to hear your thoughts and explore new possibilities together!
+
+ Your contributions, big or small, truly matter to us. We're grateful for any help you can provide and look forward to welcoming you to our community!
+
+ ### Developer Contribution Workflow
+ 1. Fork the Repository: Create your own copy of the project by clicking the "Fork" button on our GitHub repository.
+ 2. Clone Your Fork:
+    ```bash
+    git clone git@github.com:<your-username>/tasks_prompts_chain.git
+    cd tasks_prompts_chain/
+    ```
+ 3. Set Up the Development Environment:
+    ```bash
+    # Create and activate a virtual environment
+    python3 -m venv .venv
+    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
+
+    # Install development dependencies
+    pip install -r requirements/requirements-dev.txt
+    ```
+ 4. Stay Updated:
+    ```bash
+    # Add the upstream repository
+    git remote add upstream https://github.com/smirfolio/tasks_prompts_chain.git
+
+    # Fetch the latest changes from upstream
+    git fetch upstream
+    git merge upstream/main
+    ```
+
+ #### Making Changes
+ 1. Create a Feature Branch:
+    ```bash
+    git checkout -b feature/your-feature-name
+    # or
+    git checkout -b bugfix/issue-you-are-fixing
+    ```
+ 2. Implement Your Changes:
+    - Write tests for your changes when applicable
+    - Ensure existing tests pass with pytest
+    - Follow our code style guidelines
+ 3. Commit Your Changes:
+    ```bash
+    git add .
+    git commit -m "Your descriptive commit message"
+    ```
+ 4. Push to Your Fork:
+    ```bash
+    git push origin feature/your-feature-name
+    ```
+ 5. Create a Pull Request
+ 6. Code Review Process:
+    - Maintainers will review your PR
+    - Address any requested changes
+    - Once approved, your contribution will be merged!
+
+ ## Release Notes
+
+ ### 0.1.0 - Breaking Changes
+
+ - **Complete API redesign**: The constructor now requires a list of LLM configurations instead of a single LLM class
+ - **Multi-model support**: Use different models for different prompts in the chain
+ - **Constructor changes**: Replaced `AsyncLLmAi` and `model_options` with `llm_configs`
+ - **New provider support**: Added official support for Cerebras models
+ - **Removed dependencies**: No longer directly depends on the OpenAI SDK
+ - **Prompt configuration**: Added `llm_id` field to prompt dictionaries to specify which LLM to use
+
+ Users upgrading from version 0.0.x will need to modify their code to use the new API structure.
+
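+ A sketch of the migration, assuming the 0.0.x constructor took the LLM class and `model_options` directly (old parameter names inferred from the notes above):
+
+ ```python
+ # 0.0.x (old API, assumed shape)
+ # chain = TasksPromptsChain(
+ #     AsyncLLmAi=AsyncOpenAI,
+ #     model_options={"model": "gpt-4o", "api_key": "your-openai-api-key"},
+ #     final_result_placeholder="result",
+ # )
+
+ # 0.1.x (new API)
+ chain = TasksPromptsChain(
+     llm_configs=[{
+         "llm_id": "default_model",
+         "llm_class": AsyncOpenAI,
+         "model_options": {"model": "gpt-4o", "api_key": "your-openai-api-key"},
+     }],
+     final_result_placeholder="result",
+ )
+ ```
+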
+ ## License
+
+ Apache License 2.0 (see the `LICENSE` file)