agenta 0.59.11__py3-none-any.whl → 0.59.12__py3-none-any.whl

This diff compares the contents of two publicly released versions of the package as they appear in their public registry. It is provided for informational purposes only.

Note: this release of agenta has been flagged as potentially problematic.

@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: agenta
-Version: 0.59.11
+Version: 0.59.12
 Summary: The SDK for agenta is an open-source LLMOps platform.
 Keywords: LLMOps,LLM,evaluation,prompt engineering
 Author: Mahmoud Mabrouk
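As a quick sanity check after upgrading, the installed distribution can be queried for the version shown in the hunk above. A minimal sketch, assuming the 0.59.12 wheel has been installed into the current environment (for example via pip install --upgrade agenta==0.59.12):

    # Confirm the installed agenta distribution matches the bumped Version field.
    # Standard library only; assumes the 0.59.12 wheel is installed in this environment.
    from importlib.metadata import version

    installed = version("agenta")
    print("installed agenta version:", installed)
    assert installed == "0.59.12", f"expected 0.59.12, found {installed}"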
@@ -127,11 +127,11 @@ Agenta is a platform for building production-grade LLM applications. It helps **
 Collaborate with Subject Matter Experts (SMEs) on prompt engineering and make sure nothing breaks in production.
 
 - **Interactive Playground**: Compare prompts side by side against your test cases
-- **Multi-Model Support**: Experiment with 50+ LLM models or [bring-your-own models](https://docs.agenta.ai/prompt-engineering/playground/adding-custom-providers?utm_source=github&utm_medium=referral&utm_campaign=readme)
+- **Multi-Model Support**: Experiment with 50+ LLM models or [bring-your-own models](https://docs.agenta.ai/prompt-engineering/playground/custom-providers?utm_source=github&utm_medium=referral&utm_campaign=readme)
 - **Version Control**: Version prompts and configurations with branching and environments
 - **Complex Configurations**: Enable SMEs to collaborate on [complex configuration schemas](https://docs.agenta.ai/custom-workflows/overview?utm_source=github&utm_medium=referral&utm_campaign=readme) beyond simple prompts
 
-[Explore prompt management →](https://docs.agenta.ai/prompt-engineering/overview?utm_source=github&utm_medium=referral&utm_campaign=readme)
+[Explore prompt management →](https://docs.agenta.ai/prompt-engineering/concepts?utm_source=github&utm_medium=referral&utm_campaign=readme)
 
 ### 📊 Evaluation & Testing
 Evaluate your LLM applications systematically with both human and automated feedback.
@@ -366,6 +366,6 @@ agenta/sdk/workflows/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hS
 agenta/sdk/workflows/registry.py,sha256=4FRSeU4njMmP6xFCIteF5f_W6NVlqFTx1AM7hsaGAQk,975
 agenta/sdk/workflows/types.py,sha256=SjYeT8FWVgwaIIC8sI3fRjKERLEA_oxuBGvYSaFqNg8,11720
 agenta/sdk/workflows/utils.py,sha256=ILfY8DSBWLrdWIuKg6mq7rANwKiiY6sxEeFiBFhjLYM,413
-agenta-0.59.11.dist-info/METADATA,sha256=6EMAGsXPRZRxLgop3ib4AFvkTFIxkeMVsEPIebeSD20,31793
-agenta-0.59.11.dist-info/WHEEL,sha256=zp0Cn7JsFoX2ATtOhtaFYIiE2rmFAD4OcMhtUki8W3U,88
-agenta-0.59.11.dist-info/RECORD,,
+agenta-0.59.12.dist-info/METADATA,sha256=EwEX0IJSewG8t3eyAx-ocqL4YJQJ7KIBQeX_Dw2vWn0,31786
+agenta-0.59.12.dist-info/WHEEL,sha256=zp0Cn7JsFoX2ATtOhtaFYIiE2rmFAD4OcMhtUki8W3U,88
+agenta-0.59.12.dist-info/RECORD,,
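The sha256 values in the RECORD hunk above follow the standard wheel RECORD convention: the URL-safe base64 encoding of the file's SHA-256 digest with trailing "=" padding stripped. A minimal sketch of recomputing one of them, assuming the new wheel has been downloaded locally as agenta-0.59.12-py3-none-any.whl (an assumed local filename):

    # Recompute the RECORD-style hash of METADATA inside the downloaded wheel.
    # Standard library only; the wheel path below is an assumed local filename.
    import base64
    import hashlib
    import zipfile

    wheel_path = "agenta-0.59.12-py3-none-any.whl"
    with zipfile.ZipFile(wheel_path) as whl:
        data = whl.read("agenta-0.59.12.dist-info/METADATA")

    digest = hashlib.sha256(data).digest()
    record_hash = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    print("sha256=" + record_hash)  # should match the RECORD entry shown in the diff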