ragaai-catalyst 2.0.5__py3-none-any.whl → 2.0.6__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,386 @@
1
+ Metadata-Version: 2.1
2
+ Name: ragaai_catalyst
3
+ Version: 2.0.6
4
+ Summary: RAGA AI CATALYST
5
+ Author-email: Kiran Scaria <kiran.scaria@raga.ai>, Kedar Gaikwad <kedar.gaikwad@raga.ai>, Dushyant Mahajan <dushyant.mahajan@raga.ai>, Siddhartha Kosti <siddhartha.kosti@raga.ai>, Ritika Goel <ritika.goel@raga.ai>, Vijay Chaurasia <vijay.chaurasia@raga.ai>
6
+ Requires-Python: >=3.9
7
+ Description-Content-Type: text/markdown
8
+ Requires-Dist: aiohttp>=3.10.2
9
+ Requires-Dist: opentelemetry-api==1.25.0
10
+ Requires-Dist: opentelemetry-sdk==1.25.0
11
+ Requires-Dist: opentelemetry-exporter-otlp-proto-grpc==1.25.0
12
+ Requires-Dist: opentelemetry-instrumentation==0.46b0
13
+ Requires-Dist: opentelemetry-instrumentation-fastapi==0.46b0
14
+ Requires-Dist: opentelemetry-instrumentation-asgi==0.46b0
15
+ Requires-Dist: opentelemetry-semantic-conventions==0.46b0
16
+ Requires-Dist: opentelemetry-util-http==0.46b0
17
+ Requires-Dist: opentelemetry-instrumentation-langchain~=0.24.0
18
+ Requires-Dist: opentelemetry-instrumentation-openai~=0.24.0
19
+ Requires-Dist: langchain-core>=0.2.11
20
+ Requires-Dist: langchain>=0.2.11
21
+ Requires-Dist: openai>=1.35.10
22
+ Requires-Dist: pandas>=2.1.1
23
+ Requires-Dist: groq>=0.11.0
24
+ Requires-Dist: PyPDF2>=3.0.1
25
+ Requires-Dist: google-generativeai>=0.8.2
26
+ Requires-Dist: Markdown>=3.7
27
+ Requires-Dist: litellm==1.51.1
28
+ Requires-Dist: tenacity==8.3.0
29
+ Requires-Dist: tqdm>=4.66.5
30
+ Requires-Dist: llama-index==0.10.0
31
+ Requires-Dist: pyopenssl==24.2.1
32
+ Provides-Extra: dev
33
+ Requires-Dist: pytest; extra == "dev"
34
+ Requires-Dist: pytest-cov; extra == "dev"
35
+ Requires-Dist: black; extra == "dev"
36
+ Requires-Dist: isort; extra == "dev"
37
+ Requires-Dist: mypy; extra == "dev"
38
+ Requires-Dist: flake8; extra == "dev"
39
+
40
+ # RagaAI Catalyst
41
+
42
+ RagaAI Catalyst is a comprehensive platform designed to enhance the management and optimization of LLM projects. It offers a wide range of features, including project management, dataset management, evaluation management, trace management, prompt management, synthetic data generation, and guardrail management. These capabilities let you efficiently evaluate and safeguard your LLM applications.
43
+
44
+ ## Table of Contents
45
+
46
+ - [RagaAI Catalyst](#ragaai-catalyst)
47
+ - [Table of Contents](#table-of-contents)
48
+ - [Installation](#installation)
49
+ - [Configuration](#configuration)
50
+ - [Usage](#usage)
51
+ - [Project Management](#project-management)
52
+ - [Dataset Management](#dataset-management)
53
+ - [Evaluation Management](#evaluation)
54
+ - [Trace Management](#trace-management)
55
+ - [Prompt Management](#prompt-management)
56
+ - [Synthetic Data Generation](#synthetic-data-generation)
57
+ - [Guardrail Management](#guardrail-management)
58
+
59
+ ## Installation
60
+
61
+ To install RagaAI Catalyst, you can use pip:
62
+
63
+ ```bash
64
+ pip install ragaai-catalyst
65
+ ```
66
+
67
+ ## Configuration
68
+
69
+ Before using RagaAI Catalyst, you need to set up your credentials. You can do this by setting environment variables or passing them directly to the `RagaAICatalyst` class:
70
+
71
+ ```python
72
+ from ragaai_catalyst import RagaAICatalyst
73
+
74
+ catalyst = RagaAICatalyst(
75
+ access_key="YOUR_ACCESS_KEY",
76
+ secret_key="YOUR_SECRET_KEY",
77
+ base_url="BASE_URL"
78
+ )
79
+ ```
80
+ **Note**: Authentication with RagaAICatalyst is required before performing any of the operations below.
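+
+ The credentials above can also be supplied through environment variables, as noted; a minimal sketch follows, where the variable names are assumptions and should be checked against the documentation for your version:
+
+ ```python
+ import os
+
+ # Assumed variable names; verify them for your RagaAI Catalyst version
+ os.environ["RAGAAI_CATALYST_ACCESS_KEY"] = "YOUR_ACCESS_KEY"
+ os.environ["RAGAAI_CATALYST_SECRET_KEY"] = "YOUR_SECRET_KEY"
+ os.environ["RAGAAI_CATALYST_BASE_URL"] = "BASE_URL"
+
+ from ragaai_catalyst import RagaAICatalyst
+
+ catalyst = RagaAICatalyst()  # reads credentials from the environment, if supported by your version
+ ```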
81
+
82
+
83
+ ## Usage
84
+
85
+ ### Project Management
86
+
87
+ Create and manage projects using RagaAI Catalyst:
88
+
89
+ ```python
90
+ # Create a project
91
+ project = catalyst.create_project(
92
+ project_name="Test-RAG-App-1",
93
+ usecase="Chatbot"
94
+ )
95
+
96
+ # Get project usecases
97
+ catalyst.project_use_cases()
98
+
99
+ # List projects
100
+ projects = catalyst.list_projects()
101
+ print(projects)
102
+ ```
103
+
104
+ ### Dataset Management
105
+ Manage datasets efficiently for your projects:
106
+
107
+ ```py
108
+ from ragaai_catalyst import Dataset
109
+
110
+ # Initialize Dataset management for a specific project
111
+ dataset_manager = Dataset(project_name="project_name")
112
+
113
+ # List existing datasets
114
+ datasets = dataset_manager.list_datasets()
115
+ print("Existing Datasets:", datasets)
116
+
117
+ # Create a dataset from CSV
118
+ dataset_manager.create_from_csv(
119
+ csv_path='path/to/your.csv',
120
+ dataset_name='MyDataset',
121
+ schema_mapping={'column1': 'schema_element1', 'column2': 'schema_element2'}
122
+ )
123
+
124
+ # Get project schema mapping
125
+ dataset_manager.get_schema_mapping()
126
+
127
+ ```
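+
+ If your data is already in a pandas DataFrame, one option is to route it through the CSV path shown above; this is a sketch, and the column names and schema elements are placeholders that must match your project's schema:
+
+ ```py
+ import pandas as pd
+
+ # Placeholder rows; the mapped schema elements must exist in your project's schema
+ df = pd.DataFrame({
+     "column1": ["What is RagaAI Catalyst?"],
+     "column2": ["RagaAI Catalyst is a platform for managing LLM projects."],
+ })
+ df.to_csv("my_dataset.csv", index=False)
+
+ dataset_manager.create_from_csv(
+     csv_path="my_dataset.csv",
+     dataset_name="MyDataFrameDataset",
+     schema_mapping={"column1": "schema_element1", "column2": "schema_element2"},
+ )
+ ```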
128
+
129
+ For more detailed information on Dataset Management, including CSV schema handling and advanced usage, please refer to the [Dataset Management documentation](docs/dataset_management.md).
130
+
131
+
132
+ ### Evaluation
133
+
134
+ Create and manage metric evaluations of your RAG application:
135
+
136
+ ```python
137
+ from ragaai_catalyst import Evaluation
138
+
139
+ # Create an evaluation experiment
140
+ evaluation = Evaluation(
141
+ project_name="Test-RAG-App-1",
142
+ dataset_name="MyDataset",
143
+ )
144
+
145
+ # Get list of available metrics
146
+ evaluation.list_metrics()
147
+
148
+ # Define the schema mapping between dataset columns and metric inputs
149
+
150
+ schema_mapping={
151
+ 'Query': 'prompt',
152
+ 'response': 'response',
153
+ 'Context': 'context',
154
+ 'expectedResponse': 'expected_response'
155
+ }
156
+
157
+ # Add single metric
158
+ evaluation.add_metrics(
159
+ metrics=[
160
+ {"name": "Faithfulness", "config": {"model": "gpt-4o-mini", "provider": "openai", "threshold": {"gte": 0.232323}}, "column_name": "Faithfulness_v1", "schema_mapping": schema_mapping},
161
+
162
+ ]
163
+ )
164
+
165
+ # Add multiple metrics
166
+ evaluation.add_metrics(
167
+ metrics=[
168
+ {"name": "Faithfulness", "config": {"model": "gpt-4o-mini", "provider": "openai", "threshold": {"gte": 0.323}}, "column_name": "Faithfulness_gte", "schema_mapping": schema_mapping},
169
+ {"name": "Hallucination", "config": {"model": "gpt-4o-mini", "provider": "openai", "threshold": {"lte": 0.323}}, "column_name": "Hallucination_lte", "schema_mapping": schema_mapping},
170
+ {"name": "Hallucination", "config": {"model": "gpt-4o-mini", "provider": "openai", "threshold": {"eq": 0.323}}, "column_name": "Hallucination_eq", "schema_mapping": schema_mapping},
171
+ ]
172
+ )
173
+
174
+ # Get the status of the experiment
175
+ status = evaluation.get_status()
176
+ print("Experiment Status:", status)
177
+
178
+ # Get the results of the experiment
179
+ results = evaluation.get_results()
180
+ print("Experiment Results:", results)
181
+ ```
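+
+ Because results are fetched after checking a status, evaluation appears to run asynchronously; a simple polling loop like the sketch below can help, with the caveat that the exact status strings returned by `get_status()` are assumptions:
+
+ ```python
+ import time
+
+ # Poll until the evaluation job finishes (sketch); inspect the real values
+ # returned by get_status() in your environment and adjust the strings below.
+ while str(evaluation.get_status()).lower() not in ("completed", "failed"):
+     time.sleep(10)
+
+ results = evaluation.get_results()
+ print(results)
+ ```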
182
+
183
+
184
+
185
+ ### Trace Management
186
+
187
+ Record and analyze traces of your RAG application:
188
+
189
+ ```python
190
+ from ragaai_catalyst import Tracer
191
+
192
+ # Start a trace recording
193
+ tracer = Tracer(
194
+ project_name="Test-RAG-App-1",
195
+ dataset_name="tracer_dataset_name"
196
+ metadata={"key1": "value1", "key2": "value2"},
197
+ tracer_type="langchain",
198
+ pipeline={
199
+ "llm_model": "gpt-3.5-turbo",
200
+ "vector_store": "faiss",
201
+ "embed_model": "text-embedding-ada-002",
202
+ }
203
+ ).start()
204
+
205
+ # Your code here
206
+
207
+
208
+ # Stop the trace recording
209
+ tracer.stop()
210
+
211
+ # Get upload status
212
+ tracer.get_upload_status()
213
+ ```
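+
+ If the code between `start()` and `stop()` can raise, a `try/finally` wrapper keeps the trace from being left open; this is a sketch, and `run_my_rag_pipeline` is a placeholder for your own LangChain code:
+
+ ```python
+ tracer = Tracer(
+     project_name="Test-RAG-App-1",
+     dataset_name="tracer_dataset_name",
+     tracer_type="langchain",
+ ).start()
+
+ try:
+     run_my_rag_pipeline()  # placeholder: calls made here are traced while the tracer is active
+ finally:
+     tracer.stop()  # always close the trace, even if the pipeline raises
+
+ tracer.get_upload_status()
+ ```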
214
+
215
+
216
+ ### Prompt Management
217
+
218
+ Manage and use prompts efficiently in your projects:
219
+
220
+ ```py
221
+ from ragaai_catalyst import PromptManager
222
+
223
+ # Initialize PromptManager
224
+ prompt_manager = PromptManager(project_name="Test-RAG-App-1")
225
+
226
+ # List available prompts
227
+ prompts = prompt_manager.list_prompts()
228
+ print("Available prompts:", prompts)
229
+
230
+ # Get default prompt by prompt_name
231
+ prompt_name = "your_prompt_name"
232
+ prompt = prompt_manager.get_prompt(prompt_name)
233
+
234
+ # Get specific version of prompt by prompt_name and version
235
+ prompt_name = "your_prompt_name"
236
+ version = "v1"
237
+ prompt = prompt_manager.get_prompt(prompt_name, version)
238
+
239
+ # Get variables in a prompt
240
+ variables = prompt.get_variables()
241
+ print("variable:",variable)
242
+
243
+ # Get prompt content
244
+ prompt_content = prompt.get_prompt_content()
245
+ print("prompt_content:", prompt_content)
246
+
247
+ # Compile the prompt with variables
248
+ compiled_prompt = prompt.compile(query="What's the weather?", context="sunny", llm_response="It's sunny today")
249
+ print("Compiled prompt:", compiled_prompt)
250
+
251
+ # Use the compiled prompt with the OpenAI client
252
+ import openai
253
+ def get_openai_response(prompt):
254
+     client = openai.OpenAI()
255
+     response = client.chat.completions.create(
256
+         model="gpt-4o-mini",
257
+         messages=prompt
258
+     )
259
+     return response.choices[0].message.content
260
+ openai_response = get_openai_response(compiled_prompt)
261
+ print("openai_response:", openai_response)
262
+
263
+ # Use the compiled prompt with LiteLLM
264
+ import litellm
265
+ def get_litellm_response(prompt):
266
+     response = litellm.completion(
267
+         model="gpt-4o-mini",
268
+         messages=prompt
269
+     )
270
+     return response.choices[0].message.content
271
+ litellm_response = get_litellm_response(compiled_prompt)
272
+ print("litellm_response:", litellm_response)
273
+
274
+ ```
275
+ For more detailed information on Prompt Management, please refer to the [Prompt Management documentation](docs/prompt_management.md).
276
+
277
+
278
+ ### Synthetic Data Generation
279
+
280
+ ```py
281
+ from ragaai_catalyst import SyntheticDataGeneration
282
+
283
+ # Initialize Synthetic Data Generation
284
+ sdg = SyntheticDataGeneration()
285
+
286
+ # Process your file
287
+ text = sdg.process_document(input_data="file_path")
288
+
289
+ # Generate results
290
+ result = sdg.generate_qna(text, question_type='complex', model_config={"provider": "openai", "model": "openai/gpt-3.5-turbo"}, n=5)
291
+
292
+ print(result.head())
293
+
294
+ # Get supported Q&A types
295
+ sdg.get_supported_qna()
296
+
297
+ # Get supported providers
298
+ sdg.get_supported_providers()
299
+ ```
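+
+ Since the result is printed with `.head()` above, the sketch below assumes `generate_qna` returns a pandas DataFrame; the column names and schema elements are assumptions that must match your actual output and project schema:
+
+ ```py
+ # Persist the generated Q&A pairs and upload them as a Catalyst dataset (sketch)
+ result.to_csv("synthetic_qna.csv", index=False)
+
+ from ragaai_catalyst import Dataset
+
+ dataset_manager = Dataset(project_name="Test-RAG-App-1")
+ dataset_manager.create_from_csv(
+     csv_path="synthetic_qna.csv",
+     dataset_name="SyntheticQnA",
+     schema_mapping={"Question": "prompt", "Answer": "response"},  # assumed column names
+ )
+ ```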
300
+
301
+
302
+
303
+ ### Guardrail Management
304
+
305
+ ```py
306
+ from ragaai_catalyst import GuardrailsManager
307
+
308
+ # Initialize Guardrails Manager
309
+ gdm = GuardrailsManager(project_name="Test-RAG-App-1")
310
+
311
+ # Get list of Guardrails available
312
+ guardrails_list = gdm.list_guardrails()
313
+ print('guardrails_list:', guardrails_list)
314
+
315
+ # Get list of fail conditions for guardrails
316
+ fail_conditions = gdm.list_fail_condition()
317
+ print('fail_conditions:', fail_conditions)
318
+
319
+ # Get list of deployment IDs
320
+ deployment_list = gdm.list_deployment_ids()
321
+ print('deployment_list:', deployment_list)
322
+
323
+ # Get specific deployment id with guardrails information
324
+ deployment_id_detail = gdm.get_deployment(17)
325
+ print('deployment_id_detail:', deployment_id_detail)
326
+
327
+ # Add guardrails to a deployment ID
328
+ guardrails_config = {"guardrailFailConditions": ["FAIL"],
329
+ "deploymentFailCondition": "ALL_FAIL",
330
+ "alternateResponse": "Your alternate response"}
331
+
332
+ guardrails = [
333
+ {
334
+ "displayName": "Response_Evaluator",
335
+ "name": "Response Evaluator",
336
+ "config":{
337
+ "mappings": [{
338
+ "schemaName": "Text",
339
+ "variableName": "Response"
340
+ }],
341
+ "params": {
342
+ "isActive": {"value": False},
343
+ "isHighRisk": {"value": True},
344
+ "threshold": {"eq": 0},
345
+ "competitors": {"value": ["Google","Amazon"]}
346
+ }
347
+ }
348
+ },
349
+ {
350
+ "displayName": "Regex_Check",
351
+ "name": "Regex Check",
352
+ "config":{
353
+ "mappings": [{
354
+ "schemaName": "Text",
355
+ "variableName": "Response"
356
+ }],
357
+ "params":{
358
+ "isActive": {"value": False},
359
+ "isHighRisk": {"value": True},
360
+ "threshold": {"lt1": 1}
361
+ }
362
+ }
363
+ }
364
+ ]
365
+
366
+ # deployment_id is one of the IDs returned by list_deployment_ids() above
+ gdm.add_guardrails(deployment_id, guardrails, guardrails_config)
367
+
368
+
369
+ # Import GuardExecutor
370
+ from ragaai_catalyst import GuardExecutor
371
+
372
+ # Initialize GuardExecutor with the required parameters and evaluate
373
+ executor = GuardExecutor(deployment_id, gdm, field_map={'context': 'document'})
374
+
375
+
376
+ message = {'role': 'user',
377
+            'content': 'What is the capital of France'
378
+ }
379
+ prompt_params = {'document': 'France'}
380
+
381
+ model_params = {'temperature': 0.7, 'model': 'gpt-4o-mini'}
382
+ llm_caller = 'litellm'
383
+
384
+ executor([message], prompt_params, model_params, llm_caller)
385
+
386
+ ```
@@ -0,0 +1,29 @@
1
+ ragaai_catalyst/__init__.py,sha256=BdIJ_UUre0uEnRTsLw_hE0C0muWk6XWNZqdVOel22R4,537
2
+ ragaai_catalyst/_version.py,sha256=JKt9KaVNOMVeGs8ojO6LvIZr7ZkMzNN-gCcvryy4x8E,460
3
+ ragaai_catalyst/dataset.py,sha256=j_vu3Xkp_8qYW0J9Qnn53Uyh98MsugqYl5zxhOv9EOg,10731
4
+ ragaai_catalyst/evaluation.py,sha256=EZGISQH6vGB7vP5OmUBRXmEqkqZPIYN_eEUUz1jTrJ8,20285
5
+ ragaai_catalyst/experiment.py,sha256=8KvqgJg5JVnt9ghhGDJvdb4mN7ETBX_E5gNxBT0Nsn8,19010
6
+ ragaai_catalyst/guard_executor.py,sha256=llPbE3DyVtrybojXknzBZj8-dtUrGBQwi9-ZiPJxGRo,3762
7
+ ragaai_catalyst/guardrails_manager.py,sha256=DILMOAASK57FH9BLq_8yC1AQzRJ8McMFLwCXgYwNAd4,11904
8
+ ragaai_catalyst/internal_api_completion.py,sha256=51YwXcas5NviC1wjr8EX5Y6BOyTbJ4FlKHM8gE46Wtk,2916
9
+ ragaai_catalyst/prompt_manager.py,sha256=XIqf42x2aE3-7oMlJ7RtvmD8kuZof9Dtv7YNbM1Q5OA,16413
10
+ ragaai_catalyst/proxy_call.py,sha256=CHxldeceZUaLU-to_hs_Kf1z_b2vHMssLS_cOBedu78,5499
11
+ ragaai_catalyst/ragaai_catalyst.py,sha256=FdqMzwuQLqS2-3JJDsTQ8uh2itllOxfPrRUjb8Kwmn0,17428
12
+ ragaai_catalyst/synthetic_data_generation.py,sha256=957UYz58uX13i8vn24rzZief5FgtfOEnEH7S8VtXtVw,19157
13
+ ragaai_catalyst/utils.py,sha256=TlhEFwLyRU690HvANbyoRycR3nQ67lxVUQoUOfTPYQ0,3772
14
+ ragaai_catalyst/tracers/__init__.py,sha256=NppmJhD3sQ5R1q6teaZLS7rULj08Gb6JT8XiPRIe_B0,49
15
+ ragaai_catalyst/tracers/llamaindex_callback.py,sha256=vPE7MieKjfwLrLUnnPs20Df0xNYqoCCj-Mt2NbiuiKU,14023
16
+ ragaai_catalyst/tracers/tracer.py,sha256=Y7eGoUDU1tAF3adccfn1ukE38zMs38azUKfO7hB4Zto,11300
17
+ ragaai_catalyst/tracers/exporters/__init__.py,sha256=kVA8zp05h3phu4e-iHSlnznp_PzMRczB7LphSsZgUjg,138
18
+ ragaai_catalyst/tracers/exporters/file_span_exporter.py,sha256=RgGteu-NVGprXKkynvyIO5yOjpbtA41R3W_NzCjnkwE,6445
19
+ ragaai_catalyst/tracers/exporters/raga_exporter.py,sha256=rQ5Wj71f2Ke3qLlV8KiWCskbGBR-ia_hlzDx86rPrEo,18188
20
+ ragaai_catalyst/tracers/instrumentators/__init__.py,sha256=FgnMQupoRTzmVsG9YKsLQera2Pfs-AluZv8CxwavoyQ,253
21
+ ragaai_catalyst/tracers/instrumentators/langchain.py,sha256=yMN0qVF0pUVk6R5M1vJoUXezDo1ejs4klCFRlE8x4vE,574
22
+ ragaai_catalyst/tracers/instrumentators/llamaindex.py,sha256=SMrRlR4xM7k9HK43hakE8rkrWHxMlmtmWD-AX6TeByc,416
23
+ ragaai_catalyst/tracers/instrumentators/openai.py,sha256=14R4KW9wQCR1xysLfsP_nxS7cqXrTPoD8En4MBAaZUU,379
24
+ ragaai_catalyst/tracers/utils/__init__.py,sha256=KeMaZtYaTojilpLv65qH08QmpYclfpacDA0U3wg6Ybw,64
25
+ ragaai_catalyst/tracers/utils/utils.py,sha256=ViygfJ7vZ7U0CTSA1lbxVloHp4NSlmfDzBRNCJuMhis,2374
26
+ ragaai_catalyst-2.0.6.dist-info/METADATA,sha256=d4b67EVd5cLv2-CXUt85-yPi137O0hgulcPipk37VgA,11239
27
+ ragaai_catalyst-2.0.6.dist-info/WHEEL,sha256=R06PA3UVYHThwHvxuRWMqaGcr-PuniXahwjmQRFMEkY,91
28
+ ragaai_catalyst-2.0.6.dist-info/top_level.txt,sha256=HpgsdRgEJMk8nqrU6qdCYk3di7MJkDL0B19lkc7dLfM,16
29
+ ragaai_catalyst-2.0.6.dist-info/RECORD,,
@@ -1,5 +1,5 @@
1
1
  Wheel-Version: 1.0
2
- Generator: setuptools (75.1.0)
2
+ Generator: setuptools (75.5.0)
3
3
  Root-Is-Purelib: true
4
4
  Tag: py3-none-any
5
5
 
@@ -1,228 +0,0 @@
1
- Metadata-Version: 2.1
2
- Name: ragaai_catalyst
3
- Version: 2.0.5
4
- Summary: RAGA AI CATALYST
5
- Author-email: Kiran Scaria <kiran.scaria@raga.ai>, Kedar Gaikwad <kedar.gaikwad@raga.ai>, Dushyant Mahajan <dushyant.mahajan@raga.ai>, Siddhartha Kosti <siddhartha.kosti@raga.ai>, Ritika Goel <ritika.goel@raga.ai>, Vijay Chaurasia <vijay.chaurasia@raga.ai>
6
- Requires-Python: >=3.9
7
- Description-Content-Type: text/markdown
8
- Requires-Dist: aiohttp>=3.10.2
9
- Requires-Dist: opentelemetry-api==1.25.0
10
- Requires-Dist: opentelemetry-sdk==1.25.0
11
- Requires-Dist: opentelemetry-exporter-otlp-proto-grpc==1.25.0
12
- Requires-Dist: opentelemetry-instrumentation==0.46b0
13
- Requires-Dist: opentelemetry-instrumentation-fastapi==0.46b0
14
- Requires-Dist: opentelemetry-instrumentation-asgi==0.46b0
15
- Requires-Dist: opentelemetry-semantic-conventions==0.46b0
16
- Requires-Dist: opentelemetry-util-http==0.46b0
17
- Requires-Dist: opentelemetry-instrumentation-langchain~=0.24.0
18
- Requires-Dist: opentelemetry-instrumentation-openai~=0.24.0
19
- Requires-Dist: langchain-core>=0.2.11
20
- Requires-Dist: langchain>=0.2.11
21
- Requires-Dist: openai>=1.35.10
22
- Requires-Dist: pandas>=2.1.1
23
- Requires-Dist: groq>=0.11.0
24
- Requires-Dist: PyPDF2>=3.0.1
25
- Requires-Dist: google-generativeai>=0.8.2
26
- Requires-Dist: Markdown>=3.7
27
- Requires-Dist: tenacity==8.3.0
28
- Provides-Extra: dev
29
- Requires-Dist: pytest; extra == "dev"
30
- Requires-Dist: pytest-cov; extra == "dev"
31
- Requires-Dist: black; extra == "dev"
32
- Requires-Dist: isort; extra == "dev"
33
- Requires-Dist: mypy; extra == "dev"
34
- Requires-Dist: flake8; extra == "dev"
35
-
36
- # RagaAI Catalyst
37
-
38
- RagaAI Catalyst is a powerful tool for managing and optimizing LLM projects. It provides functionalities for project management, trace recording, and experiment management, allowing you to fine-tune and evaluate your LLM applications effectively.
39
-
40
- ## Table of Contents
41
-
42
- - [RagaAI Catalyst](#ragaai-catalyst)
43
- - [Table of Contents](#table-of-contents)
44
- - [Installation](#installation)
45
- - [Configuration](#configuration)
46
- - [Usage](#usage)
47
- - [Project Management](#project-management)
48
- - [Trace Management](#trace-management)
49
- - [Experiment Management](#experiment-management)
50
- - [Dataset Management](#dataset-management)
51
- - [Prompt Management](#prompt-management)
52
-
53
- ## Installation
54
-
55
- To install RagaAI Catalyst, you can use pip:
56
-
57
- ```bash
58
- pip install ragaai-catalyst
59
- ```
60
-
61
- ## Configuration
62
-
63
- Before using RagaAI Catalyst, you need to set up your credentials. You can do this by setting environment variables or passing them directly to the `RagaAICatalyst` class:
64
-
65
- ```python
66
- from ragaai_catalyst import RagaAICatalyst
67
-
68
- catalyst = RagaAICatalyst(
69
- access_key="YOUR_ACCESS_KEY",
70
- secret_key="YOUR_SECRET_KEY",
71
- base_url="BASE_URL"
72
- )
73
- ```
74
- **Note**: Authetication to RagaAICatalyst is necessary to perform any operations below
75
-
76
-
77
- ## Usage
78
-
79
- ### Project Management
80
-
81
- Create and manage projects using RagaAI Catalyst:
82
-
83
- ```python
84
- # Create a project
85
- project = catalyst.create_project(
86
- project_name="Test-RAG-App-1",
87
- description="Description of the project"
88
- )
89
-
90
- # List projects
91
- projects = catalyst.list_projects()
92
- print(projects)
93
- ```
94
-
95
- ### Trace Management
96
-
97
- Record and analyze traces of your RAG application:
98
-
99
- ```python
100
- from ragaai_catalyst import Tracer
101
-
102
- # Start a trace recording
103
- tracer = Tracer(
104
- project_name="Test-RAG-App-1",
105
- metadata={"key1": "value1", "key2": "value2"},
106
- tracer_type="langchain",
107
- pipeline={
108
- "llm_model": "gpt-3.5-turbo",
109
- "vector_store": "faiss",
110
- "embed_model": "text-embedding-ada-002",
111
- }
112
- ).start()
113
-
114
- # Your code here
115
-
116
- # Stop the trace recording
117
- tracer.stop()
118
-
119
- # Alternatively, use a context manager
120
- with tracer.trace():
121
- # Your code here
122
- ```
123
-
124
- ### Experiment Management
125
-
126
- Create and manage experiments to evaluate your RAG application:
127
-
128
- ```python
129
- from ragaai_catalyst import Experiment
130
-
131
- # Create an experiment
132
- experiment_manager = Experiment(
133
- project_name="Test-RAG-App-1",
134
- experiment_name="Exp-01",
135
- experiment_description="Experiment Description",
136
- dataset_name="Dataset Created from UI",
137
- )
138
-
139
- # Add metrics to the experiment
140
- experiment_manager.add_metrics(
141
- metrics=[
142
- {"name": "hallucination", "config": {"model": "gpt-4o", "provider":"OpenAI"}}
143
- ]
144
- )
145
-
146
- # Add multiple metrics
147
- experiment_manager.add_metrics(
148
- metrics=[
149
- {"name": "hallucination", "config": {"model": "gpt-4o", "provider":"OpenAI"}},
150
- {"name": "hallucination", "config": {"model": "gpt-4", "provider":"OpenAI"}},
151
- {"name": "hallucination", "config": {"model": "gpt-3.5-turbo", "provider":"OpenAI"}}
152
- ]
153
- )
154
-
155
- # Get the status of the experiment
156
- status = experiment_manager.get_status()
157
- print("Experiment Status:", status)
158
-
159
- # Get the results of the experiment
160
- results = experiment_manager.get_results()
161
- print("Experiment Results:", results)
162
- ```
163
-
164
-
165
-
166
- ## Dataset Management
167
- Manage datasets efficiently for your projects:
168
-
169
- ```py
170
- from ragaai_catalyst import Dataset
171
-
172
- # Initialize Dataset management for a specific project
173
- dataset_manager = Dataset(project_name="project_name")
174
-
175
- # List existing datasets
176
- datasets = dataset_manager.list_datasets()
177
- print("Existing Datasets:", datasets)
178
-
179
- # Create a dataset from trace
180
- dataset_manager.create_from_trace(
181
- dataset_name='Test-dataset-1',
182
- filter_list=[
183
- {"name": "llm_model", "values": ["gpt-3.5-turbo", "gpt-4"]},
184
- {"name": "prompt_length", "lte": 27, "gte": 23}
185
- ]
186
- )
187
-
188
- # Create a dataset from CSV
189
- dataset_manager.create_from_csv(
190
- csv_path='path/to/your.csv',
191
- dataset_name='MyDataset',
192
- schema_mapping={'column1': 'schema_element1', 'column2': 'schema_element2'}
193
- )
194
- ```
195
-
196
- For more detailed information on Dataset Management, including CSV schema handling and advanced usage, please refer to the [Dataset Management documentation](docs/dataset_management.md).
197
-
198
- ## Prompt Management
199
-
200
- Manage and use prompts efficiently in your projects:
201
-
202
- ```py
203
- from ragaai_catalyst.prompt_manager import PromptManager
204
-
205
- # Initialize PromptManager
206
- prompt_manager = PromptManager("your-project-name")
207
-
208
- # List available prompts
209
- prompts = prompt_manager.list_prompts()
210
- print("Available prompts:", prompts)
211
-
212
- # Get a specific prompt
213
- prompt_name = "your_prompt_name"
214
- prompt = prompt_manager.get_prompt(prompt_name)
215
-
216
- # Compile a prompt with variables
217
- compiled_prompt = prompt.compile(query="What's the weather?", context="sunny", llm_response="It's sunny today")
218
- print("Compiled prompt:", compiled_prompt)
219
-
220
- # Get prompt parameters
221
- parameters = prompt.get_parameters()
222
- print("Prompt parameters:", parameters)
223
- ```
224
-
225
- For more detailed information on Prompt Management, please refer to the [Prompt Management documentation](docs/prompt_management.md).
226
-
227
-
228
-
@@ -1,25 +0,0 @@
1
- ragaai_catalyst/__init__.py,sha256=T0-X4yfIAe26-tWx6kLwNkKIjaFoQL2aNLIRp5wBG5w,424
2
- ragaai_catalyst/_version.py,sha256=JKt9KaVNOMVeGs8ojO6LvIZr7ZkMzNN-gCcvryy4x8E,460
3
- ragaai_catalyst/dataset.py,sha256=XjI06Exs6-64pQPQlky4mtcUllNMCgKP-bnM_t9EWkY,10920
4
- ragaai_catalyst/evaluation.py,sha256=PR7rMkvZ4km26B24sSc60GPNS0JkrUMIYo5CPEqX2Qw,19315
5
- ragaai_catalyst/experiment.py,sha256=8KvqgJg5JVnt9ghhGDJvdb4mN7ETBX_E5gNxBT0Nsn8,19010
6
- ragaai_catalyst/prompt_manager.py,sha256=ZMIHrmsnPMq20YfeNxWXLtrxnJyMcxpeJ8Uya7S5dUA,16411
7
- ragaai_catalyst/proxy_call.py,sha256=nlMdJCSW73sfN0fMbCbtIk6W992Nac5FJvcfNd6UDJk,5497
8
- ragaai_catalyst/ragaai_catalyst.py,sha256=5Q1VCE7P33DtjaOtVGRUgBL8dpDL9kjisWGIkOyX4nE,17426
9
- ragaai_catalyst/synthetic_data_generation.py,sha256=STpZF-a1mYT3GR4CGdDvhBdctf2ciSLyvDANqJxnQp8,12989
10
- ragaai_catalyst/utils.py,sha256=TlhEFwLyRU690HvANbyoRycR3nQ67lxVUQoUOfTPYQ0,3772
11
- ragaai_catalyst/tracers/__init__.py,sha256=NppmJhD3sQ5R1q6teaZLS7rULj08Gb6JT8XiPRIe_B0,49
12
- ragaai_catalyst/tracers/tracer.py,sha256=eaGJdLEIjadHpbWBXBl5AhMa2vL97SVjik4U1L8gros,9591
13
- ragaai_catalyst/tracers/exporters/__init__.py,sha256=kVA8zp05h3phu4e-iHSlnznp_PzMRczB7LphSsZgUjg,138
14
- ragaai_catalyst/tracers/exporters/file_span_exporter.py,sha256=RgGteu-NVGprXKkynvyIO5yOjpbtA41R3W_NzCjnkwE,6445
15
- ragaai_catalyst/tracers/exporters/raga_exporter.py,sha256=rQ5Wj71f2Ke3qLlV8KiWCskbGBR-ia_hlzDx86rPrEo,18188
16
- ragaai_catalyst/tracers/instrumentators/__init__.py,sha256=FgnMQupoRTzmVsG9YKsLQera2Pfs-AluZv8CxwavoyQ,253
17
- ragaai_catalyst/tracers/instrumentators/langchain.py,sha256=yMN0qVF0pUVk6R5M1vJoUXezDo1ejs4klCFRlE8x4vE,574
18
- ragaai_catalyst/tracers/instrumentators/llamaindex.py,sha256=SMrRlR4xM7k9HK43hakE8rkrWHxMlmtmWD-AX6TeByc,416
19
- ragaai_catalyst/tracers/instrumentators/openai.py,sha256=14R4KW9wQCR1xysLfsP_nxS7cqXrTPoD8En4MBAaZUU,379
20
- ragaai_catalyst/tracers/utils/__init__.py,sha256=KeMaZtYaTojilpLv65qH08QmpYclfpacDA0U3wg6Ybw,64
21
- ragaai_catalyst/tracers/utils/utils.py,sha256=ViygfJ7vZ7U0CTSA1lbxVloHp4NSlmfDzBRNCJuMhis,2374
22
- ragaai_catalyst-2.0.5.dist-info/METADATA,sha256=tWppjo0sERHjjugIOAWdwD1p05HO6T6N_E1KYd9G9hY,6625
23
- ragaai_catalyst-2.0.5.dist-info/WHEEL,sha256=GV9aMThwP_4oNCtvEC2ec3qUYutgWeAzklro_0m4WJQ,91
24
- ragaai_catalyst-2.0.5.dist-info/top_level.txt,sha256=HpgsdRgEJMk8nqrU6qdCYk3di7MJkDL0B19lkc7dLfM,16
25
- ragaai_catalyst-2.0.5.dist-info/RECORD,,