auditi 0.1.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Metadata-Version: 2.4
Name: auditi
Version: 0.1.0
Summary: Trace, monitor, and evaluate AI agents and LLM applications with simple decorators
Project-URL: Repository, https://github.com/deduu/auditi
Project-URL: Issues, https://github.com/deduu/auditi/issues
Author: Dedy Ariansyah
License: MIT
License-File: LICENSE
Keywords: agents,ai,anthropic,claude,evaluation,gemini,gpt,langchain,llm,monitoring,observability,openai,tracing
Classifier: Development Status :: 3 - Alpha
Classifier: Framework :: AsyncIO
Classifier: Framework :: FastAPI
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.9
Requires-Dist: httpx>=0.25.0
Requires-Dist: pydantic>=2.0.0
Provides-Extra: dev
Requires-Dist: black>=23.0.0; extra == 'dev'
Requires-Dist: mypy>=1.0.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23.0; extra == 'dev'
Requires-Dist: pytest-cov>=4.0.0; extra == 'dev'
Requires-Dist: pytest>=7.0.0; extra == 'dev'
Requires-Dist: ruff>=0.1.0; extra == 'dev'
Description-Content-Type: text/markdown

# Auditi Python SDK

Official Python SDK for [Auditi](https://auditi.dev), an AI/LLM evaluation and monitoring platform.

**Trace, monitor, and evaluate your AI agents with minimal code changes.**

## Features

- 🎯 **Simple Decorators** - Add tracing with just `@trace_agent`, `@trace_tool`, and `@trace_llm`
- 🔄 **Async & Sync Support** - Works with both synchronous and asynchronous functions
- 🤖 **Multi-Provider** - Auto-detects OpenAI, Anthropic, and Google Gemini models
- 💰 **Cost Tracking** - Automatic token usage and cost calculation
- 🔍 **Standalone Traces** - Simple LLM calls don't need agent wrappers
- 🛠️ **Custom Evaluators** - Implement your own evaluation logic
- 🚀 **Production Ready** - FastAPI, LangChain, and framework integrations

## Installation

```bash
pip install auditi
```

Or install from source:

```bash
git clone https://github.com/deduu/auditi
cd auditi
pip install -e .
```

## Quick Start

### 1. Initialize the SDK

```python
import auditi

# For production
auditi.init(api_key="your-api-key", base_url="https://api.auditi.dev")

# For local development (prints to console)
auditi.init()  # Uses localhost:8000 by default
```

### 2. Trace Your Agent

```python
from typing import Optional

from auditi import trace_agent, trace_tool, trace_llm
import openai

@trace_agent(name="customer_support")
def customer_support_agent(user_message: str, user_id: Optional[str] = None):
    """Your existing agent - just add the decorator!"""

    # Fetch user context
    context = get_user_context(user_id)

    # Search knowledge base
    docs = search_knowledge_base(user_message)

    # Generate response
    response = call_openai(user_message, context, docs)

    return response


@trace_tool(name="search_kb")
def search_knowledge_base(query: str):
    """Tool calls are automatically captured as spans."""
    results = vector_db.similarity_search(query, k=5)
    return results


@trace_llm(model="gpt-4o")
def call_openai(message: str, context: dict, docs: list):
    """LLM calls capture usage metrics and costs."""
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Context: {context}"},
            {"role": "user", "content": message}
        ]
    )
    return response.choices[0].message.content
```

### 3. View Traces in the Auditi Dashboard

That's it! Every call to `customer_support_agent()` will:
- ✅ Capture user input and assistant output
- ✅ Track all tool calls and LLM calls as spans
- ✅ Calculate token usage and costs
- ✅ Send to Auditi for evaluation and monitoring

## Usage Patterns

### Pattern 1: Simple LLM Calls (Standalone)

For simple chatbots or single LLM calls, you don't need `@trace_agent`:

```python
@trace_llm(standalone=True)
def simple_chat(prompt: str):
    """Creates its own trace automatically - no agent wrapper needed!"""
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# This creates a complete trace automatically
result = simple_chat("What is the capital of France?")
```

### Pattern 2: Complex Agents with Tools

For multi-step agentic workflows:

```python
@trace_agent(name="research_assistant")
def research_assistant(query: str, user_id: str):
    """Main agent creates ONE trace that captures all spans."""

    # Step 1: Web search (creates a tool span)
    search_results = web_search(query)

    # Step 2: Generate initial response (creates an LLM span)
    initial_response = generate_response(query, search_results)

    # Step 3: Reflect on quality (creates another LLM span)
    quality_score = evaluate_response(initial_response)

    # Step 4: Refine if needed (creates another LLM span)
    if quality_score < 0.7:
        final_response = refine_response(initial_response, search_results)
    else:
        final_response = initial_response

    return final_response


@trace_tool("web_search")
def web_search(query: str):
    # Search implementation
    return results


@trace_llm(model="gpt-4o")
def generate_response(query: str, context: list):
    # LLM call
    return response


@trace_llm(model="gpt-4o")
def evaluate_response(text: str):
    # Another LLM call for reflection
    return score
```

### Pattern 3: Embeddings and Retrieval

Embedding and retrieval operations are standalone by default:

```python
from auditi import trace_embedding, trace_retrieval

@trace_embedding()
def embed_text(text: str):
    """Creates a standalone trace for embedding."""
    response = openai.embeddings.create(
        input=text,
        model="text-embedding-3-small"
    )
    return response.data[0].embedding


@trace_retrieval("vector_search")
def search_docs(query: str):
    """Creates a standalone trace for retrieval."""
    embedding = embed_text(query)
    results = vector_db.similarity_search(embedding, k=5)
    return results
```

### Pattern 4: RAG Pipeline

Combining all patterns in a full RAG workflow:

```python
@trace_agent(name="rag_assistant")
def rag_query(question: str):
    """Full RAG pipeline - all steps captured as spans."""

    # Embedding step (creates span)
    query_embedding = embed_query(question)

    # Retrieval step (creates span)
    docs = retrieve_docs(query_embedding)

    # LLM step (creates span)
    answer = generate_answer(question, docs)

    return answer


@trace_embedding()
def embed_query(text: str):
    # Embedding logic
    return embedding


@trace_retrieval("doc_search")
def retrieve_docs(embedding: list):
    # Vector search logic
    return documents


@trace_llm(model="gpt-4o")
def generate_answer(question: str, context: list):
    # LLM generation
    return answer
```

## Integration Examples

### FastAPI Integration

```python
from fastapi import FastAPI
from auditi import trace_agent, trace_tool, trace_llm

app = FastAPI()

@app.post("/chat")
async def chat_endpoint(message: str, user_id: str):
    response = await process_chat(message, user_id)
    return {"response": response}


@trace_agent(name="chat_agent")
async def process_chat(message: str, user_id: str):
    """Async agent - fully supported!"""
    context = await fetch_user_context(user_id)
    kb_results = await search_knowledge_base(message)
    response = await call_llm(message, context, kb_results)
    return response


@trace_tool("fetch_context")
async def fetch_user_context(user_id: str):
    # Async tool call
    return context


@trace_llm(model="gpt-4o")
async def call_llm(message: str, context: dict, docs: list):
    # Async LLM call
    return response
```

### LangChain Integration

```python
from langchain.agents import AgentExecutor, create_openai_functions_agent
from auditi import trace_agent, trace_tool

@trace_agent(name="langchain_agent")
def run_langchain_agent(query: str):
    """Wrap your LangChain execution."""
    agent_executor = create_agent()
    result = agent_executor.invoke({"input": query})
    return result["output"]


@trace_tool("vector_search")
def vector_search(query: str):
    """Individual tools can be traced too."""
    return vectorstore.similarity_search(query)
```

## Custom Evaluators

Implement custom evaluation logic to assess trace quality:

```python
from auditi import BaseEvaluator, EvaluationResult, TraceInput

class ResponseQualityEvaluator(BaseEvaluator):
    def evaluate(self, trace: TraceInput) -> EvaluationResult:
        """Evaluate response quality based on custom criteria."""

        # Access trace data
        user_input = trace.user_input
        assistant_output = trace.assistant_output
        spans = trace.spans

        # Your evaluation logic
        score = self._calculate_quality_score(assistant_output)

        # Return evaluation result
        if score >= 0.8:
            status = "pass"
            reason = "High quality response"
        elif score >= 0.6:
            status = "pass"
            reason = "Acceptable quality"
        else:
            status = "fail"
            reason = "Low quality response - needs improvement"

        return EvaluationResult(
            status=status,
            score=score,
            reason=reason
        )

    def _calculate_quality_score(self, text: str) -> float:
        # Your scoring logic here
        return 0.85


# Use with trace_agent
@trace_agent(name="assistant", evaluator=ResponseQualityEvaluator())
def my_agent(message: str):
    response = generate_response(message)
    return response
```

## Multi-Provider Support

Auditi automatically detects and handles multiple LLM providers:

```python
# OpenAI
@trace_llm(model="gpt-4o")
def call_openai(prompt: str):
    response = openai.chat.completions.create(...)
    return response  # Auto-extracts usage from response.usage


# Anthropic Claude
@trace_llm(model="claude-sonnet-4-5-20250929")
def call_anthropic(prompt: str):
    response = anthropic.messages.create(...)
    return response  # Auto-extracts from response.usage


# Google Gemini
@trace_llm(model="gemini-2.0-flash-exp")
def call_google(prompt: str):
    response = genai.generate_content(...)
    return response  # Auto-extracts from response.usage_metadata
```

**Supported Providers:**
- ✅ OpenAI (GPT-4, GPT-4o, GPT-3.5, etc.)
- ✅ Anthropic (Claude 3.5, Claude 3, Claude 2)
- ✅ Google (Gemini Pro, Gemini Flash)
- ✅ Auto-detection from model names and response structures
- ✅ Automatic cost calculation with up-to-date pricing

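To build intuition, model-name-based detection and token-based cost calculation can be sketched as below. This is an illustrative stand-in, not Auditi's actual registry; the prefixes and per-token prices are placeholder values, not the SDK's pricing table.

```python
# Placeholder prefix -> provider map (NOT the SDK's real registry)
PROVIDER_PREFIXES = {
    "gpt": "openai",
    "o1": "openai",
    "claude": "anthropic",
    "gemini": "google",
}

# Hypothetical USD prices per 1M tokens: (input, output)
PRICES_PER_MILLION = {
    "gpt-4o": (2.50, 10.00),
}

def detect_provider(model: str) -> str:
    """Guess the provider from the model name's prefix."""
    for prefix, provider in PROVIDER_PREFIXES.items():
        if model.lower().startswith(prefix):
            return provider
    return "unknown"

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost = tokens * per-token price, summed over input and output."""
    input_price, output_price = PRICES_PER_MILLION.get(model, (0.0, 0.0))
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

print(detect_provider("claude-sonnet-4-5-20250929"))  # anthropic
print(round(estimate_cost("gpt-4o", 1000, 500), 6))   # 0.0075
```

A real implementation would also fall back to inspecting the response object's structure when the model name is ambiguous, as the auto-detection bullet above notes.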
## Configuration

### Environment Variables

```bash
# Enable debug logging
export AUDITI_DEBUG=true

# Set API key
export AUDITI_API_KEY=your-api-key

# Set base URL
export AUDITI_BASE_URL=https://api.auditi.dev
```

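One way to resolve these variables is shown below. The variable names match the ones above; the localhost default mirrors the Quick Start note, but the exact precedence (explicit argument over environment over default) is an assumption, not the SDK's documented behavior.

```python
import os

def resolve_config(api_key=None, base_url=None):
    """Explicit arguments win; otherwise fall back to env vars, then defaults."""
    return {
        "api_key": api_key or os.environ.get("AUDITI_API_KEY"),
        "base_url": base_url or os.environ.get("AUDITI_BASE_URL", "http://localhost:8000"),
        "debug": os.environ.get("AUDITI_DEBUG", "").lower() in ("1", "true", "yes"),
    }

os.environ["AUDITI_API_KEY"] = "demo-key"
os.environ["AUDITI_DEBUG"] = "true"
config = resolve_config()
print(config["api_key"], config["debug"])  # demo-key True
```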
### Programmatic Configuration

```python
import auditi

# Production setup
auditi.init(
    api_key="your-api-key",
    base_url="https://api.auditi.dev"
)

# Development setup (prints traces to console)
from auditi.transport import DebugTransport

auditi.init(
    transport=DebugTransport()  # Prints to console instead of sending
)
```

## API Reference

### Decorators

#### `@trace_agent(name=None, user_id=None, evaluator=None)`

Trace a top-level agent function. Creates a complete trace with user input, assistant output, and all spans.

**Parameters:**
- `name` (str, optional): Custom name for the agent
- `user_id` (str, optional): User identifier
- `evaluator` (BaseEvaluator, optional): Custom evaluator instance

**Returns:** The decorated function's return value becomes `assistant_output`

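For intuition about what a tracing decorator like this does under the hood, here is a deliberately simplified stand-in. It is not auditi's implementation (the real one also handles async functions, spans, and transport); it only shows the general wrap-record-return shape of the technique.

```python
import functools
import time

def trace_calls(name=None):
    """Toy tracing decorator: records inputs, output, and timing for sync calls."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            record = {
                "name": name or func.__name__,
                "inputs": {"args": args, "kwargs": kwargs},
            }
            start = time.perf_counter()
            record["output"] = func(*args, **kwargs)
            record["processing_time"] = time.perf_counter() - start
            wrapper.last_trace = record  # a real SDK would send this to a backend
            return record["output"]
        return wrapper
    return decorator

@trace_calls(name="greeter")
def greet(who: str) -> str:
    return f"Hello, {who}!"

print(greet("world"))            # Hello, world!
print(greet.last_trace["name"])  # greeter
```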
#### `@trace_tool(name=None, standalone=False)`

Trace a tool/function call within an agent.

**Parameters:**
- `name` (str, optional): Custom name for the tool
- `standalone` (bool): If True, creates a standalone trace when not inside `@trace_agent`

#### `@trace_llm(name=None, model=None, standalone=False)`

Trace an LLM call within an agent.

**Parameters:**
- `name` (str, optional): Custom name for the LLM call
- `model` (str, optional): Model name (auto-detected from the response if not provided)
- `standalone` (bool): If True, creates a standalone trace when not inside `@trace_agent`

#### `@trace_embedding(name=None, model=None)`

Trace an embedding operation. Always creates a standalone trace when not inside `@trace_agent`.

**Parameters:**
- `name` (str, optional): Custom name for the embedding operation
- `model` (str, optional): Model name (auto-detected if not provided)

#### `@trace_retrieval(name=None)`

Trace a retrieval/search operation. Always creates a standalone trace when not inside `@trace_agent`.

**Parameters:**
- `name` (str, optional): Custom name for the retrieval operation

### Types

#### `TraceInput`

Complete trace data model.

**Fields:**
- `trace_id` (str): Unique trace identifier
- `name` (str): Agent name
- `user_input` (str): User's message
- `assistant_output` (str): Agent's response
- `user_id` (str, optional): User identifier
- `conversation_id` (str, optional): Conversation/session identifier
- `spans` (List[SpanInput]): List of spans (tools, LLM calls)
- `input_tokens` (int): Total input tokens
- `output_tokens` (int): Total output tokens
- `total_tokens` (int): Total tokens
- `cost` (float): Total cost in USD
- `processing_time` (float): Total processing time in seconds
- `metadata` (dict, optional): Additional metadata
- `timestamp` (datetime): Trace timestamp

#### `SpanInput`

Individual span within a trace.

**Fields:**
- `span_id` (str): Unique span identifier
- `name` (str): Span name
- `span_type` (str): Type: "tool", "llm", "embedding", or "retrieval"
- `inputs` (dict): Input parameters
- `outputs` (Any): Output value
- `input_tokens` (int, optional): Input tokens (for LLM spans)
- `output_tokens` (int, optional): Output tokens (for LLM spans)
- `total_tokens` (int, optional): Total tokens
- `cost` (float, optional): Cost in USD
- `model` (str, optional): Model name
- `processing_time` (float, optional): Processing time in seconds
- `timestamp` (datetime): Span timestamp

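The trace-level token and cost fields are naturally the roll-up of the span-level ones. The sketch below illustrates that aggregation with simplified dataclass stand-ins for `SpanInput`/`TraceInput`; it is not the SDK's Pydantic models, just the arithmetic relationship between the two field sets.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Span:
    """Simplified stand-in for SpanInput: only the usage-related fields."""
    name: str
    span_type: str
    input_tokens: Optional[int] = None
    output_tokens: Optional[int] = None
    cost: Optional[float] = None

def aggregate(spans):
    """Roll span usage up into trace-level totals (None counts as zero)."""
    input_tokens = sum(s.input_tokens or 0 for s in spans)
    output_tokens = sum(s.output_tokens or 0 for s in spans)
    return {
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
        "total_tokens": input_tokens + output_tokens,
        "cost": sum(s.cost or 0.0 for s in spans),
    }

spans = [
    Span("search_kb", "tool"),  # tool spans may carry no token usage
    Span("call_openai", "llm", input_tokens=900, output_tokens=150, cost=0.004),
    Span("refine", "llm", input_tokens=400, output_tokens=100, cost=0.002),
]
print(aggregate(spans))
```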
#### `EvaluationResult`

Evaluation result data.

**Fields:**
- `status` (str): "pass" or "fail"
- `score` (float, optional): Evaluation score
- `reason` (str, optional): Explanation
- `metadata` (dict, optional): Additional evaluation data

### Transport

#### `SyncHttpTransport(api_key, base_url)`

Default synchronous HTTP transport.

**Parameters:**
- `api_key` (str): API key for authentication
- `base_url` (str): Base URL of the Auditi API

#### `DebugTransport()`

Debug transport that prints traces to the console. Useful for local development.

### Context Management

```python
from auditi.context import (
    get_current_trace,
    set_current_trace,
    get_current_span,
    set_context,
    get_context
)

# Get current trace (if inside @trace_agent)
trace = get_current_trace()

# Get current span (if inside @trace_tool/@trace_llm)
span = get_current_span()

# Set global context (available across all traces)
set_context({"environment": "production", "version": "1.0"})
```

## Development

### Setup

```bash
# Clone the repository
git clone https://github.com/deduu/auditi
cd auditi

# Install dev dependencies
pip install -e ".[dev]"
```

### Running Tests

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=auditi --cov-report=html

# Run a specific test file
pytest tests/test_decorators.py -v

# Run async tests
pytest tests/test_decorators.py -k async
```

### Code Quality

```bash
# Format code
black auditi/

# Lint code
ruff check auditi/

# Type check
mypy auditi/
```

### Project Structure

```
auditi/
├── __init__.py       # Package initialization
├── client.py         # SDK client and initialization
├── context.py        # Context management for traces/spans
├── decorators.py     # Core decorators (@trace_agent, etc.)
├── evaluator.py      # Base evaluator class
├── events.py         # Event types for streaming
├── transport.py      # Transport layer (HTTP, Debug)
├── providers/        # LLM provider abstractions
│   ├── __init__.py
│   ├── base.py       # Base provider interface
│   ├── openai.py     # OpenAI provider
│   ├── anthropic.py  # Anthropic provider
│   ├── google.py     # Google provider
│   └── registry.py   # Provider auto-detection
└── types/
    ├── __init__.py
    └── api_types.py  # Pydantic models for API types
```

## Examples

The `examples/` directory contains complete working examples:

- `01_basic_integration.py` - Simple chatbot integration
- `02_fastapi_integration.py` - Production FastAPI integration
- `03_langchain_integration.py` - LangChain agent integration
- `04_simple_llm_traces.py` - Standalone LLM call tracing
- `05_embedding_traces.py` - Embedding and retrieval tracing

Run any example:

```bash
# Enable debug output
export AUDITI_DEBUG=true

# Run example
python examples/01_basic_integration.py
```

## Troubleshooting

### Traces Not Appearing

1. **Check initialization:**
   ```python
   import auditi
   auditi.init(api_key="your-key", base_url="https://api.auditi.dev")
   ```

2. **Enable debug logging:**
   ```bash
   export AUDITI_DEBUG=true
   python your_script.py
   ```

3. **Verify decorator order:**
   - `@trace_agent` should be the outermost decorator
   - `@trace_tool` and `@trace_llm` should be applied to functions called by the agent

### Missing Usage Metrics

Make sure your LLM call returns the full response object:

```python
@trace_llm(model="gpt-4o")
def call_openai(prompt: str):
    response = openai.chat.completions.create(...)
    return response  # ✅ Return the full response, not just .choices[0].message.content
```

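The reason the full response matters: usage is read off the returned object, and a bare string carries no usage data. A provider-agnostic extraction helper might look like the sketch below. The attribute names mirror the provider examples above (`response.usage` for OpenAI/Anthropic, `response.usage_metadata` for Gemini), but the helper itself is illustrative, not the SDK's code.

```python
from types import SimpleNamespace

def extract_usage(response):
    """Duck-typed usage extraction across provider response shapes."""
    usage = getattr(response, "usage", None)  # OpenAI / Anthropic style
    if usage is not None:
        input_tokens = getattr(usage, "prompt_tokens", None) or getattr(usage, "input_tokens", 0)
        output_tokens = getattr(usage, "completion_tokens", None) or getattr(usage, "output_tokens", 0)
        return {"input_tokens": input_tokens, "output_tokens": output_tokens}
    meta = getattr(response, "usage_metadata", None)  # Gemini style
    if meta is not None:
        return {
            "input_tokens": getattr(meta, "prompt_token_count", 0),
            "output_tokens": getattr(meta, "candidates_token_count", 0),
        }
    return None  # a bare string return carries no usage data

openai_like = SimpleNamespace(usage=SimpleNamespace(prompt_tokens=12, completion_tokens=34))
print(extract_usage(openai_like))  # {'input_tokens': 12, 'output_tokens': 34}
print(extract_usage("just text"))  # None
```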
### Async Functions Not Working

Both sync and async functions are supported. Make sure to use `await`:

```python
@trace_agent(name="async_agent")
async def my_agent(message: str):
    result = await async_llm_call(message)
    return result
```

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## License

MIT License - see the [LICENSE](LICENSE) file for details.

## Links

- GitHub: [https://github.com/deduu/auditi](https://github.com/deduu/auditi)

auditi/__init__.py,sha256=UFAo0c3uO1_extt9yazYUS6XGwBeswD-Bb9utjT3zDw,1258
auditi/client.py,sha256=2prRkKde4Hm1usyGJlA2nUvqmL_-fxhq4-9g5nO98m4,2179
auditi/context.py,sha256=ooDa1497Nax0EwwydarK4aK34Pi8nlMa1RW_CegVUKU,2050
auditi/decorators.py,sha256=ZeQdQDFgDJ-LWG8tLjAnBTWTy7nKOLjH2-WjSmi3GuE,58291
auditi/evaluator.py,sha256=TC20LDMlOeaorvfvZI9VJ75JLhvKqIDEXFcToyJmBpk,1291
auditi/events.py,sha256=w4UAG0AoqrVAmdrx5LIKzOsYB_Hpojb6A-hY_Xv7EGs,6478
auditi/transport.py,sha256=KT3YSYz2sizG4aWua9H6h49n1WN8ht9Tk4wqlDJj25Y,2448
auditi/providers/__init__.py,sha256=LRnpvFvvNOv81sXp8MLZnGOFJwZfoqKJxnfGNfEsrdg,1447
auditi/providers/anthropic.py,sha256=tdfSMVPLDJ4tYC0eqWb9atUpYPDXpTjWkiJRN11XBQM,4868
auditi/providers/base.py,sha256=1-q9Y7jAivErjB98jEbMSxHhWjziztWZJCjEFltDxt8,4478
auditi/providers/google.py,sha256=faYZNlP-guERXFLvaBw48ZYIt8mzupwf75p0--_v3dQ,6917
auditi/providers/openai.py,sha256=RNVQ--blF5YRD2paBkOylAWSC67BcSOvbOYTJS16RE8,5075
auditi/providers/registry.py,sha256=hSvhqvMTlFt0JjV7rBu5J-UxsvYZflxPyqddhude5JI,5001
auditi/types/__init__.py,sha256=8o0vlEn6jTxGgL7dhEGXAMNw1NUX0gVrqyp6YcKGTT4,228
auditi/types/api_types.py,sha256=Qu57YnfFffrKJwjfGeYib-_Hbd4DKxvf-D1CKmGFE2g,2737
auditi-0.1.0.dist-info/METADATA,sha256=xJIDtvB1s5vZpGScHnlH093OznckDRmB_OYH3MQhW0w,18911
auditi-0.1.0.dist-info/WHEEL,sha256=WLgqFyCfm_KASv4WHyYy0P3pM_m7J5L9k2skdKLirC8,87
auditi-0.1.0.dist-info/licenses/LICENSE,sha256=vYb7Htb7bfnI-Fo1U3rMUQVD-UvJd6MWV_oMXE4WeQE,1084
auditi-0.1.0.dist-info/RECORD,,