agentreplay 0.1.2__tar.gz → 0.4.2__tar.gz
- agentreplay-0.4.2/PKG-INFO +1083 -0
- agentreplay-0.4.2/README.md +984 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/pyproject.toml +16 -11
- agentreplay-0.4.2/src/agentreplay/__init__.py +223 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/context.py +133 -0
- agentreplay-0.4.2/src/agentreplay/decorators.py +541 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/install_pth.py +6 -2
- agentreplay-0.4.2/src/agentreplay/privacy.py +452 -0
- agentreplay-0.4.2/src/agentreplay/sdk.py +578 -0
- agentreplay-0.4.2/src/agentreplay/wrappers.py +522 -0
- agentreplay-0.4.2/src/agentreplay.egg-info/PKG-INFO +1083 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay.egg-info/SOURCES.txt +4 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay.egg-info/requires.txt +10 -5
- agentreplay-0.1.2/PKG-INFO +0 -285
- agentreplay-0.1.2/README.md +0 -191
- agentreplay-0.1.2/src/agentreplay/__init__.py +0 -81
- agentreplay-0.1.2/src/agentreplay.egg-info/PKG-INFO +0 -285
- {agentreplay-0.1.2 → agentreplay-0.4.2}/LICENSE +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/setup.cfg +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/setup.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/auto_instrument/__init__.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/auto_instrument/openai.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/batching.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/bootstrap.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/circuit_breaker.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/client.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/config.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/env_config.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/env_init.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/exceptions.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/genai.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/genai_conventions.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/langchain_tracer.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/models.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/otel_bridge.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/patch.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/propagation.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/py.typed +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/retry.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/sampling.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/session.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/sitecustomize.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/span.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay/unified.py +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay.egg-info/dependency_links.txt +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay.egg-info/entry_points.txt +0 -0
- {agentreplay-0.1.2 → agentreplay-0.4.2}/src/agentreplay.egg-info/top_level.txt +0 -0
@@ -0,0 +1,1083 @@

Metadata-Version: 2.4
Name: agentreplay
Version: 0.4.2
Summary: Agentreplay is a high-performance observability and tracing platform for LLM agents and AI apps, combining semantic search, specialized evals (RAGAS, G-Eval, toxicity), and Git-like versioning of prompts and responses.
Author-email: Sushanth <sushanth@agentreplay.dev>
License: Apache-2.0
Project-URL: Homepage, https://agentreplay.dev
Project-URL: Documentation, https://agentreplay.dev
Project-URL: Repository, https://github.com/agentreplay/agentreplay
Project-URL: Issues, https://github.com/agentreplay/agentreplay/issues
Project-URL: Changelog, https://github.com/agentreplay/agentreplay/blob/main/CHANGELOG.md
Keywords: llm,agents,tracing,observability,ai,opentelemetry,ragas,evaluation,prompts,semantic-search,langchain,openai
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Monitoring
Classifier: Typing :: Typed
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pydantic>=2.0.0
Requires-Dist: httpx>=0.24.0
Provides-Extra: otel
Requires-Dist: opentelemetry-api>=1.20.0; extra == "otel"
Requires-Dist: opentelemetry-sdk>=1.20.0; extra == "otel"
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc>=1.20.0; extra == "otel"
Requires-Dist: opentelemetry-instrumentation>=0.41b0; extra == "otel"
Requires-Dist: opentelemetry-instrumentation-openai>=0.48.0; extra == "otel"
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
Requires-Dist: black>=23.0.0; extra == "dev"
Requires-Dist: mypy>=1.0.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Requires-Dist: bandit>=1.7.0; extra == "dev"
Requires-Dist: types-requests>=2.31.0; extra == "dev"
Provides-Extra: langchain
Requires-Dist: langchain>=0.3.0; extra == "langchain"
Requires-Dist: langchain-openai>=0.2.0; extra == "langchain"
Provides-Extra: langgraph
Requires-Dist: langgraph>=0.2.0; extra == "langgraph"
Requires-Dist: langchain>=0.3.0; extra == "langgraph"
Provides-Extra: llamaindex
Requires-Dist: llama-index>=0.11.0; extra == "llamaindex"
Requires-Dist: llama-index-core>=0.11.0; extra == "llamaindex"
Provides-Extra: llamaindex-workflows
Requires-Dist: llama-index>=0.11.0; extra == "llamaindex-workflows"
Requires-Dist: llama-index-workflows>=0.2.0; extra == "llamaindex-workflows"
Provides-Extra: openai-agents
Requires-Dist: openai-agents>=0.5.0; extra == "openai-agents"
Provides-Extra: autogen
Requires-Dist: autogen-agentchat>=0.4.0; extra == "autogen"
Requires-Dist: autogen-core>=0.4.0; extra == "autogen"
Provides-Extra: semantic-kernel
Requires-Dist: semantic-kernel>=1.0.0; extra == "semantic-kernel"
Provides-Extra: crewai
Requires-Dist: crewai>=0.1.0; extra == "crewai"
Provides-Extra: smolagents
Requires-Dist: smolagents>=1.0.0; extra == "smolagents"
Provides-Extra: pydantic-ai
Requires-Dist: pydantic-ai>=1.0.0; extra == "pydantic-ai"
Provides-Extra: strands
Requires-Dist: strands-sdk>=1.0.0; extra == "strands"
Provides-Extra: google-adk
Requires-Dist: google-adk>=1.0.0; extra == "google-adk"
Provides-Extra: all-frameworks
Requires-Dist: langchain>=0.3.0; extra == "all-frameworks"
Requires-Dist: langchain-openai>=0.2.0; extra == "all-frameworks"
Requires-Dist: langgraph>=0.2.0; extra == "all-frameworks"
Requires-Dist: llama-index>=0.11.0; extra == "all-frameworks"
Requires-Dist: openai-agents>=0.5.0; extra == "all-frameworks"
Requires-Dist: autogen-agentchat>=0.4.0; extra == "all-frameworks"
Requires-Dist: semantic-kernel>=1.0.0; extra == "all-frameworks"
Requires-Dist: crewai>=0.1.0; extra == "all-frameworks"
Requires-Dist: smolagents>=1.0.0; extra == "all-frameworks"
Requires-Dist: pydantic-ai>=1.0.0; extra == "all-frameworks"
Provides-Extra: all
Requires-Dist: langchain>=0.3.0; extra == "all"
Requires-Dist: langchain-openai>=0.2.0; extra == "all"
Requires-Dist: langgraph>=0.2.0; extra == "all"
Requires-Dist: llama-index>=0.11.0; extra == "all"
Requires-Dist: openai-agents>=0.5.0; extra == "all"
Requires-Dist: autogen-agentchat>=0.4.0; extra == "all"
Requires-Dist: semantic-kernel>=1.0.0; extra == "all"
Requires-Dist: crewai>=0.1.0; extra == "all"
Requires-Dist: smolagents>=1.0.0; extra == "all"
Requires-Dist: pydantic-ai>=1.0.0; extra == "all"
Requires-Dist: pytest>=7.0.0; extra == "all"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "all"
Dynamic: license-file
# Agentreplay Python SDK

[PyPI](https://badge.fury.io/py/agentreplay)
[Python](https://www.python.org/downloads/)
[License: Apache 2.0](https://opensource.org/licenses/Apache-2.0)
[CI](https://github.com/agentreplay/agentreplay/actions/workflows/ci-python.yaml)

**The observability platform for LLM agents and AI applications.** Trace every LLM call, tool invocation, and agent step with minimal code changes.

---

## ✨ Features

| Feature | Description |
|---------|-------------|
| 🚀 **Zero-Config Setup** | Works out of the box with environment variables |
| 🎯 **One-Liner Instrumentation** | Wrap OpenAI/Anthropic clients in one line |
| 🔧 **Decorator-Based Tracing** | `@traceable` for any function |
| 🔄 **Async Native** | Full support for async/await patterns |
| 🔒 **Privacy First** | Built-in PII redaction and scrubbing |
| 📊 **Token Tracking** | Automatic token usage capture |
| 🌐 **Framework Agnostic** | Works with LangChain, LlamaIndex, CrewAI, etc. |
| ⚡ **Batched Transport** | Efficient background sending with retry |

---

## 📦 Installation

```bash
# Basic installation (minimal dependencies)
pip install agentreplay

# With OpenTelemetry support
pip install agentreplay[otel]

# With LangChain integration
pip install agentreplay[langchain]

# With LangGraph integration
pip install agentreplay[langgraph]

# With LlamaIndex integration
pip install agentreplay[llamaindex]

# Full installation (all integrations)
pip install agentreplay[otel,langchain,langgraph,llamaindex]
```

---

## 🚀 Quick Start

### 1. Set Environment Variables

```bash
export AGENTREPLAY_API_KEY="your-api-key"
export AGENTREPLAY_PROJECT_ID="my-project"
# Optional
export AGENTREPLAY_BASE_URL="https://api.agentreplay.io"
```

### 2. Initialize and Trace

```python
import agentreplay

# Initialize (reads from env vars automatically)
agentreplay.init()

# Trace any function with a simple decorator
@agentreplay.traceable
def my_ai_function(query: str) -> str:
    # Your AI logic here
    return f"Response to: {query}"

# Call your function - it's automatically traced!
result = my_ai_function("What is the capital of France?")

# Ensure all traces are sent before exit
agentreplay.flush()
```

That's it! Your function calls are now being traced and sent to Agentreplay.

---
## 🔧 Core API Reference

### Initialization

```python
import agentreplay

# Option 1: Environment variables (recommended for production)
agentreplay.init()

# Option 2: Explicit configuration
agentreplay.init(
    api_key="your-api-key",
    project_id="my-project",
    base_url="https://api.agentreplay.io",

    # Optional settings
    tenant_id="default",     # Multi-tenant identifier
    agent_id="default",      # Default agent ID
    enabled=True,            # Set False to disable in tests
    capture_input=True,      # Capture function inputs
    capture_output=True,     # Capture function outputs
    batch_size=100,          # Batch size before sending
    flush_interval=5.0,      # Auto-flush interval in seconds
    debug=False,             # Enable debug logging
)

# Check if SDK is initialized
from agentreplay.sdk import is_initialized
if is_initialized():
    print("SDK is ready!")

# Get current configuration
config = agentreplay.sdk.get_config()
print(f"Project: {config.project_id}")
```
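The `batch_size` and `flush_interval` options configure a classic buffered exporter: spans accumulate in memory and are shipped in batches rather than one network call per span. The following is a conceptual stdlib sketch of that pattern only — it is not agentreplay's actual transport, which runs in the background with retry:

```python
import time

class BatchExporter:
    """Buffer spans; flush when batch_size is reached or flush_interval elapses.

    Conceptual illustration of batched transport, not the SDK's internals.
    """

    def __init__(self, send, batch_size=100, flush_interval=5.0):
        self.send = send                  # callable that ships a list of spans
        self.batch_size = batch_size
        self.flush_interval = flush_interval
        self._buffer = []
        self._last_flush = time.monotonic()

    def enqueue(self, span):
        self._buffer.append(span)
        interval_elapsed = time.monotonic() - self._last_flush >= self.flush_interval
        if len(self._buffer) >= self.batch_size or interval_elapsed:
            self.flush()

    def flush(self):
        if self._buffer:
            self.send(self._buffer)       # one network call per batch, not per span
            self._buffer = []
        self._last_flush = time.monotonic()

sent = []
exporter = BatchExporter(send=sent.append, batch_size=3, flush_interval=60.0)
for i in range(7):
    exporter.enqueue({"span": i})
exporter.flush()  # analogous to agentreplay.flush(): drain whatever is buffered
assert [len(batch) for batch in sent] == [3, 3, 1]
```

A larger `batch_size` trades a little delivery latency for far fewer requests, which is why an explicit `flush()` before process exit matters: the last partial batch is otherwise still sitting in the buffer.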

---

## 🎯 The `@traceable` Decorator

The primary and most Pythonic way to instrument your code:

### Basic Usage

```python
from agentreplay import traceable

# Just add the decorator - that's it!
@traceable
def process_query(query: str) -> str:
    return call_llm(query)

# Works with any function signature
@traceable
def complex_function(data: dict, *, optional_param: str = "default") -> list:
    return process(data, optional_param)
```

### With Options

```python
from agentreplay import traceable, SpanKind

# Custom span name and kind
@traceable(name="openai_chat", kind=SpanKind.LLM)
def call_openai(messages: list) -> str:
    return openai_client.chat.completions.create(
        model="gpt-4",
        messages=messages
    )

# Disable input capture for sensitive data
@traceable(capture_input=False)
def authenticate(password: str) -> bool:
    return verify_password(password)

# Disable output capture
@traceable(capture_output=False)
def get_secret() -> str:
    return fetch_secret_from_vault()

# Add static metadata
@traceable(metadata={"version": "2.0", "model": "gpt-4", "team": "ml"})
def enhanced_query(query: str) -> str:
    return process(query)
```

### Async Functions

Full async/await support - no changes needed:

```python
import asyncio
from agentreplay import traceable, SpanKind

@traceable(kind=SpanKind.LLM)
async def async_llm_call(prompt: str) -> str:
    response = await openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

@traceable
async def process_batch(queries: list[str]) -> list[str]:
    # All concurrent calls are traced with proper parent-child relationships
    tasks = [async_llm_call(q) for q in queries]
    return await asyncio.gather(*tasks)

# Run
results = asyncio.run(process_batch(["query1", "query2", "query3"]))
```

---

## 📐 Context Manager: `trace()`

For more control over span attributes and timing:

```python
import time

from agentreplay import trace, SpanKind

def complex_operation(query: str) -> dict:
    with trace("process_query", kind=SpanKind.CHAIN) as span:
        # Set input data
        span.set_input({"query": query, "timestamp": time.time()})

        # Nested span for document retrieval
        with trace("retrieve_documents", kind=SpanKind.RETRIEVER) as retriever_span:
            docs = vector_db.search(query, top_k=5)
            retriever_span.set_output({"document_count": len(docs)})
            retriever_span.set_attribute("vector_db", "pinecone")

        # Nested span for LLM generation
        with trace("generate_response", kind=SpanKind.LLM) as llm_span:
            llm_span.set_model("gpt-4", provider="openai")
            response = generate_response(query, docs)
            llm_span.set_token_usage(
                prompt_tokens=150,
                completion_tokens=200,
                total_tokens=350
            )
            llm_span.set_output({"response_length": len(response)})

        # Add events for debugging
        span.add_event("processing_complete", {"doc_count": len(docs)})

        # Set final output
        span.set_output({"response": response, "sources": len(docs)})

        return {"response": response, "sources": docs}
```

### Manual Span Control

For cases where you need explicit control:

```python
from agentreplay import start_span, SpanKind

def long_running_operation():
    span = start_span("background_job", kind=SpanKind.TOOL)
    span.set_input({"job_type": "data_sync"})

    try:
        # Long running work...
        for i in range(100):
            process_item(i)
            if i % 10 == 0:
                span.add_event("progress", {"completed": i})

        span.set_output({"items_processed": 100})

    except Exception as e:
        span.set_error(e)
        raise

    finally:
        span.end()  # Always call end()
```

---
## 🔌 LLM Client Wrappers

### OpenAI (Recommended)

One line to instrument all OpenAI calls:

```python
from openai import OpenAI
from agentreplay import init, wrap_openai, flush

init()

# Wrap the client - all calls are now traced automatically!
client = wrap_openai(OpenAI())

# Use normally - tracing happens in the background
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)

# Embeddings are traced too
embedding = client.embeddings.create(
    model="text-embedding-ada-002",
    input="Hello world"
)

flush()
```

### Async OpenAI

```python
import asyncio

from openai import AsyncOpenAI
from agentreplay import init, wrap_openai

init()

# Works with async client too
async_client = wrap_openai(AsyncOpenAI())

async def main():
    response = await async_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    return response.choices[0].message.content

print(asyncio.run(main()))
```

### Anthropic

```python
from anthropic import Anthropic
from agentreplay import init, wrap_anthropic, flush

init()

# Wrap the Anthropic client
client = wrap_anthropic(Anthropic())

# Use normally
message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain the theory of relativity."}
    ]
)

print(message.content[0].text)
flush()
```

### Disable Content Capture

For privacy-sensitive applications:

```python
# Don't capture message content, only metadata
client = wrap_openai(OpenAI(), capture_content=False)

# Traces will still include:
# - Model name
# - Token counts
# - Latency
# - Error information
# But NOT the actual messages or responses
```

---
## 🏷️ Context Management

### Global Context

Set context that applies to ALL subsequent traces:

```python
from agentreplay import set_context, get_global_context, clear_context

# Set user context (persists until cleared)
set_context(
    user_id="user-123",
    session_id="session-456",
    agent_id="support-bot",
)

# Add more context later
set_context(
    environment="production",
    version="1.2.0",
    region="us-west-2",
)

# Get current global context
context = get_global_context()
print(context)  # {'user_id': 'user-123', 'session_id': 'session-456', ...}

# Clear all context
clear_context()
```

### Request-Scoped Context

For web applications with per-request context:

```python
from agentreplay import with_context

async def handle_api_request(request):
    # Context only applies within this block
    with with_context(
        user_id=request.user_id,
        request_id=request.headers.get("X-Request-ID"),
        ip_address=request.client_ip,
    ):
        # All traces here include this context
        result = await process_request(request)

    # Context automatically cleared after block
    return result

# Async version works too
async def async_handler(request):
    async with with_context(user_id=request.user_id):
        return await async_process(request)
```

### Multi-Agent Context

For multi-agent systems like CrewAI or AutoGen:

```python
from agentreplay import AgentContext

def run_multi_agent_workflow():
    # Each agent gets its own context
    with AgentContext(
        agent_id="researcher",
        session_id="workflow-123",
        workflow_id="content-creation",
    ):
        # All LLM calls here are tagged with agent_id="researcher"
        research_results = researcher_agent.run()

    with AgentContext(
        agent_id="writer",
        session_id="workflow-123",  # Same session
        workflow_id="content-creation",
    ):
        # These calls are tagged with agent_id="writer"
        article = writer_agent.run(research_results)

    with AgentContext(
        agent_id="editor",
        session_id="workflow-123",
        workflow_id="content-creation",
    ):
        final_article = editor_agent.run(article)

    return final_article
```

---
## 🔒 Privacy & Data Redaction

### Configure Privacy Settings

```python
from agentreplay import configure_privacy

configure_privacy(
    # Enable built-in patterns for common PII
    use_builtin_patterns=True,  # Emails, credit cards, SSNs, phones, API keys

    # Add custom regex patterns
    redact_patterns=[
        r"secret-\w+",        # Custom secret format
        r"internal-id-\d+",   # Internal IDs
        r"password:\s*\S+",   # Password fields
    ],

    # Completely scrub these JSON paths
    scrub_paths=[
        "input.password",
        "input.credentials.api_key",
        "output.user.ssn",
        "metadata.internal_token",
    ],

    # Hash PII instead of replacing with [REDACTED]
    # Allows tracking unique values without exposing data
    hash_pii=True,
    hash_salt="your-secret-salt-here",

    # Custom replacement text
    redacted_text="[SENSITIVE]",
)
```
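Salted hashing is what makes `hash_pii` useful for analytics: the same input always maps to the same tag, so you can count unique users without ever storing the raw value, and the salt prevents anyone from precomputing hashes of known emails. A stdlib sketch of the idea — `pseudonymize` is a hypothetical helper, and the SDK's exact algorithm and tag format may differ:

```python
import hashlib

def pseudonymize(value: str, salt: str) -> str:
    """Deterministically map a PII value to a short tag (illustrative only)."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:8]
    return f"[HASH:{digest}]"

a = pseudonymize("user@example.com", "my-secret-salt")
b = pseudonymize("user@example.com", "my-secret-salt")
assert a == b                                            # same input -> same tag
assert a != pseudonymize("other@example.com", "my-secret-salt")  # distinct inputs differ
```

Keep the salt secret and stable: changing it breaks continuity of the pseudonyms, and leaking it allows dictionary attacks against low-entropy values like phone numbers.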

### Built-in Redaction Patterns

The SDK includes patterns for:

| Type | Example | Redacted As |
|------|---------|-------------|
| Email | user@example.com | [REDACTED] |
| Credit Card | 4111-1111-1111-1111 | [REDACTED] |
| SSN | 123-45-6789 | [REDACTED] |
| Phone (US) | +1-555-123-4567 | [REDACTED] |
| Phone (Intl) | +44-20-1234-5678 | [REDACTED] |
| API Key | sk-proj-abc123... | [REDACTED] |
| Bearer Token | Bearer eyJ... | [REDACTED] |
| JWT | eyJhbG... | [REDACTED] |
| IP Address | 192.168.1.1 | [REDACTED] |

### Manual Redaction

```python
from agentreplay import redact_payload, redact_string, hash_pii

# Redact an entire payload
data = {
    "user": {
        "email": "john@example.com",
        "phone": "+1-555-123-4567",
    },
    "message": "My credit card is 4111-1111-1111-1111",
    "api_key": "sk-proj-abcdefghijk",
}

safe_data = redact_payload(data)
# Result:
# {
#     "user": {
#         "email": "[REDACTED]",
#         "phone": "[REDACTED]",
#     },
#     "message": "My credit card is [REDACTED]",
#     "api_key": "[REDACTED]",
# }

# Redact a single string
safe_message = redact_string("Contact me at user@example.com")
# "Contact me at [REDACTED]"

# Hash for consistent anonymization (same input = same hash)
user_hash = hash_pii("user@example.com")
# "[HASH:a1b2c3d4]"

# Useful for analytics without exposing PII
print(f"User {user_hash} performed action")
```

### Temporary Privacy Context

```python
from agentreplay.privacy import privacy_context

# Add extra redaction rules for a specific block
with privacy_context(
    redact_patterns=[r"internal-token-\w+"],
    scrub_paths=["metadata.debug_info"],
):
    # These rules only apply within this block
    result = process_sensitive_data(data)

# Original rules restored after block
```

---
## 📊 Span Kinds

Use semantic span kinds for better visualization and filtering:

```python
from agentreplay import SpanKind

# Available span kinds
SpanKind.CHAIN      # Orchestration, workflows, pipelines
SpanKind.LLM        # LLM API calls (OpenAI, Anthropic, etc.)
SpanKind.TOOL       # Tool/function calls, actions
SpanKind.RETRIEVER  # Vector DB search, document retrieval
SpanKind.EMBEDDING  # Embedding generation
SpanKind.GUARDRAIL  # Safety checks, content filtering
SpanKind.CACHE      # Cache operations
SpanKind.HTTP       # HTTP requests
SpanKind.DB         # Database queries
```

Example usage:

```python
from agentreplay import traceable, SpanKind

@traceable(kind=SpanKind.RETRIEVER)
def search_documents(query: str) -> list:
    return vector_db.similarity_search(query, k=5)

@traceable(kind=SpanKind.LLM)
def generate_answer(query: str, docs: list) -> str:
    return llm.generate(query, context=docs)

@traceable(kind=SpanKind.CHAIN)
def rag_pipeline(query: str) -> str:
    docs = search_documents(query)
    return generate_answer(query, docs)
```

---
## ⚙️ Lifecycle Management
|
|
710
|
+
|
|
711
|
+
### Flushing Traces
|
|
712
|
+
|
|
713
|
+
Always ensure traces are sent before your application exits:
|
|
714
|
+
|
|
715
|
+
```python
|
|
716
|
+
import agentreplay
|
|
717
|
+
|
|
718
|
+
agentreplay.init()
|
|
719
|
+
|
|
720
|
+
# Your application code...
|
|
721
|
+
|
|
722
|
+
# Option 1: Manual flush with timeout
|
|
723
|
+
agentreplay.flush(timeout=10.0) # Wait up to 10 seconds
|
|
724
|
+
|
|
725
|
+
# Option 2: Full graceful shutdown
|
|
726
|
+
agentreplay.shutdown(timeout=30.0) # Flush and cleanup
|
|
727
|
+
|
|
728
|
+
# Option 3: Auto-registered (init() registers atexit handler automatically)
|
|
729
|
+
# Traces are flushed on normal program exit
|
|
730
|
+
```
|
|
731
|
+
|
|
732
|
+
### Serverless / AWS Lambda

**Critical**: Always flush explicitly before the function returns!

```python
import json

import agentreplay

agentreplay.init()

@agentreplay.traceable
def process_event(event):
    # Your logic here
    return {"processed": True}

def lambda_handler(event, context):
    try:
        result = process_event(event)
        return {
            "statusCode": 200,
            "body": json.dumps(result),
        }
    finally:
        # CRITICAL: Flush before Lambda freezes
        agentreplay.flush(timeout=5.0)
```

### FastAPI / Starlette

```python
from fastapi import FastAPI
from contextlib import asynccontextmanager
import agentreplay

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup
    agentreplay.init()
    yield
    # Shutdown
    agentreplay.shutdown(timeout=10.0)

app = FastAPI(lifespan=lifespan)

@app.post("/chat")
async def chat(request: ChatRequest):
    # Traces are sent automatically in background
    return await process_chat(request)
```

### Diagnostics

```python
import agentreplay

# Get SDK statistics
stats = agentreplay.get_stats()
print(f"Spans sent: {stats.get('spans_sent', 0)}")
print(f"Spans pending: {stats.get('spans_pending', 0)}")
print(f"Errors: {stats.get('errors', 0)}")
print(f"Batches sent: {stats.get('batches_sent', 0)}")

# Health check - verify backend connectivity
if agentreplay.ping():
    print("✅ Backend is reachable")
else:
    print("❌ Cannot reach backend")
```

---

## 🔗 Framework Integrations

### LangChain

```python
from langchain_openai import ChatOpenAI
from langchain.schema import HumanMessage
import agentreplay

agentreplay.init()

@agentreplay.traceable(name="langchain_qa", kind=agentreplay.SpanKind.CHAIN)
def answer_question(question: str) -> str:
    llm = ChatOpenAI(model="gpt-4", temperature=0)
    response = llm.invoke([HumanMessage(content=question)])
    return response.content

result = answer_question("What is machine learning?")
agentreplay.flush()
```

### LangGraph

```python
from langgraph.graph import StateGraph, END
import agentreplay

agentreplay.init()

@agentreplay.traceable(name="agent_node", kind=agentreplay.SpanKind.LLM)
def agent_node(state):
    # Agent logic
    response = llm.invoke(state["messages"])
    return {"messages": state["messages"] + [response]}

@agentreplay.traceable(name="tool_node", kind=agentreplay.SpanKind.TOOL)
def tool_node(state):
    # Tool execution
    result = execute_tool(state["tool_call"])
    return {"messages": state["messages"] + [result]}

# Build graph with traced nodes
workflow = StateGraph(State)
workflow.add_node("agent", agent_node)
workflow.add_node("tools", tool_node)
# ... rest of graph definition
```

### CrewAI

```python
from crewai import Agent, Task, Crew
from openai import OpenAI
import agentreplay

agentreplay.init()

# Wrap the LLM client
wrapped_llm = agentreplay.wrap_openai(OpenAI())

# Track each agent with context
with agentreplay.AgentContext(agent_id="researcher", workflow_id="article-creation"):
    researcher = Agent(
        role="Senior Researcher",
        goal="Find comprehensive information",
        llm=wrapped_llm,
    )

with agentreplay.AgentContext(agent_id="writer", workflow_id="article-creation"):
    writer = Agent(
        role="Content Writer",
        goal="Write engaging articles",
        llm=wrapped_llm,
    )

# Run the crew
crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])
result = crew.kickoff()

agentreplay.flush()
```

### LlamaIndex

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
import agentreplay

agentreplay.init()

@agentreplay.traceable(name="index_documents", kind=agentreplay.SpanKind.EMBEDDING)
def build_index(directory: str):
    documents = SimpleDirectoryReader(directory).load_data()
    return VectorStoreIndex.from_documents(documents)

@agentreplay.traceable(name="query_index", kind=agentreplay.SpanKind.RETRIEVER)
def query(index, question: str):
    query_engine = index.as_query_engine()
    return query_engine.query(question)

index = build_index("./documents")
response = query(index, "What is the main topic?")
agentreplay.flush()
```

---

## 🌐 Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `AGENTREPLAY_API_KEY` | API key for authentication | **Required** |
| `AGENTREPLAY_PROJECT_ID` | Project identifier | **Required** |
| `AGENTREPLAY_BASE_URL` | API base URL | `https://api.agentreplay.io` |
| `AGENTREPLAY_TENANT_ID` | Tenant identifier | `default` |
| `AGENTREPLAY_AGENT_ID` | Default agent ID | `default` |
| `AGENTREPLAY_ENABLED` | Enable/disable tracing | `true` |
| `AGENTREPLAY_DEBUG` | Enable debug logging | `false` |
| `AGENTREPLAY_BATCH_SIZE` | Spans per batch | `100` |
| `AGENTREPLAY_FLUSH_INTERVAL` | Auto-flush interval (seconds) | `5.0` |
| `AGENTREPLAY_CAPTURE_INPUT` | Capture function inputs | `true` |
| `AGENTREPLAY_CAPTURE_OUTPUT` | Capture function outputs | `true` |

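For reference, a minimal sketch of how settings like these are typically resolved from the environment. The helper names are illustrative, not the SDK's internals:

```python
import os

def _env_bool(name, default):
    """Parse a truthy env var ("1", "true", "yes"), falling back to default."""
    return os.environ.get(name, str(default)).strip().lower() in ("1", "true", "yes")

def load_config():
    # Explicit arguments to init() would normally take precedence over these.
    return {
        "base_url": os.environ.get("AGENTREPLAY_BASE_URL", "https://api.agentreplay.io"),
        "enabled": _env_bool("AGENTREPLAY_ENABLED", True),
        "batch_size": int(os.environ.get("AGENTREPLAY_BATCH_SIZE", "100")),
        "flush_interval": float(os.environ.get("AGENTREPLAY_FLUSH_INTERVAL", "5.0")),
    }

os.environ["AGENTREPLAY_ENABLED"] = "false"
cfg = load_config()
print(cfg["enabled"], cfg["batch_size"])
```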

---

## 🧪 Testing

### Disable Tracing in Tests

```python
import agentreplay
import pytest

@pytest.fixture(autouse=True)
def disable_tracing():
    """Disable tracing for all tests."""
    agentreplay.init(enabled=False)
    yield
    agentreplay.reset()

def test_my_function():
    # Tracing is disabled, no network calls
    result = my_traced_function("test")
    assert result == expected
```

Or use an environment variable:

```bash
AGENTREPLAY_ENABLED=false pytest
```

### Mock the SDK

```python
from unittest.mock import patch

def test_with_mock():
    with patch('agentreplay.flush'):
        # flush() won't actually send data
        result = my_function()
```

---

## 📚 Complete API Reference

### Top-Level Functions

| Function | Description |
|----------|-------------|
| `init(**config)` | Initialize the SDK with configuration |
| `flush(timeout=None)` | Send all pending traces |
| `shutdown(timeout=None)` | Graceful shutdown with flush |
| `reset()` | Reset SDK state completely |
| `get_stats()` | Get diagnostic statistics |
| `ping()` | Check backend connectivity |

### Decorators & Tracing

| Function | Description |
|----------|-------------|
| `@traceable` | Decorator for function tracing |
| `@observe` | Alias for `@traceable` (Langfuse-style) |
| `trace(name, **opts)` | Context manager for creating spans |
| `start_span(name, **opts)` | Create a manual span |
| `get_current_span()` | Get the currently active span |

### Client Wrappers

| Function | Description |
|----------|-------------|
| `wrap_openai(client, **opts)` | Wrap OpenAI client |
| `wrap_anthropic(client, **opts)` | Wrap Anthropic client |
| `wrap_method(obj, method, **opts)` | Wrap any method |

### Context Management

| Function | Description |
|----------|-------------|
| `set_context(**ctx)` | Set global context |
| `get_global_context()` | Get current global context |
| `clear_context()` | Clear all global context |
| `with_context(**ctx)` | Scoped context manager |
| `AgentContext(...)` | Class-based agent context |

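Scoped context managers like `with_context()` are typically built on `contextvars`, so overrides apply only inside the `with` block and the outer context is restored on exit. A minimal illustrative sketch (not the actual implementation):

```python
import contextvars
from contextlib import contextmanager

# One context variable holds the merged context for the current scope.
_ctx = contextvars.ContextVar("agent_context", default={})

def get_global_context():
    return dict(_ctx.get())

@contextmanager
def with_context(**overrides):
    # Layer the overrides on top of whatever context is already active.
    merged = {**_ctx.get(), **overrides}
    token = _ctx.set(merged)
    try:
        yield merged
    finally:
        _ctx.reset(token)  # restore the outer context on exit

with with_context(agent_id="researcher"):
    inner = get_global_context()
outer = get_global_context()
print(inner, outer)
```

Because `contextvars` is async-aware, this pattern keeps contexts isolated across concurrent tasks as well.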
### Privacy

| Function | Description |
|----------|-------------|
| `configure_privacy(**opts)` | Configure redaction settings |
| `redact_payload(data)` | Redact sensitive data from dict |
| `redact_string(text)` | Redact patterns from string |
| `hash_pii(value, salt=None)` | Hash PII for anonymization |
| `add_pattern(regex)` | Add redaction pattern at runtime |
| `add_scrub_path(path)` | Add scrub path at runtime |

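The ideas behind `redact_string()` and `hash_pii()` can be sketched with the standard `re` and `hashlib` modules. The pattern and output format below are examples, not the SDK's defaults:

```python
import hashlib
import re

# Example pattern: redact email addresses.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_string(text):
    return EMAIL.sub("[REDACTED]", text)

def hash_pii(value, salt=""):
    # Salted hash lets you correlate a user across traces
    # without storing the raw value.
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

msg = "Contact alice@example.com for access"
print(redact_string(msg))
print(hash_pii("alice@example.com", salt="s1"))
```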
### ActiveSpan Methods

| Method | Description |
|--------|-------------|
| `set_input(data)` | Set span input data |
| `set_output(data)` | Set span output data |
| `set_attribute(key, value)` | Set a single attribute |
| `set_attributes(dict)` | Set multiple attributes |
| `add_event(name, attrs)` | Add a timestamped event |
| `set_error(exception)` | Record an error |
| `set_token_usage(...)` | Set LLM token counts |
| `set_model(model, provider)` | Set model information |
| `end()` | End the span |

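To make the method list concrete, here is a minimal toy span object mirroring a few of these methods (illustrative only; the real `ActiveSpan` also carries trace/span IDs and exports data to the backend):

```python
import time

class Span:
    """Toy span: attributes, timestamped events, and an ended flag."""

    def __init__(self, name):
        self.name = name
        self.attributes = {}
        self.events = []
        self.ended = False

    def set_attribute(self, key, value):
        self.attributes[key] = value

    def set_token_usage(self, prompt_tokens, completion_tokens):
        self.attributes["tokens.prompt"] = prompt_tokens
        self.attributes["tokens.completion"] = completion_tokens
        self.attributes["tokens.total"] = prompt_tokens + completion_tokens

    def add_event(self, name, attrs=None):
        self.events.append({"name": name, "time": time.time(), "attrs": attrs or {}})

    def end(self):
        self.ended = True

span = Span("llm_call")
span.set_attribute("model", "gpt-4")
span.set_token_usage(120, 45)
span.add_event("first_token")
span.end()
print(span.attributes["tokens.total"], span.ended)
```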

---

## 🤝 Contributing

We welcome contributions! See [CONTRIBUTING.md](../../CONTRIBUTING.md) for guidelines.

```bash
# Clone the repository
git clone https://github.com/agentreplay/agentreplay.git
cd agentreplay/sdks/python

# Create virtual environment
python -m venv .venv
source .venv/bin/activate  # or `.venv\Scripts\activate` on Windows

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest tests/ -v

# Run linter
ruff check src/

# Run type checker
mypy src/agentreplay

# Run formatter
ruff format src/
```

---

## 📄 License

Apache 2.0 - see [LICENSE](../../LICENSE) for details.

---

## 🔗 Links

- 📖 [Documentation](https://docs.agentreplay.io)
- 💻 [GitHub Repository](https://github.com/agentreplay/agentreplay)
- 📦 [PyPI Package](https://pypi.org/project/agentreplay/)
- 💬 [Discord Community](https://discord.gg/agentreplay)
- 🐦 [Twitter](https://twitter.com/agentreplay)

---

<p align="center">
Made with ❤️ by the Agentreplay team
</p>