agenttrace-ai 0.1.1__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 AgentTrace Contributors
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
@@ -0,0 +1 @@
+ recursive-include agenttrace/static *
@@ -0,0 +1,274 @@
+ Metadata-Version: 2.4
+ Name: agenttrace-ai
+ Version: 0.1.1
+ Summary: Zero-config local-first visual debugging and auto-evaluation for LLM agents.
+ Author: AgentTrace Contributors
+ License: MIT
+ Project-URL: Homepage, https://github.com/CURSED-ME/AgentTrace
+ Project-URL: Issues, https://github.com/CURSED-ME/AgentTrace/issues
+ Keywords: llm,agent,debugging,observability,opentelemetry,tracing
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Classifier: Topic :: Software Development :: Debuggers
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Requires-Python: >=3.9
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: fastapi>=0.100.0
+ Requires-Dist: uvicorn>=0.23.0
+ Requires-Dist: pydantic>=2.0.0
+ Requires-Dist: opentelemetry-api>=1.20.0
+ Requires-Dist: opentelemetry-sdk>=1.20.0
+ Provides-Extra: openai
+ Requires-Dist: openai>=1.0.0; extra == "openai"
+ Requires-Dist: opentelemetry-instrumentation-openai>=0.1.0; extra == "openai"
+ Provides-Extra: judge
+ Requires-Dist: groq>=0.9.0; extra == "judge"
+ Provides-Extra: langchain
+ Requires-Dist: langchain-core>=0.2.0; extra == "langchain"
+ Provides-Extra: all
+ Requires-Dist: openai>=1.0.0; extra == "all"
+ Requires-Dist: opentelemetry-instrumentation-openai>=0.1.0; extra == "all"
+ Requires-Dist: groq>=0.9.0; extra == "all"
+ Requires-Dist: langchain-core>=0.2.0; extra == "all"
+ Dynamic: license-file
+
+ <div align="center">
+
+ # šŸ” AgentTrace
+
+ **Zero-config visual debugging and auto-evaluation for LLM agents.**
+
+ [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
+ [![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/)
+ [![OpenTelemetry](https://img.shields.io/badge/OpenTelemetry-native-blueviolet)](https://opentelemetry.io/)
+
+ *One import. Zero config. Instant visual timeline of every LLM call, tool execution, and crash your agent makes.*
+
+ </div>
+
+ ---
+
+ ## The Problem
+
+ You build an AI agent. It calls an LLM, uses tools, chains prompts together. Then it hallucinates, loops infinitely, or silently drops context — and you have **no idea where it went wrong.**
+
+ Most observability tools require accounts, API keys, cloud dashboards, and framework-specific setup. You just want to **see what happened.**
+
+ ## The Solution
+
+ ```python
+ import agenttrace.auto  # ← That's it. One line.
+
+ # ... your existing agent code runs normally ...
+ # When it finishes, a local dashboard opens automatically at localhost:8000
+ ```
+
+ AgentTrace intercepts every LLM call, tool execution, and unhandled crash — then serves a beautiful local timeline you can replay step-by-step.
+
+ ---
+
+ ## ✨ Features
+
+ ### šŸŖ„ True Zero-Config
+ Add `import agenttrace.auto` to the top of your script. No API keys, no accounts, no cloud. Works with **OpenAI**, **Groq**, **LangChain**, and **CrewAI** out of the box.
+
+ ### 🧠 Smart Auto-Judge
+ AgentTrace doesn't just *show* you what happened — it *tells you what went wrong:*
+
+ | Evaluation | How It Works | Cost |
+ |---|---|---|
+ | šŸ” **Loop Detection** | Flags 3+ identical consecutive tool calls | Free (pure Python) |
+ | šŸ’° **Cost Anomaly** | Flags steps using >2x average tokens | Free (pure Python) |
+ | ā±ļø **Latency Regression** | Flags steps >3x slower than average | Free (pure Python) |
+ | šŸ”§ **Tool Misuse** | Detects wrong arguments or failed tool calls | LLM-powered (optional) |
+ | šŸ“ **Instruction Drift** | Detects when LLM ignores the system prompt | LLM-powered (optional) |
+
+ > LLM-powered checks require a free [Groq API key](https://console.groq.com). Install with `pip install "agenttrace-ai[judge]"`.
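The free heuristics in the table are simple enough to sketch. The record shape and function name below are illustrative, not AgentTrace's actual internals; the loop check ("3+ identical consecutive tool calls") amounts to scanning for runs of equal calls:

```python
from itertools import groupby

def flag_loops(tool_calls, threshold=3):
    """Flag runs of `threshold` or more identical consecutive tool calls.

    Each call is a (tool_name, args) tuple; "identical" means both match.
    Returns a list of (call, run_length) pairs that hit the threshold.
    """
    flagged = []
    for call, run in groupby(tool_calls):  # groupby groups *consecutive* equals
        n = len(list(run))
        if n >= threshold:
            flagged.append((call, n))
    return flagged

calls = [
    ("search", "cats"),
    ("search", "cats"),
    ("search", "cats"),
    ("summarize", "results"),
]
print(flag_loops(calls))  # [(('search', 'cats'), 3)]
```

The cost and latency checks are the same shape: compute the run's average, then flag any step beyond the 2x (tokens) or 3x (latency) multiplier.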
+
+ ### ā–¶ļø Trace Replay
+ Press **Play** and watch your agent's execution animate step-by-step — like a video recording of its thought process. Drag the scrubber to jump to any moment. Flagged steps pulse red.
+
+ ### šŸ’„ Crash Detection
+ If your agent throws an unhandled exception, AgentTrace catches it and logs the full traceback as a trace step — so you never lose debugging data.
+
+ ### šŸ”Œ Framework Support
+ | Framework | Status | Setup Required |
+ |---|---|---|
+ | OpenAI SDK | āœ… Native | `pip install "agenttrace-ai[openai]"` |
+ | Groq SDK | āœ… Native | `pip install "agenttrace-ai[openai]"` |
+ | LangChain | āœ… Adapter | None (auto-detected) |
+ | CrewAI | āœ… Adapter | None (auto-detected) |
+
+ ---
+
+ ## šŸš€ Quickstart
+
+ ### Install
+
+ ```bash
+ # Core (works with LangChain out of the box)
+ pip install agenttrace-ai
+
+ # With OpenAI/Groq support
+ pip install "agenttrace-ai[openai]"
+
+ # With everything (OpenAI + Auto-Judge + LangChain)
+ pip install "agenttrace-ai[all]"
+ ```
+
+ ### Basic Usage (OpenAI / Groq)
+
+ ```python
+ import agenttrace.auto  # ← Add this one line
+ import openai
+
+ client = openai.OpenAI()
+ response = client.chat.completions.create(
+     model="gpt-4",
+     messages=[{"role": "user", "content": "What is the capital of France?"}]
+ )
+ print(response.choices[0].message.content)
+ # Dashboard opens automatically at http://localhost:8000 when your script finishes
+ ```
+
+ ### LangChain (Zero-Config)
+
+ ```python
+ import agenttrace.auto  # ← Same one line
+ from langchain_openai import ChatOpenAI
+ from langchain_core.prompts import ChatPromptTemplate
+
+ llm = ChatOpenAI(model="gpt-4")
+ prompt = ChatPromptTemplate.from_messages([
+     ("system", "You are a helpful assistant."),
+     ("human", "{input}")
+ ])
+
+ chain = prompt | llm
+ result = chain.invoke({"input": "Explain quantum computing"})
+ # All LLM calls automatically appear in the AgentTrace dashboard
+ ```
+
+ ### Custom Tool Tracking
+
+ ```python
+ from agenttrace import track_tool, track_agent
+
+ @track_tool
+ def search_database(query: str) -> str:
+     return db.search(query)
+
+ @track_agent
+ def my_agent(task: str) -> str:
+     data = search_database(task)
+     return llm.complete(f"Answer based on: {data}")
+ ```
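As a rough mental model of what a `@track_tool`-style decorator does, here is a generic sketch. Names, the `TRACE_LOG` store, and the record format are hypothetical stand-ins, not the package's real implementation:

```python
import functools
import time

TRACE_LOG = []  # stand-in for a SQLite-backed trace store

def track_tool_sketch(fn):
    """Wrap a function and record its name, inputs, output, and duration."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            TRACE_LOG.append({
                "name": fn.__name__,
                "inputs": {"args": args, "kwargs": kwargs},
                "output": result,
                "duration_s": time.perf_counter() - start,
                "status": "ok",
            })
            return result
        except Exception as exc:
            # Failed calls are recorded too, then re-raised unchanged.
            TRACE_LOG.append({
                "name": fn.__name__,
                "inputs": {"args": args, "kwargs": kwargs},
                "output": repr(exc),
                "duration_s": time.perf_counter() - start,
                "status": "error",
            })
            raise
    return wrapper

@track_tool_sketch
def add(a, b):
    return a + b

add(2, 3)
print(TRACE_LOG[-1]["name"], TRACE_LOG[-1]["status"])  # add ok
```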
+
+ ---
+
+ ## šŸ—ļø Architecture
+
+ ```
+ Your Agent Script
+        │
+        ā–¼
+ import agenttrace.auto
+        │
+        ā”œā”€ā”€ā”€ OpenTelemetry TracerProvider
+        │        │
+        │        ā”œā”€ā”€ OpenAI Instrumentor (optional)
+        │        ā”œā”€ā”€ LangChain Callback Adapter
+        │        └── CrewAI Callback Adapter
+        │        │
+        │        ā–¼
+        │    AgentTraceExporter → SQLite (.agenttrace.db)
+        │
+        ā”œā”€ā”€ā”€ sys.excepthook → Crash capture
+        │
+        └─── atexit → FastAPI Server (localhost:8000)
+                  │
+                  ā”œā”€ā”€ /api/traces
+                  ā”œā”€ā”€ /api/trace/{id}
+                  └── React Dashboard (Vite + Tailwind)
+ ```
+
+ ### Key Design Decisions
+ - **OpenTelemetry** for instrumentation (industry standard, not fragile monkey-patching)
+ - **SQLite with WAL mode** for zero-config persistence that survives crashes
+ - **`contextvars`** for thread-safe multi-agent isolation
+ - **Pre-compiled React UI** bundled inside the Python package
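"SQLite with WAL mode" is a one-pragma setup. A minimal standalone sketch (the file name and table are illustrative, not the package's real schema):

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.gettempdir(), "agenttrace_demo.db")
if os.path.exists(db_path):
    os.remove(db_path)  # start fresh for the demo

conn = sqlite3.connect(db_path)

# WAL lets a reader (the dashboard) and a writer (the exporter) work
# concurrently, and committed writes survive a process crash.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # "wal" (note: WAL is not available for in-memory databases)

conn.execute("CREATE TABLE IF NOT EXISTS steps (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("INSERT INTO steps (payload) VALUES (?)", ("hello",))
conn.commit()
conn.close()
```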
+
+ ---
+
+ ## šŸ“ Project Structure
+
+ ```
+ agenttrace/
+ ā”œā”€ā”€ auto.py           # Zero-config entry point (import this)
+ ā”œā”€ā”€ exporter.py       # OTel SpanExporter → SQLite
+ ā”œā”€ā”€ judge.py          # Smart Auto-Judge engine (5 eval types)
+ ā”œā”€ā”€ models.py         # Pydantic data models
+ ā”œā”€ā”€ storage.py        # SQLite with WAL mode
+ ā”œā”€ā”€ server.py         # FastAPI dashboard server
+ ā”œā”€ā”€ decorators.py     # @track_tool, @track_agent
+ ā”œā”€ā”€ utils.py          # Payload truncation
+ ā”œā”€ā”€ integrations/
+ │   ā”œā”€ā”€ langchain.py  # LangChain callback adapter
+ │   └── crewai.py     # CrewAI callback adapter
+ └── static/           # Pre-compiled React dashboard
+ ```
+
+ ---
+
+ ## āš™ļø Configuration
+
+ | Environment Variable | Default | Description |
+ |---|---|---|
+ | `GROQ_API_KEY` | — | Required for LLM-powered judge evaluations |
+ | `AGENTTRACE_DB_PATH` | `.agenttrace.db` | Custom database file path |
+ | `AGENTTRACE_FULL_PAYLOAD` | `0` | Set to `1` to disable payload truncation |
+ | `AGENTTRACE_MAX_CONTENT` | `500` | Max characters before truncation |
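The two truncation knobs compose straightforwardly. A sketch of how a library might read them, using the documented defaults (the actual parsing in `utils.py` may differ):

```python
import os

def truncate_payload(text: str) -> str:
    """Truncate text per the documented env vars (defaults: truncation on, 500 chars)."""
    if os.environ.get("AGENTTRACE_FULL_PAYLOAD", "0") == "1":
        return text  # truncation disabled entirely
    limit = int(os.environ.get("AGENTTRACE_MAX_CONTENT", "500"))
    if len(text) <= limit:
        return text
    return text[:limit] + f"… [+{len(text) - limit} chars truncated]"

os.environ["AGENTTRACE_MAX_CONTENT"] = "10"
print(truncate_payload("a" * 25))  # aaaaaaaaaa… [+15 chars truncated]
```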
+
+ ---
+
+ ## šŸ¤ Contributing
+
+ We welcome contributions! Here's how to set up the dev environment:
+
+ ```bash
+ git clone https://github.com/CURSED-ME/AgentTrace.git
+ cd AgentTrace
+ pip install -e ".[all]"
+
+ # Frontend development
+ cd ui
+ npm install
+ npm run dev     # Dev server with hot reload
+ npm run build   # Compile to agenttrace/static/
+ ```
+
+ See [.env.example](.env.example) for required environment variables.
+
+ ---
+
+ ## šŸ“„ License
+
+ MIT License — see [LICENSE](LICENSE) for details.
+
+ ---
+
+ <div align="center">
+
+ **Built with ā¤ļø for the agent builder community.**
+
+ *If AgentTrace helped you debug an agent, give us a ⭐ on GitHub!*
+
+ </div>
@@ -0,0 +1,232 @@
+ <div align="center">
+
+ # šŸ” AgentTrace
+
+ **Zero-config visual debugging and auto-evaluation for LLM agents.**
+
+ [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
+ [![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/)
+ [![OpenTelemetry](https://img.shields.io/badge/OpenTelemetry-native-blueviolet)](https://opentelemetry.io/)
+
+ *One import. Zero config. Instant visual timeline of every LLM call, tool execution, and crash your agent makes.*
+
+ </div>
+
+ ---
+
+ ## The Problem
+
+ You build an AI agent. It calls an LLM, uses tools, chains prompts together. Then it hallucinates, loops infinitely, or silently drops context — and you have **no idea where it went wrong.**
+
+ Most observability tools require accounts, API keys, cloud dashboards, and framework-specific setup. You just want to **see what happened.**
+
+ ## The Solution
+
+ ```python
+ import agenttrace.auto  # ← That's it. One line.
+
+ # ... your existing agent code runs normally ...
+ # When it finishes, a local dashboard opens automatically at localhost:8000
+ ```
+
+ AgentTrace intercepts every LLM call, tool execution, and unhandled crash — then serves a beautiful local timeline you can replay step-by-step.
+
+ ---
+
+ ## ✨ Features
+
+ ### šŸŖ„ True Zero-Config
+ Add `import agenttrace.auto` to the top of your script. No API keys, no accounts, no cloud. Works with **OpenAI**, **Groq**, **LangChain**, and **CrewAI** out of the box.
+
+ ### 🧠 Smart Auto-Judge
+ AgentTrace doesn't just *show* you what happened — it *tells you what went wrong:*
+
+ | Evaluation | How It Works | Cost |
+ |---|---|---|
+ | šŸ” **Loop Detection** | Flags 3+ identical consecutive tool calls | Free (pure Python) |
+ | šŸ’° **Cost Anomaly** | Flags steps using >2x average tokens | Free (pure Python) |
+ | ā±ļø **Latency Regression** | Flags steps >3x slower than average | Free (pure Python) |
+ | šŸ”§ **Tool Misuse** | Detects wrong arguments or failed tool calls | LLM-powered (optional) |
+ | šŸ“ **Instruction Drift** | Detects when LLM ignores the system prompt | LLM-powered (optional) |
+
+ > LLM-powered checks require a free [Groq API key](https://console.groq.com). Install with `pip install "agenttrace-ai[judge]"`.
+
+ ### ā–¶ļø Trace Replay
+ Press **Play** and watch your agent's execution animate step-by-step — like a video recording of its thought process. Drag the scrubber to jump to any moment. Flagged steps pulse red.
+
+ ### šŸ’„ Crash Detection
+ If your agent throws an unhandled exception, AgentTrace catches it and logs the full traceback as a trace step — so you never lose debugging data.
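The crash capture described here is the standard `sys.excepthook` pattern. A minimal standalone sketch, where the `captured` list stands in for AgentTrace's trace database:

```python
import sys
import traceback

captured = []  # stand-in for the trace store
_original_hook = sys.excepthook

def crash_hook(exc_type, exc_value, exc_tb):
    # Record the full traceback text, then defer to the default handler.
    captured.append("".join(traceback.format_exception(exc_type, exc_value, exc_tb)))
    _original_hook(exc_type, exc_value, exc_tb)

sys.excepthook = crash_hook  # install: fires on any unhandled exception

# Demonstrate without actually crashing the process:
try:
    1 / 0
except ZeroDivisionError:
    captured.append("".join(traceback.format_exception(*sys.exc_info())))

print("ZeroDivisionError" in captured[0])  # True

sys.excepthook = _original_hook  # restore for the demo
```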
+
+ ### šŸ”Œ Framework Support
+ | Framework | Status | Setup Required |
+ |---|---|---|
+ | OpenAI SDK | āœ… Native | `pip install "agenttrace-ai[openai]"` |
+ | Groq SDK | āœ… Native | `pip install "agenttrace-ai[openai]"` |
+ | LangChain | āœ… Adapter | None (auto-detected) |
+ | CrewAI | āœ… Adapter | None (auto-detected) |
+
+ ---
+
+ ## šŸš€ Quickstart
+
+ ### Install
+
+ ```bash
+ # Core (works with LangChain out of the box)
+ pip install agenttrace-ai
+
+ # With OpenAI/Groq support
+ pip install "agenttrace-ai[openai]"
+
+ # With everything (OpenAI + Auto-Judge + LangChain)
+ pip install "agenttrace-ai[all]"
+ ```
+
+ ### Basic Usage (OpenAI / Groq)
+
+ ```python
+ import agenttrace.auto  # ← Add this one line
+ import openai
+
+ client = openai.OpenAI()
+ response = client.chat.completions.create(
+     model="gpt-4",
+     messages=[{"role": "user", "content": "What is the capital of France?"}]
+ )
+ print(response.choices[0].message.content)
+ # Dashboard opens automatically at http://localhost:8000 when your script finishes
+ ```
+
+ ### LangChain (Zero-Config)
+
+ ```python
+ import agenttrace.auto  # ← Same one line
+ from langchain_openai import ChatOpenAI
+ from langchain_core.prompts import ChatPromptTemplate
+
+ llm = ChatOpenAI(model="gpt-4")
+ prompt = ChatPromptTemplate.from_messages([
+     ("system", "You are a helpful assistant."),
+     ("human", "{input}")
+ ])
+
+ chain = prompt | llm
+ result = chain.invoke({"input": "Explain quantum computing"})
+ # All LLM calls automatically appear in the AgentTrace dashboard
+ ```
+
+ ### Custom Tool Tracking
+
+ ```python
+ from agenttrace import track_tool, track_agent
+
+ @track_tool
+ def search_database(query: str) -> str:
+     return db.search(query)
+
+ @track_agent
+ def my_agent(task: str) -> str:
+     data = search_database(task)
+     return llm.complete(f"Answer based on: {data}")
+ ```
+
+ ---
+
+ ## šŸ—ļø Architecture
+
+ ```
+ Your Agent Script
+        │
+        ā–¼
+ import agenttrace.auto
+        │
+        ā”œā”€ā”€ā”€ OpenTelemetry TracerProvider
+        │        │
+        │        ā”œā”€ā”€ OpenAI Instrumentor (optional)
+        │        ā”œā”€ā”€ LangChain Callback Adapter
+        │        └── CrewAI Callback Adapter
+        │        │
+        │        ā–¼
+        │    AgentTraceExporter → SQLite (.agenttrace.db)
+        │
+        ā”œā”€ā”€ā”€ sys.excepthook → Crash capture
+        │
+        └─── atexit → FastAPI Server (localhost:8000)
+                  │
+                  ā”œā”€ā”€ /api/traces
+                  ā”œā”€ā”€ /api/trace/{id}
+                  └── React Dashboard (Vite + Tailwind)
+ ```
+
+ ### Key Design Decisions
+ - **OpenTelemetry** for instrumentation (industry standard, not fragile monkey-patching)
+ - **SQLite with WAL mode** for zero-config persistence that survives crashes
+ - **`contextvars`** for thread-safe multi-agent isolation
+ - **Pre-compiled React UI** bundled inside the Python package
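The `contextvars` design decision means each thread (or async task) sees its own "current trace" without any locking. A minimal sketch of that isolation, with a hypothetical `current_trace` variable standing in for whatever AgentTrace tracks internally:

```python
import contextvars
import threading

# Each execution context gets its own value for the "current trace id".
current_trace = contextvars.ContextVar("current_trace", default=None)

results = {}

def run_agent(name):
    current_trace.set(name)            # visible only in this thread's context
    results[name] = current_trace.get()

threads = [threading.Thread(target=run_agent, args=(f"trace-{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)              # each thread saw only its own value
print(current_trace.get())  # None: the main thread's context was never set
```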
+
+ ---
+
+ ## šŸ“ Project Structure
+
+ ```
+ agenttrace/
+ ā”œā”€ā”€ auto.py           # Zero-config entry point (import this)
+ ā”œā”€ā”€ exporter.py       # OTel SpanExporter → SQLite
+ ā”œā”€ā”€ judge.py          # Smart Auto-Judge engine (5 eval types)
+ ā”œā”€ā”€ models.py         # Pydantic data models
+ ā”œā”€ā”€ storage.py        # SQLite with WAL mode
+ ā”œā”€ā”€ server.py         # FastAPI dashboard server
+ ā”œā”€ā”€ decorators.py     # @track_tool, @track_agent
+ ā”œā”€ā”€ utils.py          # Payload truncation
+ ā”œā”€ā”€ integrations/
+ │   ā”œā”€ā”€ langchain.py  # LangChain callback adapter
+ │   └── crewai.py     # CrewAI callback adapter
+ └── static/           # Pre-compiled React dashboard
+ ```
+
+ ---
+
+ ## āš™ļø Configuration
+
+ | Environment Variable | Default | Description |
+ |---|---|---|
+ | `GROQ_API_KEY` | — | Required for LLM-powered judge evaluations |
+ | `AGENTTRACE_DB_PATH` | `.agenttrace.db` | Custom database file path |
+ | `AGENTTRACE_FULL_PAYLOAD` | `0` | Set to `1` to disable payload truncation |
+ | `AGENTTRACE_MAX_CONTENT` | `500` | Max characters before truncation |
+
+ ---
+
+ ## šŸ¤ Contributing
+
+ We welcome contributions! Here's how to set up the dev environment:
+
+ ```bash
+ git clone https://github.com/CURSED-ME/AgentTrace.git
+ cd AgentTrace
+ pip install -e ".[all]"
+
+ # Frontend development
+ cd ui
+ npm install
+ npm run dev     # Dev server with hot reload
+ npm run build   # Compile to agenttrace/static/
+ ```
+
+ See [.env.example](.env.example) for required environment variables.
+
+ ---
+
+ ## šŸ“„ License
+
+ MIT License — see [LICENSE](LICENSE) for details.
+
+ ---
+
+ <div align="center">
+
+ **Built with ā¤ļø for the agent builder community.**
+
+ *If AgentTrace helped you debug an agent, give us a ⭐ on GitHub!*
+
+ </div>
@@ -0,0 +1,4 @@
+ from .decorators import track_tool, track_agent
+
+ __version__ = "0.1.1"
+ __all__ = ["track_tool", "track_agent"]
@@ -0,0 +1,104 @@
+ import atexit
+ import threading
+ import time
+ import sys
+ import traceback
+ from .models import TraceStep, StepMetrics, StepEvaluation
+ from .storage import add_step
+
+ _original_excepthook = sys.excepthook
+
+
+ def crash_handler(exc_type, exc_value, exc_traceback):
+     err_msg = "".join(traceback.format_exception(exc_type, exc_value, exc_traceback))
+
+     # Add a special trace step for the crash
+     add_step(
+         TraceStep(
+             type="server_crash",
+             name="Unhandled Exception",
+             inputs={
+                 "type": exc_type.__name__
+                 if hasattr(exc_type, "__name__")
+                 else str(exc_type),
+                 "value": str(exc_value),
+             },
+             outputs={"traceback": err_msg},
+             metrics=StepMetrics(),
+             evaluation=StepEvaluation(
+                 status="error", reasoning="Script terminated unexpectedly."
+             ),
+         )
+     )
+
+     # Call the original hook
+     _original_excepthook(exc_type, exc_value, exc_traceback)
+
+
+ def _run_server():
+     import uvicorn
+     from .server import app
+
+     print(
+         "\n✨ AgentTrace: Run complete! Dashboard opening at http://localhost:8000. (Press Ctrl+C to close server and return to terminal)\n"
+     )
+
+     def open_browser():
+         time.sleep(1.5)
+         import webbrowser
+
+         try:
+             webbrowser.open("http://localhost:8000")
+         except Exception:
+             pass
+
+     t = threading.Thread(target=open_browser)
+     t.daemon = True
+     t.start()
+
+     try:
+         uvicorn.run(app, host="127.0.0.1", port=8000, log_level="warning")
+     except OSError:
+         print(
+             "AgentTrace: Dashboard is already running on port 8000 in another process."
+         )
+     except Exception as e:
+         print(f"AgentTrace: Dashboard failed to start: {e}")
+
+
+ def init():
+     # Set up OpenTelemetry
+     try:
+         from opentelemetry import trace
+         from opentelemetry.sdk.trace import TracerProvider
+         from opentelemetry.sdk.trace.export import SimpleSpanProcessor
+         from .exporter import AgentTraceExporter
+
+         provider = TracerProvider()
+         provider.add_span_processor(SimpleSpanProcessor(AgentTraceExporter()))
+         trace.set_tracer_provider(provider)
+
+         # Auto-instrument OpenAI (also covers Groq, whose client mirrors the OpenAI SDK)
+         try:
+             from opentelemetry.instrumentation.openai import OpenAIInstrumentor
+
+             OpenAIInstrumentor().instrument()
+         except ImportError:
+             pass  # openai extra not installed, skip native OpenAI instrumentation
+
+         # Auto-register external frameworks
+         from .integrations import auto_register
+
+         auto_register()
+     except ImportError as e:
+         import warnings
+
+         warnings.warn(
+             f"AgentTrace: OTel dependencies missing ({e}). Native LLM tracing disabled."
+         )
+
+     sys.excepthook = crash_handler
+     atexit.register(_run_server)
+
+
+ init()