ai-agent-inspector 1.0.0__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- agent_inspector/__init__.py +148 -0
- agent_inspector/cli.py +532 -0
- agent_inspector/ui/static/app.css +630 -0
- agent_inspector/ui/static/app.js +379 -0
- agent_inspector/ui/templates/index.html +441 -0
- ai_agent_inspector-1.0.0.dist-info/METADATA +1094 -0
- ai_agent_inspector-1.0.0.dist-info/RECORD +11 -0
- ai_agent_inspector-1.0.0.dist-info/WHEEL +5 -0
- ai_agent_inspector-1.0.0.dist-info/entry_points.txt +2 -0
- ai_agent_inspector-1.0.0.dist-info/licenses/LICENSE +21 -0
- ai_agent_inspector-1.0.0.dist-info/top_level.txt +1 -0
|
@@ -0,0 +1,1094 @@
|
|
|
1
|
+
Metadata-Version: 2.4
|
|
2
|
+
Name: ai-agent-inspector
|
|
3
|
+
Version: 1.0.0
|
|
4
|
+
Summary: Framework-agnostic observability for AI agents
|
|
5
|
+
Author-email: Agent Inspector Team <team@agentinspector.dev>
|
|
6
|
+
License: MIT
|
|
7
|
+
Project-URL: Homepage, https://github.com/koladilip/ai-agent-inspector
|
|
8
|
+
Project-URL: Documentation, https://github.com/koladilip/ai-agent-inspector#readme
|
|
9
|
+
Project-URL: Repository, https://github.com/koladilip/ai-agent-inspector
|
|
10
|
+
Project-URL: Bug Tracker, https://github.com/koladilip/ai-agent-inspector/issues
|
|
11
|
+
Keywords: observability,tracing,ai,agents,langchain,llm,monitoring,debugging,instrumentation
|
|
12
|
+
Classifier: Development Status :: 4 - Beta
|
|
13
|
+
Classifier: Intended Audience :: Developers
|
|
14
|
+
Classifier: Topic :: Software Development :: Libraries :: Python Modules
|
|
15
|
+
Classifier: License :: OSI Approved :: MIT License
|
|
16
|
+
Classifier: Programming Language :: Python :: 3
|
|
17
|
+
Classifier: Programming Language :: Python :: 3.9
|
|
18
|
+
Classifier: Programming Language :: Python :: 3.10
|
|
19
|
+
Classifier: Programming Language :: Python :: 3.11
|
|
20
|
+
Classifier: Programming Language :: Python :: 3.12
|
|
21
|
+
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
|
|
22
|
+
Requires-Python: >=3.9
|
|
23
|
+
Description-Content-Type: text/markdown
|
|
24
|
+
License-File: LICENSE
|
|
25
|
+
Requires-Dist: fastapi>=0.104.0
|
|
26
|
+
Requires-Dist: uvicorn[standard]>=0.24.0
|
|
27
|
+
Requires-Dist: cryptography>=41.0.0
|
|
28
|
+
Requires-Dist: jinja2>=3.1.0
|
|
29
|
+
Requires-Dist: python-dotenv>=1.0.0
|
|
30
|
+
Provides-Extra: langchain
|
|
31
|
+
Requires-Dist: langchain>=0.1.0; extra == "langchain"
|
|
32
|
+
Provides-Extra: otel
|
|
33
|
+
Requires-Dist: opentelemetry-api>=1.20.0; extra == "otel"
|
|
34
|
+
Requires-Dist: opentelemetry-sdk>=1.20.0; extra == "otel"
|
|
35
|
+
Requires-Dist: opentelemetry-exporter-otlp-proto-http>=1.20.0; extra == "otel"
|
|
36
|
+
Provides-Extra: dev
|
|
37
|
+
Requires-Dist: pytest>=7.4.0; extra == "dev"
|
|
38
|
+
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
|
|
39
|
+
Requires-Dist: pytest-cov>=4.1.0; extra == "dev"
|
|
40
|
+
Requires-Dist: httpx>=0.25.0; extra == "dev"
|
|
41
|
+
Requires-Dist: black>=23.0.0; extra == "dev"
|
|
42
|
+
Requires-Dist: flake8>=6.0.0; extra == "dev"
|
|
43
|
+
Requires-Dist: mypy>=1.7.0; extra == "dev"
|
|
44
|
+
Provides-Extra: all
|
|
45
|
+
Requires-Dist: ai-agent-inspector[dev,langchain]; extra == "all"
|
|
46
|
+
Dynamic: license-file
|
|
47
|
+
|
|
48
|
+
<div align="center">
|
|
49
|
+
|
|
50
|
+
# 🔍 Agent Inspector
|
|
51
|
+
|
|
52
|
+
**Framework-agnostic observability for AI agents**
|
|
53
|
+
|
|
54
|
+
A lightweight, non-blocking tracing system for monitoring and debugging AI agent reasoning, tool usage, and execution flow.
|
|
55
|
+
|
|
56
|
+
[Python 3.9+](https://www.python.org/downloads/)
|
|
57
|
+
[MIT License](LICENSE)
|
|
58
|
+
|
|
59
|
+
|
|
60
|
+
</div>
|
|
61
|
+
|
|
62
|
+
---
|
|
63
|
+
|
|
64
|
+
## ⚡ Get started (60 seconds)
|
|
65
|
+
|
|
66
|
+
```bash
|
|
67
|
+
pip install ai-agent-inspector
|
|
68
|
+
# or from source: git clone <repo> && cd ai-agent-inspector && pip install -e .
|
|
69
|
+
```
|
|
70
|
+
|
|
71
|
+
```python
|
|
72
|
+
from agent_inspector import trace
|
|
73
|
+
|
|
74
|
+
with trace.run("my_first_trace"):
|
|
75
|
+
trace.llm(model="gpt-4", prompt="Hi", response="Hello!")
|
|
76
|
+
trace.final(answer="Done.")
|
|
77
|
+
```
|
|
78
|
+
|
|
79
|
+
```bash
|
|
80
|
+
agent-inspector server # or: python -m agent_inspector.cli server
|
|
81
|
+
```
|
|
82
|
+
|
|
83
|
+
Open **http://localhost:8000/ui/** to see the run. For configuration, examples, and API details, read on.
|
|
84
|
+
|
|
85
|
+
---
|
|
86
|
+
|
|
87
|
+
## 📋 Table of Contents
|
|
88
|
+
|
|
89
|
+
- [Overview](#overview)
|
|
90
|
+
- [Features](#features)
|
|
91
|
+
- [Installation](#installation)
|
|
92
|
+
- [Quick Start](#quick-start)
|
|
93
|
+
- [Architecture](#architecture)
|
|
94
|
+
- [Configuration](#configuration)
|
|
95
|
+
- [Usage Examples](#usage-examples)
|
|
96
|
+
- [API Documentation](#api-documentation)
|
|
97
|
+
- [Framework Adapters](#framework-adapters)
|
|
98
|
+
- [Development](#development)
|
|
99
|
+
- [Contributing](#contributing)
|
|
100
|
+
- [License](#license)
|
|
101
|
+
|
|
102
|
+
---
|
|
103
|
+
|
|
104
|
+
## Overview
|
|
105
|
+
|
|
106
|
+
**Agent Inspector** answers the question: *"Why did my agent behave this way?"*
|
|
107
|
+
|
|
108
|
+
Unlike traditional logging or tracing tools, Agent Inspector is designed specifically for AI agents with:
|
|
109
|
+
|
|
110
|
+
- **Agent-first semantics** - Tracks reasoning, decisions, and tool orchestration
|
|
111
|
+
- **Framework agnostic** - Works with LangChain, AutoGen, custom agents, and more
|
|
112
|
+
- **Non-blocking** - Tracing stays off the agent's hot path (<1ms overhead)
|
|
113
|
+
- **Secure by default** - Automatic redaction, compression, and encryption
|
|
114
|
+
- **Local-first** - No SaaS required, all data stays on your machine
|
|
115
|
+
- **Simple UI** - Visual timeline for understanding agent behavior
|
|
116
|
+
|
|
117
|
+
### What Makes It Different
|
|
118
|
+
|
|
119
|
+
Traditional tools model systems as function calls and spans. Agent Inspector models:
|
|
120
|
+
- 🤖 **LLM decisions** - Why did the agent choose this tool?
|
|
121
|
+
- 🔧 **Tool execution** - What arguments were passed? What was the result?
|
|
122
|
+
- 📖 **Memory operations** - What did the agent read/write?
|
|
123
|
+
- ❌ **Failure modes** - Where did the agent get stuck or fail?
|
|
124
|
+
- ✅ **Final outcomes** - What was the final answer?
|
|
125
|
+
|
|
126
|
+
---
|
|
127
|
+
|
|
128
|
+
## Features
|
|
129
|
+
|
|
130
|
+
### Core tracing SDK
|
|
131
|
+
- **Context manager API** – `with trace.run("run_name"):` wraps agent execution; all events are tied to that run.
|
|
132
|
+
- **Event emission** – `trace.llm()`, `trace.tool()`, `trace.memory_read()`, `trace.memory_write()`, `trace.error()`, `trace.final()`; optional `trace.emit(event)` for custom event types (`EventType.CUSTOM`).
|
|
133
|
+
- **Nested runs** – Multiple `trace.run()` blocks can be nested (e.g. orchestrator + specialist); parent/child is tracked via `parent_event_id`.
|
|
134
|
+
- **Active context** – `trace.get_active_context()` returns the current run’s context; works in both **sync and async** (asyncio) via `contextvars`; see the async sketch after this list.
|
|
135
|
+
- **Global trace** – `get_trace()` / `set_trace(trace)` for default instance or testing; module-level `trace` proxy.
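
Because the active run is tracked with `contextvars`, events emitted from async code inside a run land in the same trace. A minimal sketch (the coroutine and its values are illustrative, not part of the package):

```python
import asyncio

from agent_inspector import trace


async def answer(question: str) -> None:
    # The run opened below is visible here via contextvars.
    context = trace.get_active_context()
    if context:
        context.llm(model="gpt-4", prompt=question, response="(model output)")


with trace.run("async_demo"):
    asyncio.run(answer("What is 2 + 2?"))
    trace.final(answer="4")
```
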
|
|
136
|
+
|
|
137
|
+
### Sampling and backpressure
|
|
138
|
+
- **Sampling** – `TraceConfig.sample_rate` (0.0–1.0) and `only_on_error`; deterministic hash-based default; optional **pluggable `Sampler`** via `Trace(sampler=...)`.
|
|
139
|
+
- **Non-blocking queue** – Events are queued with `put_nowait`; a background worker batches and flushes to the exporter so the hot path never blocks.
|
|
140
|
+
- **Drain on shutdown** – On `shutdown()`, the worker drains the queue and flushes remaining events so nothing is dropped at exit.
|
|
141
|
+
- **Critical-event backpressure** – Optional `TraceConfig.block_on_run_end` and `run_end_block_timeout_ms`; when set, `run_end` is queued with a blocking put (up to timeout) so it is not dropped when the queue is full.
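
A minimal configuration sketch combining these options; the field names follow the descriptions above and the values are illustrative:

```python
from agent_inspector import TraceConfig, set_config

config = TraceConfig(
    sample_rate=0.1,               # keep ~10% of runs
    only_on_error=False,           # True would keep only runs that record an error
    queue_size=2000,               # bounded, non-blocking event queue
    batch_size=100,                # events flushed per exporter call
    block_on_run_end=True,         # don't drop run_end when the queue is full
    run_end_block_timeout_ms=500,  # wait at most 500 ms for space
)
set_config(config)
```
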
|
|
142
|
+
|
|
143
|
+
### Extensibility
|
|
144
|
+
- **Exporter protocol** – Implement `Exporter` (initialize, export_batch, shutdown) and pass to `Trace(exporter=...)`; default is `StorageExporter` (SQLite).
|
|
145
|
+
- **CompositeExporter** – Fan-out to multiple exporters: `Trace(exporter=CompositeExporter([db_exporter, http_exporter]))`.
|
|
146
|
+
- **Sampler protocol** – Implement `Sampler.should_sample(run_id, run_name, config)` and pass to `Trace(sampler=...)` for custom sampling (e.g. by user, tenant); a sketch follows this list.
|
|
147
|
+
- **Custom events** – Use `EventType.CUSTOM` and `TraceContext.emit(event)` or `Trace.emit(event)` for custom `BaseEvent` subclasses.
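
A sketch of a custom sampler using the protocol above; the tenant-naming rule is purely illustrative:

```python
from agent_inspector import Trace


class TenantSampler:
    """Always trace 'pro_' tenants, sample ~10% of everything else (illustrative)."""

    def should_sample(self, run_id, run_name, config) -> bool:
        if run_name.startswith("pro_"):
            return True
        return hash(run_id) % 10 == 0


tracer = Trace(sampler=TenantSampler())

with tracer.run("pro_tenant_checkout"):
    tracer.final(answer="always traced")
```
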
|
|
148
|
+
|
|
149
|
+
### Data pipeline
|
|
150
|
+
- **Redaction** – Configurable `redact_keys` and `redact_patterns`; applied before serialization.
|
|
151
|
+
- **Serialization** – Compact JSON for storage.
|
|
152
|
+
- **Compression** – Optional gzip (configurable level) before storage.
|
|
153
|
+
- **Encryption** – Optional Fernet symmetric encryption at rest (`encryption_enabled`, `encryption_key`).
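
Encryption at rest expects a Fernet key. A key-generation sketch (whether `encryption_key` takes the key as text or bytes is an assumption; adjust to the actual config):

```python
from cryptography.fernet import Fernet

from agent_inspector import TraceConfig, set_config

# Generate once and keep it safe, e.g. in the TRACE_ENCRYPTION_KEY env var.
key = Fernet.generate_key().decode()

set_config(TraceConfig(encryption_enabled=True, encryption_key=key))
```
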
|
|
154
|
+
|
|
155
|
+
### Storage
|
|
156
|
+
- **SQLite** – WAL mode for concurrent access; runs and steps tables; indexes on run_id and timestamp.
|
|
157
|
+
- **Pruning** – CLI `prune --retention-days N` and optional `--vacuum`; API/DB support for retention.
|
|
158
|
+
- **Backup** – CLI `backup /path/to/backup.db` for full DB copy.
|
|
159
|
+
- **Export to JSON** – **API** `GET /v1/runs/{run_id}/export` returns run metadata + timeline with decoded event data; **CLI** `agent-inspector export <run_id> [--output file.json]` and `agent-inspector export --all [--limit N] [--output file.json]` for backup or migration.
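
For scripted backups, the export endpoint can also be fetched directly. A standard-library sketch (run id, host, and output path are placeholders; the server must be running):

```python
import json
import urllib.request

run_id = "RUN_ID"  # a run id from GET /v1/runs
url = f"http://localhost:8000/v1/runs/{run_id}/export"

with urllib.request.urlopen(url) as resp:
    exported = json.load(resp)

with open(f"{run_id}.json", "w") as f:
    json.dump(exported, f, indent=2)
```
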
|
|
160
|
+
|
|
161
|
+
### API
|
|
162
|
+
- **FastAPI** – REST API with OpenAPI docs at `/docs` and `/redoc`.
|
|
163
|
+
- **Endpoints** – Health, list runs (with filters), get run, get run timeline, get run steps, get step data, **export run**, stats; optional API key auth and CORS.
|
|
164
|
+
- **List runs filters** – `limit`, `offset`, `status`, `user_id`, `session_id`, `search`, **`started_after`**, **`started_before`** (timestamps in ms since epoch) for date-range queries.
|
|
165
|
+
|
|
166
|
+
### UI
|
|
167
|
+
- **Web interface** – Three-panel layout: run list (filters, search), timeline, detail view; dark mode; real-time updates for running runs; served at `/ui/`.
|
|
168
|
+
|
|
169
|
+
### CLI
|
|
170
|
+
- **Commands** – `init`, `server`, `stats`, `prune`, `vacuum`, `backup`, **`export`** (single run or `--all`), `config`, `--version`.
|
|
171
|
+
- **Profiles** – `config --profile production|development|debug`; env `TRACE_PROFILE`.
|
|
172
|
+
|
|
173
|
+
### Optional integrations
|
|
174
|
+
- **LangChain** – `pip install ai-agent-inspector[langchain]`; `enable_langchain()` for automatic tracing of LLM and tool calls.
|
|
175
|
+
- **OpenTelemetry OTLP** – `pip install ai-agent-inspector[otel]`; `OTLPExporter(endpoint=...)` sends events as OTLP spans to Jaeger, Tempo, Grafana, etc.
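
A wiring sketch for the OTLP path; the import location and the endpoint URL (a local OTLP/HTTP collector) are assumptions:

```python
# OTLPExporter's import location is an assumption; check the package for the exact path.
from agent_inspector import OTLPExporter, Trace

tracer = Trace(exporter=OTLPExporter(endpoint="http://localhost:4318/v1/traces"))

with tracer.run("otlp_demo"):
    tracer.llm(model="gpt-4", prompt="ping", response="pong")
    tracer.final(answer="pong")
```
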
|
|
176
|
+
|
|
177
|
+
### Configuration
|
|
178
|
+
- **Presets** – Production, development, debug (sample rate, compression, encryption, log level).
|
|
179
|
+
- **Environment variables** – All main options (sampling, queue, redaction, encryption, DB path, API, UI, logging, **block_on_run_end**, **run_end_block_timeout**) can be set via `TRACE_*` env vars.
|
|
180
|
+
- **Code** – `TraceConfig` in code; `set_config(config)` for global default.
|
|
181
|
+
|
|
182
|
+
---
|
|
183
|
+
|
|
184
|
+
## Installation
|
|
185
|
+
|
|
186
|
+
### Requirements
|
|
187
|
+
|
|
188
|
+
- Python 3.9 or higher
|
|
189
|
+
- pip or another package manager
|
|
190
|
+
|
|
191
|
+
### Install from PyPI
|
|
192
|
+
|
|
193
|
+
The PyPI package is **`ai-agent-inspector`** (distinct from the existing `agent-inspector` project on PyPI). After install, the CLI is still `agent-inspector` and imports are `from agent_inspector import ...`.
|
|
194
|
+
|
|
195
|
+
```bash
|
|
196
|
+
pip install ai-agent-inspector
|
|
197
|
+
```
|
|
198
|
+
|
|
199
|
+
### Install from Source
|
|
200
|
+
|
|
201
|
+
```bash
|
|
202
|
+
git clone https://github.com/koladilip/ai-agent-inspector.git
|
|
203
|
+
cd ai-agent-inspector
|
|
204
|
+
pip install -e .
|
|
205
|
+
```
|
|
206
|
+
|
|
207
|
+
### Optional Dependencies
|
|
208
|
+
|
|
209
|
+
```bash
|
|
210
|
+
# For LangChain adapter
|
|
211
|
+
pip install "ai-agent-inspector[langchain]"
|
|
212
|
+
|
|
213
|
+
# For development
|
|
214
|
+
pip install "ai-agent-inspector[dev]"
|
|
215
|
+
```
|
|
216
|
+
|
|
217
|
+
---
|
|
218
|
+
|
|
219
|
+
## Quick Start
|
|
220
|
+
|
|
221
|
+
### 1. Initialize Agent Inspector
|
|
222
|
+
|
|
223
|
+
```bash
|
|
224
|
+
agent-inspector init
|
|
225
|
+
```
|
|
226
|
+
|
|
227
|
+
This creates a default configuration and initializes the SQLite database.
|
|
228
|
+
|
|
229
|
+
### 2. Start Tracing in Your Code
|
|
230
|
+
|
|
231
|
+
```python
|
|
232
|
+
from agent_inspector import trace
|
|
233
|
+
|
|
234
|
+
# Wrap your agent execution in a trace context
|
|
235
|
+
with trace.run("my_agent"):
|
|
236
|
+
# Your agent code here
|
|
237
|
+
trace.llm(
|
|
238
|
+
model="gpt-4",
|
|
239
|
+
prompt="What is the capital of France?",
|
|
240
|
+
response="The capital of France is Paris."
|
|
241
|
+
)
|
|
242
|
+
|
|
243
|
+
trace.tool(
|
|
244
|
+
tool_name="search",
|
|
245
|
+
tool_args={"query": "capital of France"},
|
|
246
|
+
tool_result="Paris"
|
|
247
|
+
)
|
|
248
|
+
|
|
249
|
+
trace.final(answer="The capital of France is Paris.")
|
|
250
|
+
```
|
|
251
|
+
|
|
252
|
+
### 3. Start the API Server
|
|
253
|
+
|
|
254
|
+
```bash
|
|
255
|
+
agent-inspector server
|
|
256
|
+
```
|
|
257
|
+
|
|
258
|
+
### 4. View Traces in the UI
|
|
259
|
+
|
|
260
|
+
Open your browser to: **http://localhost:8000/**
|
|
261
|
+
The root URL redirects to **/ui/**.
|
|
262
|
+
|
|
263
|
+
---
|
|
264
|
+
|
|
265
|
+
## Architecture
|
|
266
|
+
|
|
267
|
+
Agent Inspector is built around explicit interfaces so each layer can evolve independently.
|
|
268
|
+
|
|
269
|
+
### High-level system view
|
|
270
|
+
|
|
271
|
+
```mermaid
|
|
272
|
+
flowchart LR
|
|
273
|
+
subgraph App["Application"]
|
|
274
|
+
Agent[Agent / LLM code]
|
|
275
|
+
Adapter[Framework Adapters]
|
|
276
|
+
end
|
|
277
|
+
|
|
278
|
+
subgraph SDK["Agent Inspector SDK"]
|
|
279
|
+
Trace[Trace]
|
|
280
|
+
Queue[EventQueue]
|
|
281
|
+
Worker[Background Worker]
|
|
282
|
+
end
|
|
283
|
+
|
|
284
|
+
subgraph Export["Exporters"]
|
|
285
|
+
StorageExp[StorageExporter]
|
|
286
|
+
OTLPExp[OTLPExporter]
|
|
287
|
+
Composite[CompositeExporter]
|
|
288
|
+
end
|
|
289
|
+
|
|
290
|
+
subgraph Backends["Backends"]
|
|
291
|
+
SQLite[(SQLite)]
|
|
292
|
+
OTLP[OTLP / Jaeger]
|
|
293
|
+
end
|
|
294
|
+
|
|
295
|
+
subgraph Read["Query path"]
|
|
296
|
+
API[FastAPI]
|
|
297
|
+
UI[Web UI]
|
|
298
|
+
end
|
|
299
|
+
|
|
300
|
+
Agent --> Trace
|
|
301
|
+
Adapter --> Trace
|
|
302
|
+
Trace --> Queue
|
|
303
|
+
Queue --> Worker
|
|
304
|
+
Worker --> StorageExp
|
|
305
|
+
Worker --> OTLPExp
|
|
306
|
+
Worker --> Composite
|
|
307
|
+
StorageExp --> SQLite
|
|
308
|
+
OTLPExp --> OTLP
|
|
309
|
+
Composite --> StorageExp
|
|
310
|
+
Composite --> OTLPExp
|
|
311
|
+
SQLite --> API
|
|
312
|
+
API --> UI
|
|
313
|
+
```
|
|
314
|
+
|
|
315
|
+
### Component layers
|
|
316
|
+
|
|
317
|
+
```mermaid
|
|
318
|
+
flowchart TB
|
|
319
|
+
subgraph Adapters["Adapters (optional)"]
|
|
320
|
+
LangChain[LangChain]
|
|
321
|
+
Custom[Custom adapters]
|
|
322
|
+
end
|
|
323
|
+
|
|
324
|
+
subgraph Core["Core SDK"]
|
|
325
|
+
TraceC[Trace]
|
|
326
|
+
Events[Events]
|
|
327
|
+
Config[TraceConfig]
|
|
328
|
+
QueueC[EventQueue]
|
|
329
|
+
Sampler[Sampler]
|
|
330
|
+
ExporterProto[Exporter protocol]
|
|
331
|
+
end
|
|
332
|
+
|
|
333
|
+
subgraph Processing["Processing"]
|
|
334
|
+
Pipeline[Pipeline]
|
|
335
|
+
Redact[Redaction]
|
|
336
|
+
Serialize[Serialization]
|
|
337
|
+
Compress[Compression]
|
|
338
|
+
Encrypt[Encryption]
|
|
339
|
+
end
|
|
340
|
+
|
|
341
|
+
subgraph StorageLayer["Storage"]
|
|
342
|
+
StorageExpC[StorageExporter]
|
|
343
|
+
DB[(Database)]
|
|
344
|
+
end
|
|
345
|
+
|
|
346
|
+
subgraph OptionalExport["Optional exporters"]
|
|
347
|
+
OTLPExpC[OTLPExporter]
|
|
348
|
+
end
|
|
349
|
+
|
|
350
|
+
subgraph Serve["Serve"]
|
|
351
|
+
APIServer[API]
|
|
352
|
+
UIServer[UI]
|
|
353
|
+
ReadStore[ReadStore]
|
|
354
|
+
end
|
|
355
|
+
|
|
356
|
+
LangChain --> TraceC
|
|
357
|
+
Custom --> TraceC
|
|
358
|
+
TraceC --> Events
|
|
359
|
+
TraceC --> QueueC
|
|
360
|
+
TraceC --> Sampler
|
|
361
|
+
TraceC --> ExporterProto
|
|
362
|
+
QueueC --> ExporterProto
|
|
363
|
+
ExporterProto --> StorageExpC
|
|
364
|
+
ExporterProto --> OTLPExpC
|
|
365
|
+
StorageExpC --> Pipeline
|
|
366
|
+
Pipeline --> Redact --> Serialize --> Compress --> Encrypt
|
|
367
|
+
Encrypt --> DB
|
|
368
|
+
DB --> ReadStore
|
|
369
|
+
ReadStore --> APIServer
|
|
370
|
+
APIServer --> UIServer
|
|
371
|
+
```
|
|
372
|
+
|
|
373
|
+
### Event flow (sequence)
|
|
374
|
+
|
|
375
|
+
From application code to storage: events are emitted synchronously into a queue, then processed asynchronously by a worker that batches and exports.
|
|
376
|
+
|
|
377
|
+
```mermaid
|
|
378
|
+
sequenceDiagram
|
|
379
|
+
participant App as Application
|
|
380
|
+
participant Trace as Trace
|
|
381
|
+
participant Ctx as TraceContext
|
|
382
|
+
participant Queue as EventQueue
|
|
383
|
+
participant Worker as Worker thread
|
|
384
|
+
participant Exporter as Exporter
|
|
385
|
+
participant Pipeline as Pipeline
|
|
386
|
+
participant DB as SQLite
|
|
387
|
+
|
|
388
|
+
App->>Trace: trace.run("my_run")
|
|
389
|
+
Trace->>Ctx: create TraceContext
|
|
390
|
+
Trace->>App: enter context
|
|
391
|
+
|
|
392
|
+
App->>Trace: trace.llm(...) / trace.tool(...)
|
|
393
|
+
Trace->>Ctx: emit event
|
|
394
|
+
Ctx->>Queue: put(event) [non-blocking]
|
|
395
|
+
Note over Queue: Event queued, agent continues
|
|
396
|
+
|
|
397
|
+
loop Background worker
|
|
398
|
+
Worker->>Queue: get batch (size or timeout)
|
|
399
|
+
Queue-->>Worker: events[]
|
|
400
|
+
Worker->>Exporter: export_batch(events)
|
|
401
|
+
Exporter->>Pipeline: process each event
|
|
402
|
+
Pipeline->>Pipeline: redact → serialize → compress → encrypt
|
|
403
|
+
Pipeline->>DB: insert run / steps
|
|
404
|
+
end
|
|
405
|
+
|
|
406
|
+
App->>Trace: exit context
|
|
407
|
+
Trace->>Ctx: run_end
|
|
408
|
+
Ctx->>Queue: put(run_end)
|
|
409
|
+
```
|
|
410
|
+
|
|
411
|
+
### Data pipeline (storage path)
|
|
412
|
+
|
|
413
|
+
Events written to SQLite pass through the processing pipeline before persistence.
|
|
414
|
+
|
|
415
|
+
```mermaid
|
|
416
|
+
flowchart LR
|
|
417
|
+
A[Raw event] --> B[Redaction]
|
|
418
|
+
B --> C[JSON serialize]
|
|
419
|
+
C --> D{Compression?}
|
|
420
|
+
D -->|yes| E[Gzip]
|
|
421
|
+
D -->|no| F[Encryption?]
|
|
422
|
+
E --> F
|
|
423
|
+
F -->|yes| G[Fernet encrypt]
|
|
424
|
+
F -->|no| H[(SQLite)]
|
|
425
|
+
G --> H
|
|
426
|
+
```
|
|
427
|
+
|
|
428
|
+
### SDK Core
|
|
429
|
+
- `Trace` provides the context manager API (`trace.run(...)`) and event emission.
|
|
430
|
+
- Events are immutable dictionaries serialized by the processing pipeline.
|
|
431
|
+
- Events flow into an `Exporter` which handles delivery.
|
|
432
|
+
|
|
433
|
+
### Exporters
|
|
434
|
+
- The SDK depends on the `Exporter` interface.
|
|
435
|
+
- `StorageExporter` implements it using the database + pipeline.
|
|
436
|
+
- Alternative exporters can be plugged in without changing the SDK.
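
A sketch of an alternative exporter that appends batches to a JSONL file, following the `Exporter` methods listed under Features (`initialize`, `export_batch`, `shutdown`); the shape of the event objects is an assumption:

```python
import json

from agent_inspector import Trace


class JsonlExporter:
    """Append every exported batch to a local JSONL file (illustrative)."""

    def __init__(self, path: str = "events.jsonl"):
        self.path = path
        self._fh = None

    def initialize(self) -> None:
        self._fh = open(self.path, "a")

    def export_batch(self, events) -> None:
        for event in events:
            # Assumes events can be dumped as dicts; adapt to the real event type.
            payload = getattr(event, "__dict__", event)
            self._fh.write(json.dumps(payload, default=str) + "\n")
        self._fh.flush()

    def shutdown(self) -> None:
        if self._fh:
            self._fh.close()


tracer = Trace(exporter=JsonlExporter())
```
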
|
|
437
|
+
|
|
438
|
+
### Storage
|
|
439
|
+
- SQLite with WAL mode for concurrent access.
|
|
440
|
+
- Runs and steps are stored separately for efficient queries.
|
|
441
|
+
|
|
442
|
+
### API & UI
|
|
443
|
+
- API depends on a `ReadStore` interface to query runs and steps.
|
|
444
|
+
- UI is served as static assets under `/ui/static`.
|
|
445
|
+
|
|
446
|
+
---
|
|
447
|
+
|
|
448
|
+
## Configuration
|
|
449
|
+
|
|
450
|
+
### Configuration Presets
|
|
451
|
+
|
|
452
|
+
Agent Inspector comes with three configuration presets:
|
|
453
|
+
|
|
454
|
+
#### Production
|
|
455
|
+
```bash
|
|
456
|
+
agent-inspector config --profile production
|
|
457
|
+
```
|
|
458
|
+
- Sample rate: 1%
|
|
459
|
+
- Compression: Enabled
|
|
460
|
+
- Encryption: Enabled
|
|
461
|
+
- Log level: WARNING
|
|
462
|
+
|
|
463
|
+
#### Development
|
|
464
|
+
```bash
|
|
465
|
+
agent-inspector config --profile development
|
|
466
|
+
```
|
|
467
|
+
- Sample rate: 50%
|
|
468
|
+
- Compression: Enabled
|
|
469
|
+
- Encryption: Disabled
|
|
470
|
+
- Log level: INFO
|
|
471
|
+
|
|
472
|
+
#### Debug
|
|
473
|
+
```bash
|
|
474
|
+
agent-inspector config --profile debug
|
|
475
|
+
```
|
|
476
|
+
- Sample rate: 100%
|
|
477
|
+
- Compression: Disabled
|
|
478
|
+
- Encryption: Disabled
|
|
479
|
+
- Log level: DEBUG
|
|
480
|
+
|
|
481
|
+
### Environment Variables
|
|
482
|
+
|
|
483
|
+
Configure Agent Inspector using environment variables:
|
|
484
|
+
|
|
485
|
+
```bash
|
|
486
|
+
# Presets
|
|
487
|
+
export TRACE_PROFILE=development
|
|
488
|
+
|
|
489
|
+
# Sampling
|
|
490
|
+
export TRACE_SAMPLE_RATE=0.5
|
|
491
|
+
export TRACE_ONLY_ON_ERROR=false
|
|
492
|
+
|
|
493
|
+
# Queue & Batch
|
|
494
|
+
export TRACE_QUEUE_SIZE=1000
|
|
495
|
+
export TRACE_BATCH_SIZE=50
|
|
496
|
+
export TRACE_BATCH_TIMEOUT=1000
|
|
497
|
+
|
|
498
|
+
# Redaction
|
|
499
|
+
export TRACE_REDACT_KEYS="password,api_key,token"
|
|
500
|
+
export TRACE_REDACT_PATTERNS="\\b\\d{3}-\\d{2}-\\d{4}\\b"
|
|
501
|
+
|
|
502
|
+
# Encryption
|
|
503
|
+
export TRACE_ENCRYPTION_ENABLED=true
|
|
504
|
+
export TRACE_ENCRYPTION_KEY=your-secret-key-here
|
|
505
|
+
|
|
506
|
+
# Storage
|
|
507
|
+
export TRACE_DB_PATH=agent_inspector.db
|
|
508
|
+
export TRACE_RETENTION_DAYS=30
|
|
509
|
+
|
|
510
|
+
# API
|
|
511
|
+
export TRACE_API_HOST=127.0.0.1
|
|
512
|
+
export TRACE_API_PORT=8000
|
|
513
|
+
export TRACE_API_KEY_REQUIRED=false
|
|
514
|
+
export TRACE_API_KEY=your-api-key
|
|
515
|
+
|
|
516
|
+
# UI
|
|
517
|
+
export TRACE_UI_ENABLED=true
|
|
518
|
+
export TRACE_UI_PATH=/ui
|
|
519
|
+
|
|
520
|
+
# Processing
|
|
521
|
+
export TRACE_COMPRESSION_ENABLED=true
|
|
522
|
+
export TRACE_COMPRESSION_LEVEL=6
|
|
523
|
+
|
|
524
|
+
# Logging
|
|
525
|
+
export TRACE_LOG_LEVEL=INFO
|
|
526
|
+
export TRACE_LOG_PATH=agent_inspector.log
|
|
527
|
+
```
|
|
528
|
+
|
|
529
|
+
### Custom Configuration
|
|
530
|
+
|
|
531
|
+
Create a custom configuration in code:
|
|
532
|
+
|
|
533
|
+
```python
|
|
534
|
+
from agent_inspector import TraceConfig, set_config
|
|
535
|
+
|
|
536
|
+
config = TraceConfig(
|
|
537
|
+
sample_rate=1.0, # Trace all runs
|
|
538
|
+
only_on_error=False,
|
|
539
|
+
redact_keys=["password", "api_key", "secret"],
|
|
540
|
+
redact_patterns=[
|
|
541
|
+
r"\b\d{3}-\d{2}-\d{4}\b", # SSN
|
|
542
|
+
r"\b\d{4}[-\s]?\d{4}[-\s]?\d{4}[-\s]?\d{4}\b", # Credit card
|
|
543
|
+
],
|
|
544
|
+
encryption_enabled=False,
|
|
545
|
+
compression_enabled=True,
|
|
546
|
+
compression_level=6,
|
|
547
|
+
queue_size=1000,
|
|
548
|
+
batch_size=50,
|
|
549
|
+
db_path="custom_inspector.db",
|
|
550
|
+
retention_days=30,
|
|
551
|
+
)
|
|
552
|
+
|
|
553
|
+
set_config(config)
|
|
554
|
+
```
|
|
555
|
+
|
|
556
|
+
---
|
|
557
|
+
|
|
558
|
+
## Usage Examples
|
|
559
|
+
|
|
560
|
+
### Basic Agent Tracing
|
|
561
|
+
|
|
562
|
+
```python
|
|
563
|
+
from agent_inspector import trace
|
|
564
|
+
|
|
565
|
+
def search_flights_agent(user_query):
|
|
566
|
+
with trace.run("flight_search", user_id="user123"):
|
|
567
|
+
# Agent decides which tool to use
|
|
568
|
+
trace.llm(
|
|
569
|
+
model="gpt-4",
|
|
570
|
+
prompt=f"User: {user_query}. Which tool should I use?",
|
|
571
|
+
response="Use the search_flights tool."
|
|
572
|
+
)
|
|
573
|
+
|
|
574
|
+
# Tool execution
|
|
575
|
+
trace.tool(
|
|
576
|
+
tool_name="search_flights",
|
|
577
|
+
tool_args={"query": user_query},
|
|
578
|
+
tool_result={
|
|
579
|
+
"flights": [
|
|
580
|
+
{"airline": "Delta", "price": "$350"},
|
|
581
|
+
{"airline": "United", "price": "$320"},
|
|
582
|
+
]
|
|
583
|
+
}
|
|
584
|
+
)
|
|
585
|
+
|
|
586
|
+
# Agent processes results
|
|
587
|
+
trace.llm(
|
|
588
|
+
model="gpt-4",
|
|
589
|
+
prompt=f"Found 2 flights. Which should I recommend?",
|
|
590
|
+
response="Recommend United for $320, it's cheaper."
|
|
591
|
+
)
|
|
592
|
+
|
|
593
|
+
# Final answer
|
|
594
|
+
trace.final(
|
|
595
|
+
answer="I recommend United Airlines for $320. It's the cheapest option."
|
|
596
|
+
)
|
|
597
|
+
|
|
598
|
+
# Run the agent
|
|
599
|
+
search_flights_agent("Find flights from SFO to JFK")
|
|
600
|
+
```
|
|
601
|
+
|
|
602
|
+
### Real Agent Example (OpenAI-compatible)
|
|
603
|
+
|
|
604
|
+
This example makes real LLM calls and runs multiple scenarios.
|
|
605
|
+
|
|
606
|
+
```bash
|
|
607
|
+
cp .env.example .env
|
|
608
|
+
```
|
|
609
|
+
|
|
610
|
+
Set these in `.env`:
|
|
611
|
+
- `OPENAI_BASE_URL`
|
|
612
|
+
- `OPENAI_API_KEY`
|
|
613
|
+
- `OPENAI_MODEL`
|
|
614
|
+
|
|
615
|
+
Run a single question:
|
|
616
|
+
```bash
|
|
617
|
+
python examples/real_agent.py "What is 13 * (7 + 5)?"
|
|
618
|
+
```
|
|
619
|
+
|
|
620
|
+
Run the full scenario suite:
|
|
621
|
+
```bash
|
|
622
|
+
python examples/real_agent.py --suite
|
|
623
|
+
```
|
|
624
|
+
|
|
625
|
+
### With LangChain (Automatic)
|
|
626
|
+
|
|
627
|
+
```python
|
|
628
|
+
from langchain.agents import initialize_agent, Tool, AgentType
|
|
629
|
+
from langchain.llms import OpenAI
|
|
630
|
+
from agent_inspector.adapters import enable_langchain
|
|
631
|
+
|
|
632
|
+
# Initialize your LangChain agent
|
|
633
|
+
llm = OpenAI(temperature=0)
|
|
634
|
+
tools = [
|
|
635
|
+
Tool(name="search", func=search_flights, description="Search for flights")
|
|
636
|
+
]
|
|
637
|
+
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
|
|
638
|
+
|
|
639
|
+
# Use with automatic tracing
|
|
640
|
+
with enable_langchain(run_name="langchain_flight_agent") as callbacks:
|
|
641
|
+
result = agent.run("Find flights from SFO to JFK")
|
|
642
|
+
print(result)
|
|
643
|
+
```
|
|
644
|
+
|
|
645
|
+
That's it! All LLM calls, tool calls, and agent actions are automatically traced.
|
|
646
|
+
|
|
647
|
+
### Error Handling
|
|
648
|
+
|
|
649
|
+
```python
|
|
650
|
+
from agent_inspector import trace
|
|
651
|
+
|
|
652
|
+
with trace.run("error_demo"):
|
|
653
|
+
try:
|
|
654
|
+
# Successful operation
|
|
655
|
+
trace.llm(
|
|
656
|
+
model="gpt-4",
|
|
657
|
+
prompt="What is 2+2?",
|
|
658
|
+
response="4"
|
|
659
|
+
)
|
|
660
|
+
|
|
661
|
+
# Tool that fails
|
|
662
|
+
trace.tool(
|
|
663
|
+
tool_name="broken_tool",
|
|
664
|
+
tool_args={"input": "test"},
|
|
665
|
+
tool_result="Error: Connection timeout"
|
|
666
|
+
)
|
|
667
|
+
|
|
668
|
+
# Log the error
|
|
669
|
+
trace.error(
|
|
670
|
+
error_type="ConnectionError",
|
|
671
|
+
error_message="Tool failed to connect",
|
|
672
|
+
critical=False
|
|
673
|
+
)
|
|
674
|
+
|
|
675
|
+
# Continue with fallback
|
|
676
|
+
trace.tool(
|
|
677
|
+
tool_name="fallback_tool",
|
|
678
|
+
tool_args={"input": "test"},
|
|
679
|
+
tool_result="success"
|
|
680
|
+
)
|
|
681
|
+
|
|
682
|
+
except Exception as e:
|
|
683
|
+
# Log unexpected errors
|
|
684
|
+
trace.error(
|
|
685
|
+
error_type=type(e).__name__,
|
|
686
|
+
error_message=str(e),
|
|
687
|
+
critical=True
|
|
688
|
+
)
|
|
689
|
+
raise
|
|
690
|
+
```
|
|
691
|
+
|
|
692
|
+
### Nested Agents
|
|
693
|
+
|
|
694
|
+
```python
|
|
695
|
+
from agent_inspector import trace
|
|
696
|
+
|
|
697
|
+
# Main agent
|
|
698
|
+
with trace.run("planning_agent", user_id="user123") as main_ctx:
|
|
699
|
+
trace.llm(
|
|
700
|
+
model="gpt-4",
|
|
701
|
+
prompt="User wants to book a flight. Should I delegate?",
|
|
702
|
+
response="Yes, delegate to booking agent."
|
|
703
|
+
)
|
|
704
|
+
|
|
705
|
+
# Sub-agent (nested)
|
|
706
|
+
with trace.run("booking_agent", session_id="booking_456"):
|
|
707
|
+
trace.tool(
|
|
708
|
+
tool_name="book_flight",
|
|
709
|
+
tool_args={"flight_id": "UA123"},
|
|
710
|
+
tool_result={"status": "confirmed", "confirmation": "CONF-12345"}
|
|
711
|
+
)
|
|
712
|
+
|
|
713
|
+
trace.final(answer="Flight booked successfully!")
|
|
714
|
+
|
|
715
|
+
# Main agent continues
|
|
716
|
+
trace.final(answer="I've booked your flight. Confirmation: CONF-12345")
|
|
717
|
+
```
|
|
718
|
+
|
|
719
|
+
### Memory Operations
|
|
720
|
+
|
|
721
|
+
```python
|
|
722
|
+
from agent_inspector import trace
|
|
723
|
+
|
|
724
|
+
with trace.run("memory_agent"):
|
|
725
|
+
# Read from memory
|
|
726
|
+
trace.memory_read(
|
|
727
|
+
memory_key="user_preferences",
|
|
728
|
+
memory_value={"preferred_airline": "Delta", "seat": "window"},
|
|
729
|
+
memory_type="key_value"
|
|
730
|
+
)
|
|
731
|
+
|
|
732
|
+
# Write to memory
|
|
733
|
+
trace.memory_write(
|
|
734
|
+
memory_key="last_search",
|
|
735
|
+
memory_value={"query": "SFO to JFK", "timestamp": 1234567890},
|
|
736
|
+
memory_type="key_value",
|
|
737
|
+
overwrite=True
|
|
738
|
+
)
|
|
739
|
+
|
|
740
|
+
trace.final(answer="I found your preferences and remembered your search.")
|
|
741
|
+
```
|
|
742
|
+
|
|
743
|
+
---
|
|
744
|
+
|
|
745
|
+
## API Documentation
|
|
746
|
+
|
|
747
|
+
Once you start the API server, visit:
|
|
748
|
+
|
|
749
|
+
- **Swagger UI**: http://localhost:8000/docs
|
|
750
|
+
- **ReDoc**: http://localhost:8000/redoc
|
|
751
|
+
|
|
752
|
+
### Main Endpoints
|
|
753
|
+
|
|
754
|
+
#### Health Check
|
|
755
|
+
```
|
|
756
|
+
GET /health
|
|
757
|
+
```
|
|
758
|
+
|
|
759
|
+
#### List Runs
|
|
760
|
+
```
|
|
761
|
+
GET /v1/runs
|
|
762
|
+
?limit=100
|
|
763
|
+
&offset=0
|
|
764
|
+
&status=completed
|
|
765
|
+
&user_id=user123
|
|
766
|
+
&search=flight
|
|
767
|
+
```
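
A standard-library sketch that queries this endpoint, adding the `started_after` date-range filter mentioned under Features (milliseconds since epoch); host and port follow the defaults above:

```python
import json
import time
import urllib.parse
import urllib.request

# Completed runs that started in the last 24 hours.
params = urllib.parse.urlencode({
    "status": "completed",
    "limit": 50,
    "started_after": int((time.time() - 86400) * 1000),
})

with urllib.request.urlopen(f"http://localhost:8000/v1/runs?{params}") as resp:
    runs = json.load(resp)

print(json.dumps(runs, indent=2))
```
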
|
|
768
|
+
|
|
769
|
+
#### Get Run Details
|
|
770
|
+
```
|
|
771
|
+
GET /v1/runs/{run_id}
|
|
772
|
+
```
|
|
773
|
+
|
|
774
|
+
#### Get Run Timeline
|
|
775
|
+
```
|
|
776
|
+
GET /v1/runs/{run_id}/timeline
|
|
777
|
+
?include_data=true
|
|
778
|
+
```
|
|
779
|
+
|
|
780
|
+
#### Get Run Steps
|
|
781
|
+
```
|
|
782
|
+
GET /v1/runs/{run_id}/steps
|
|
783
|
+
?limit=50
|
|
784
|
+
&offset=0
|
|
785
|
+
&event_type=llm_call
|
|
786
|
+
```
|
|
787
|
+
|
|
788
|
+
#### Get Step Data
|
|
789
|
+
```
|
|
790
|
+
GET /v1/runs/{run_id}/steps/{step_id}/data
|
|
791
|
+
```
|
|
792
|
+
|
|
793
|
+
#### Statistics
|
|
794
|
+
```
|
|
795
|
+
GET /v1/stats
|
|
796
|
+
```
|
|
797
|
+
|
|
798
|
+
---
|
|
799
|
+
|
|
800
|
+
## Framework Adapters
|
|
801
|
+
|
|
802
|
+
### LangChain
|
|
803
|
+
|
|
804
|
+
Install the optional dependency:
|
|
805
|
+
|
|
806
|
+
```bash
|
|
807
|
+
pip install "ai-agent-inspector[langchain]"
|
|
808
|
+
```
|
|
809
|
+
|
|
810
|
+
Automatic tracing:
|
|
811
|
+
|
|
812
|
+
```python
|
|
813
|
+
from agent_inspector.adapters import enable_langchain
|
|
814
|
+
from langchain.agents import initialize_agent, AgentType
|
|
815
|
+
|
|
816
|
+
# Create agent
|
|
817
|
+
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
|
|
818
|
+
|
|
819
|
+
# Trace automatically
|
|
820
|
+
with enable_langchain() as callbacks:
|
|
821
|
+
result = agent.run("Your query here")
|
|
822
|
+
```
|
|
823
|
+
|
|
824
|
+
Manual callback handler:
|
|
825
|
+
|
|
826
|
+
```python
|
|
827
|
+
from agent_inspector.adapters import get_callback_handler
from langchain.chains import LLMChain
|
|
828
|
+
|
|
829
|
+
# Get callback handler
|
|
830
|
+
callbacks = [get_callback_handler()]
|
|
831
|
+
|
|
832
|
+
# Use with LangChain chains
|
|
833
|
+
chain = LLMChain(llm=llm, prompt=prompt)
|
|
834
|
+
result = chain.run("Your query", callbacks=callbacks)
|
|
835
|
+
```
|
|
836
|
+
|
|
837
|
+
### Creating Custom Adapters
|
|
838
|
+
|
|
839
|
+
Create a new adapter by extending `BaseCallbackHandler` (for LangChain-like frameworks) or by using the Trace SDK directly:
|
|
840
|
+
|
|
841
|
+
```python
|
|
842
|
+
from agent_inspector import Trace, get_trace, trace
|
|
843
|
+
|
|
844
|
+
class CustomAdapter:
|
|
845
|
+
def __init__(self, trace: Trace = None):
|
|
846
|
+
self.trace = trace or get_trace()
|
|
847
|
+
|
|
848
|
+
def on_llm_call(self, model, prompt, response):
|
|
849
|
+
"""Handle LLM calls in your framework."""
|
|
850
|
+
context = self.trace.get_active_context()
|
|
851
|
+
if context:
|
|
852
|
+
context.llm(model=model, prompt=prompt, response=response)
|
|
853
|
+
|
|
854
|
+
def on_tool_call(self, tool_name, args, result):
|
|
855
|
+
"""Handle tool calls in your framework."""
|
|
856
|
+
context = self.trace.get_active_context()
|
|
857
|
+
if context:
|
|
858
|
+
context.tool(tool_name=tool_name, tool_args=args, tool_result=result)
|
|
859
|
+
|
|
860
|
+
# Use your adapter
|
|
861
|
+
with trace.run("custom_agent"):
|
|
862
|
+
adapter = CustomAdapter()
|
|
863
|
+
|
|
864
|
+
# Your framework code
|
|
865
|
+
adapter.on_llm_call("gpt-4", "Hello", "Hi there!")
|
|
866
|
+
```
|
|
867
|
+
|
|
868
|
+
---
|
|
869
|
+
|
|
870
|
+
## Development
|
|
871
|
+
|
|
872
|
+
### Setup Development Environment
|
|
873
|
+
|
|
874
|
+
```bash
|
|
875
|
+
# Clone the repository
|
|
876
|
+
git clone https://github.com/koladilip/ai-agent-inspector.git
|
|
877
|
+
cd ai-agent-inspector
|
|
878
|
+
|
|
879
|
+
# Install in development mode
|
|
880
|
+
pip install -e ".[dev]"
|
|
881
|
+
|
|
882
|
+
# Run tests
|
|
883
|
+
pytest
|
|
884
|
+
|
|
885
|
+
# Run with coverage
|
|
886
|
+
pytest --cov=agent_inspector --cov-report=html
|
|
887
|
+
```
|
|
888
|
+
|
|
889
|
+
### Project Structure
|
|
890
|
+
|
|
891
|
+
```
|
|
892
|
+
agent_inspector/
|
|
893
|
+
├── core/ # Core tracing SDK
|
|
894
|
+
│ ├── config.py # Configuration management
|
|
895
|
+
│ ├── events.py # Event model
|
|
896
|
+
│ ├── interfaces.py # Exporter and ReadStore protocols
|
|
897
|
+
│ ├── queue.py # Non-blocking queue
|
|
898
|
+
│ └── trace.py # Main Trace SDK
|
|
899
|
+
├── processing/ # Data processing pipeline
|
|
900
|
+
│ └── pipeline.py # Redaction, compression, encryption
|
|
901
|
+
├── storage/ # SQLite database
|
|
902
|
+
│ ├── database.py # Database operations
|
|
903
|
+
│ └── exporter.py # Storage exporter implementation
|
|
904
|
+
├── api/ # FastAPI REST API
|
|
905
|
+
│ └── main.py # API server
|
|
906
|
+
├── ui/ # Web interface
|
|
907
|
+
│ ├── app.py # UI router + static mounting
|
|
908
|
+
│ ├── static/ # CSS/JS assets
|
|
909
|
+
│ └── templates/ # HTML templates
|
|
910
|
+
├── adapters/ # Framework integrations
|
|
911
|
+
│ └── langchain_adapter.py
|
|
912
|
+
└── cli.py # Command-line interface
|
|
913
|
+
```
|
|
914
|
+
|
|
915
|
+
### Running Examples
|
|
916
|
+
|
|
917
|
+
```bash
|
|
918
|
+
# Basic tracing example
|
|
919
|
+
python examples/basic_tracing.py
|
|
920
|
+
|
|
921
|
+
# Real agent example (OpenAI-compatible)
|
|
922
|
+
python examples/real_agent.py "What is 13 * (7 + 5)?"
|
|
923
|
+
|
|
924
|
+
# Start API server
|
|
925
|
+
python -m agent_inspector.cli server
|
|
926
|
+
|
|
927
|
+
# View statistics
|
|
928
|
+
python -m agent_inspector.cli stats
|
|
929
|
+
|
|
930
|
+
# Prune old data
|
|
931
|
+
python -m agent_inspector.cli prune --retention-days 30 --vacuum
|
|
932
|
+
```
|
|
933
|
+
|
|
934
|
+
### Code Quality
|
|
935
|
+
|
|
936
|
+
```bash
|
|
937
|
+
# Format code
|
|
938
|
+
black agent_inspector/ examples/ tests/
|
|
939
|
+
|
|
940
|
+
# Lint code
|
|
941
|
+
flake8 agent_inspector/
|
|
942
|
+
|
|
943
|
+
# Type check
|
|
944
|
+
mypy agent_inspector/
|
|
945
|
+
```
|
|
946
|
+
|
|
947
|
+
### Releasing
|
|
948
|
+
|
|
949
|
+
Releases are automated with [Release Please](https://github.com/googleapis/release-please). Use **conventional commits** so Release Please can open and update a Release PR:
|
|
950
|
+
|
|
951
|
+
- **feat:** – new feature (bumps minor version)
|
|
952
|
+
- **fix:** – bug fix (bumps patch version)
|
|
953
|
+
- **feat!:** or **BREAKING CHANGE:** – breaking change (bumps major version)
|
|
954
|
+
|
|
955
|
+
When you merge the Release PR, a tag is created and the [publish workflow](.github/workflows/publish.yml) publishes the package to PyPI via OIDC trusted publishing.
|
|
956
|
+
|
|
957
|
+
---
|
|
958
|
+
|
|
959
|
+
## Contributing
|
|
960
|
+
|
|
961
|
+
We welcome contributions! Here's how to get started:
|
|
962
|
+
|
|
963
|
+
### Reporting Issues
|
|
964
|
+
|
|
965
|
+
1. Check existing issues on GitHub
|
|
966
|
+
2. Create a new issue with:
|
|
967
|
+
- Clear description of the bug or feature
|
|
968
|
+
- Steps to reproduce (for bugs)
|
|
969
|
+
- Expected vs actual behavior
|
|
970
|
+
- Environment details (Python version, OS, etc.)
|
|
971
|
+
|
|
972
|
+
### Submitting Pull Requests
|
|
973
|
+
|
|
974
|
+
1. Fork the repository
|
|
975
|
+
2. Create a feature branch:
|
|
976
|
+
```bash
|
|
977
|
+
git checkout -b feature/my-feature
|
|
978
|
+
```
|
|
979
|
+
3. Make your changes
|
|
980
|
+
4. Run tests:
|
|
981
|
+
```bash
|
|
982
|
+
pytest
|
|
983
|
+
```
|
|
984
|
+
5. Ensure code quality:
|
|
985
|
+
```bash
|
|
986
|
+
black agent_inspector/
|
|
987
|
+
flake8 agent_inspector/
|
|
988
|
+
```
|
|
989
|
+
6. Commit your changes
|
|
990
|
+
7. Push to your fork
|
|
991
|
+
8. Create a pull request
|
|
992
|
+
|
|
993
|
+
### Development Guidelines
|
|
994
|
+
|
|
995
|
+
- Follow PEP 8 style guide
|
|
996
|
+
- Add tests for new features
|
|
997
|
+
- Update documentation
|
|
998
|
+
- Keep changes minimal and focused
|
|
999
|
+
|
|
1000
|
+
---
|
|
1001
|
+
|
|
1002
|
+
## CLI Commands
|
|
1003
|
+
|
|
1004
|
+
```bash
|
|
1005
|
+
# Initialize Agent Inspector
|
|
1006
|
+
agent-inspector init [--profile production|development|debug]
|
|
1007
|
+
|
|
1008
|
+
# Start API server
|
|
1009
|
+
agent-inspector server [--host HOST] [--port PORT]
|
|
1010
|
+
|
|
1011
|
+
# View statistics
|
|
1012
|
+
agent-inspector stats
|
|
1013
|
+
|
|
1014
|
+
# Prune old traces
|
|
1015
|
+
agent-inspector prune [--retention-days N] [--vacuum]
|
|
1016
|
+
|
|
1017
|
+
# Vacuum database
|
|
1018
|
+
agent-inspector vacuum
|
|
1019
|
+
|
|
1020
|
+
# Create backup
|
|
1021
|
+
agent-inspector backup /path/to/backup.db
|
|
1022
|
+
|
|
1023
|
+
# View configuration
|
|
1024
|
+
agent-inspector config [--show] [--profile PROFILE]
|
|
1025
|
+
|
|
1026
|
+
# Show version
|
|
1027
|
+
agent-inspector --version
|
|
1028
|
+
```
|
|
1029
|
+
|
|
1030
|
+
---
|
|
1031
|
+
|
|
1032
|
+
## Performance
|
|
1033
|
+
|
|
1034
|
+
Agent Inspector is designed for minimal overhead:
|
|
1035
|
+
|
|
1036
|
+
| Operation | Target | Typical |
|
|
1037
|
+
|-----------|--------|---------|
|
|
1038
|
+
| Queue event | <100μs | ~50μs |
|
|
1039
|
+
| Create event | <1ms | ~200μs |
|
|
1040
|
+
| Compress data | N/A | 5-10x reduction |
|
|
1041
|
+
| API latency | <100ms | ~50ms |
|
|
1042
|
+
| UI load | <500ms | ~200ms |
|
|
1043
|
+
|
|
1044
|
+
### Memory Usage
|
|
1045
|
+
|
|
1046
|
+
- Queue: ~10KB (1000 events × 10 bytes/event)
|
|
1047
|
+
- Background thread: ~5MB (batch processing)
|
|
1048
|
+
- Database: Varies with trace volume
|
|
1049
|
+
|
|
1050
|
+
---
|
|
1051
|
+
|
|
1052
|
+
## Security
|
|
1053
|
+
|
|
1054
|
+
### Default Protections
|
|
1055
|
+
|
|
1056
|
+
- 🔒 **Redaction** - Sensitive keys redacted by default
|
|
1057
|
+
- 🗜️ **Compression** - Reduces storage footprint
|
|
1058
|
+
- 🔐 **Encryption** - Fernet encryption (optional)
|
|
1059
|
+
- 📊 **Sampling** - Reduces data collection volume
|
|
1060
|
+
- 💾 **Local-First** - No data leaves your machine
|
|
1061
|
+
|
|
1062
|
+
### Best Practices
|
|
1063
|
+
|
|
1064
|
+
1. **Never log API keys** - Use redaction or environment variables
|
|
1065
|
+
2. **Enable encryption** - For production deployments
|
|
1066
|
+
3. **Use sampling** - Reduce overhead in high-traffic scenarios
|
|
1067
|
+
4. **Review traces** - Regularly audit what's being captured
|
|
1068
|
+
5. **Prune old data** - Set appropriate retention policies
|
|
1069
|
+
|
|
1070
|
+
---
|
|
1071
|
+
|
|
1072
|
+
## License
|
|
1073
|
+
|
|
1074
|
+
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
|
|
1075
|
+
|
|
1076
|
+
---
|
|
1077
|
+
|
|
1078
|
+
## Support
|
|
1079
|
+
|
|
1080
|
+
- 📖 [Documentation](https://github.com/koladilip/ai-agent-inspector#readme)
|
|
1081
|
+
- 🐛 [Issue Tracker](https://github.com/koladilip/ai-agent-inspector/issues)
|
|
1082
|
+
---
|
|
1083
|
+
|
|
1084
|
+
## Acknowledgments
|
|
1085
|
+
|
|
1086
|
+
|
|
1087
|
+
---
|
|
1088
|
+
|
|
1089
|
+
<div align="center">
|
|
1090
|
+
|
|
1091
|
+
**Made with ❤️ by Dilip Kola**
|
|
1092
|
+
|
|
1093
|
+
[⭐ Star on GitHub](https://github.com/koladilip/ai-agent-inspector)
|
|
1094
|
+
</div>
|