agentic-ledger 0.1.0 (tar.gz)

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,36 @@
.DS_Store
.claude/
CLAUDE.md

# Python bytecode and caches
__pycache__/
*.py[cod]
*$py.class

# Virtual environments
.venv/
venv/
env/
ENV/

# Packaging/build artifacts
build/
dist/
*.egg-info/
.eggs/
pip-wheel-metadata/

# Test and type-checker caches
.pytest_cache/
.mypy_cache/
.ruff_cache/
.pyre/
.hypothesis/

# Coverage reports
.coverage
.coverage.*
htmlcov/

# Jupyter
.ipynb_checkpoints/
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 ShekharBhardwaj

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -0,0 +1,523 @@
Metadata-Version: 2.4
Name: agentic-ledger
Version: 0.1.0
Summary: Runtime observability for AI agents — see exactly what your agent did, why it did it, and what it cost.
Project-URL: Homepage, https://github.com/ShekharBhardwaj/AgentLedger
Project-URL: Repository, https://github.com/ShekharBhardwaj/AgentLedger
Project-URL: Issues, https://github.com/ShekharBhardwaj/AgentLedger/issues
Author: Shekhar Bhardwaj
License-Expression: MIT
License-File: LICENSE
Keywords: agents,ai,anthropic,audit,crewai,debugging,langchain,llm,observability,openai,tracing
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Debuggers
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.10
Requires-Dist: aiosqlite>=0.19
Requires-Dist: fastapi>=0.110
Requires-Dist: httpx>=0.27
Requires-Dist: uvicorn[standard]>=0.29
Provides-Extra: dev
Requires-Dist: aiosqlite>=0.19; extra == 'dev'
Requires-Dist: anthropic>=0.18; extra == 'dev'
Requires-Dist: hatch-vcs; extra == 'dev'
Requires-Dist: openai>=1.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23; extra == 'dev'
Requires-Dist: pytest>=7.0; extra == 'dev'
Provides-Extra: otel
Requires-Dist: opentelemetry-exporter-otlp-proto-http>=1.24; extra == 'otel'
Requires-Dist: opentelemetry-sdk>=1.24; extra == 'otel'
Provides-Extra: postgres
Requires-Dist: asyncpg>=0.29; extra == 'postgres'
Description-Content-Type: text/markdown

# AgentLedger

[![CI](https://github.com/ShekharBhardwaj/AgentLedger/actions/workflows/ci.yml/badge.svg)](https://github.com/ShekharBhardwaj/AgentLedger/actions/workflows/ci.yml)
[![PyPI](https://img.shields.io/pypi/v/agentic-ledger)](https://pypi.org/project/agentic-ledger/)
[![Docker](https://img.shields.io/badge/docker-ghcr.io-blue)](https://ghcr.io/shekharbhardwaj/agentledger)
[![License: MIT](https://img.shields.io/badge/license-MIT-green)](LICENSE)

Runtime observability for AI agents — see exactly what your agent did, why it did it, and what it cost.

Works with **any agent framework**, **any LLM provider**, **any model gateway**. Zero code changes required. Point your agent at the proxy and everything is captured automatically.

---

## How it works

AgentLedger runs as a transparent proxy between your agent and the LLM provider. It intercepts every request and response, assigns it an `action_id`, stores it, and returns the upstream response unmodified. Your agent never knows the proxy is there.

```
Your Agent → AgentLedger Proxy → OpenAI / Anthropic / LiteLLM / any LLM
                    │
                    ▼
            SQLite or Postgres
                    │
                    ▼
           Live Dashboard + API
```
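
The loop described above can be sketched in a few lines. This is illustrative only, not AgentLedger's actual implementation: `record_and_forward`, `send`, and the in-memory `LEDGER` are hypothetical stand-ins for the proxy handler and its database.

```python
import uuid

LEDGER = []  # in-memory stand-in for the SQLite/Postgres store

def record_and_forward(send, body: dict, headers: dict) -> dict:
    """Forward the request via `send` (the upstream LLM call), store both
    sides under a fresh action_id, and return the response unmodified."""
    action_id = str(uuid.uuid4())
    response = send(body)  # upstream call happens exactly as the agent made it
    LEDGER.append({
        "action_id": action_id,
        "session_id": headers.get("x-agentledger-session-id"),
        "request": body,
        "response": response,
    })
    return response  # the agent sees exactly what upstream returned
```

Because the response is returned byte-for-byte, the agent needs no SDK, callback, or instrumentation — only a different `base_url`.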

---

## Quick Start

**Step 1 — Start the proxy**

With Docker (recommended, no Python required):
```bash
docker run -p 8000:8000 \
  -e AGENTLEDGER_UPSTREAM_URL=https://api.openai.com \
  -v $(pwd)/data:/data \
  ghcr.io/shekharbhardwaj/agentledger:latest
```

Or with docker compose (SQLite by default, Postgres available — see `docker-compose.yml`):
```bash
AGENTLEDGER_UPSTREAM_URL=https://api.openai.com docker compose up
```

With `uv`:
```bash
uv add agentic-ledger
AGENTLEDGER_UPSTREAM_URL=https://api.openai.com uv run python -m agentledger.proxy
```

With `pip`:
```bash
python -m venv venv && source venv/bin/activate
pip install agentic-ledger
AGENTLEDGER_UPSTREAM_URL=https://api.openai.com python -m agentledger.proxy
```

> **Postgres?** Install the extra and set `AGENTLEDGER_DSN`:
> ```bash
> pip install "agentic-ledger[postgres]"
> export AGENTLEDGER_DSN=postgresql://user:password@localhost/agentledger
> ```

> **OpenTelemetry?** Install the extra and set `AGENTLEDGER_OTEL_ENDPOINT`:
> ```bash
> pip install "agentic-ledger[otel]"
> export AGENTLEDGER_OTEL_ENDPOINT=http://localhost:4318
> ```

The proxy starts on `http://localhost:8000`. Traces are saved to `agentledger.db` in the current directory (or `/data/agentledger.db` in Docker).

---

**Step 2 — Point your agent at the proxy**

Two changes: set `base_url` to the proxy and add a session ID header to group calls into a run. Everything else — your API key, model, messages — stays exactly the same.

**OpenAI:**
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # ← proxy
    api_key="your-openai-key",
    default_headers={"x-agentledger-session-id": "run-1"},
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Research the top 3 AI trends in 2026"}],
)
```

**Anthropic:**
```python
import anthropic

client = anthropic.Anthropic(
    base_url="http://localhost:8000",  # ← proxy
    api_key="your-anthropic-key",
    default_headers={"x-agentledger-session-id": "run-1"},
)
```

**LiteLLM / OpenRouter / any gateway:**
```bash
# Point AgentLedger at your gateway
AGENTLEDGER_UPSTREAM_URL=http://localhost:4000 uv run python -m agentledger.proxy
```
```python
# Then point your agent at AgentLedger
client = OpenAI(base_url="http://localhost:8000/v1", ...)
```

---

**Step 3 — Open the dashboard**

```
http://localhost:8000
```

The dashboard updates live via WebSocket as calls come in. No refresh needed.

- **Calls tab** — every LLM call with full prompt, system prompt, tool calls, tool results, output, tokens, cost, latency, and errors
- **Flow tab** — visual DAG of your multi-agent system. Each agent is a node with aggregated cost, latency, and call count. Edges represent handoffs. Click a node to highlight its calls.
- **Search** — full-text search across all sessions by prompt, output, agent name, or user ID

---

## What gets captured

Every LLM call is stored with:

| Field | What it contains |
|---|---|
| `action_id` | UUID assigned at interception time |
| `session_id` | Run grouping (from header) |
| `timestamp` | When the call was made |
| `model_id` | Model used |
| `provider` | `openai` or `anthropic` |
| `messages` | Full message history sent to the model |
| `system_prompt` | Extracted system prompt |
| `tools` | Tool definitions available to the model |
| `tool_calls` | Tools the model decided to call |
| `tool_results` | What the tools returned (from the next call's messages) |
| `content` | Model's text output |
| `stop_reason` | Why the model stopped |
| `tokens_in` / `tokens_out` | Token usage |
| `cost_usd` | Estimated cost based on model pricing |
| `latency_ms` | End-to-end response time |
| `status_code` | HTTP status from upstream — errors are captured too |
| `error_detail` | Upstream error message for non-200 responses |
| `agent_name` | From `x-agentledger-agent-name` header |
| `user_id` | From `x-agentledger-user-id` header |
| `app_id` | From `x-agentledger-app-id` header |
| `environment` | From `x-agentledger-environment` header |
| `parent_action_id` | Parent call in a nested agent graph |
| `handoff_from` / `handoff_to` | Agent handoff tracking for the Flow DAG |
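
Captured records can be pulled back over the HTTP API. A standard-library sketch: `/explain/{action_id}` is the documented endpoint, while the `summarize` helper and the exact JSON key set are assumptions based on the field table above.

```python
import json
import urllib.request

def fetch_action(action_id: str, base: str = "http://localhost:8000") -> dict:
    """GET /explain/{action_id} and decode the stored record."""
    with urllib.request.urlopen(f"{base}/explain/{action_id}") as resp:
        return json.load(resp)

def summarize(record: dict) -> str:
    """One-line summary built from the fields listed above (hypothetical helper)."""
    return (f"{record.get('agent_name') or '?'} | {record['model_id']} | "
            f"{record['tokens_in']}->{record['tokens_out']} tok | "
            f"${record['cost_usd']:.4f} | {record['latency_ms']} ms")
```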

---

## API reference

| Method | Endpoint | Description |
|---|---|---|
| `GET` | `/` | Live dashboard |
| `WS` | `/ws` | WebSocket stream — powers live dashboard updates |
| `GET` | `/api/sessions` | List recent sessions with aggregated stats |
| `GET` | `/api/search?q=...` | Full-text search across all captured calls |
| `GET` | `/session/{session_id}` | All calls in a session, ordered by time |
| `GET` | `/explain/{action_id}` | Single call by action ID |
| `GET` | `/export/{session_id}` | JSON compliance export with SHA-256 integrity hash |
| `GET` | `/export/{session_id}/report` | Printable HTML audit report |
| `POST` | `/mcp` | MCP tool server — exposes captured data to other agents |

**Examples:**
```bash
# All calls in a session
curl http://localhost:8000/session/run-1

# Search across all sessions
curl "http://localhost:8000/api/search?q=failed+to+connect"

# Download JSON audit trail (includes SHA-256 hash for tamper detection)
curl http://localhost:8000/export/run-1 -o audit-run-1.json

# Printable HTML report — open in browser, print to PDF
open http://localhost:8000/export/run-1/report
```

---

## Configuration

### Environment variables

**Core:**

| Variable | Required | Default | Description |
|---|---|---|---|
| `AGENTLEDGER_UPSTREAM_URL` | **Yes** | `https://api.openai.com` | LLM endpoint to forward requests to. Accepts OpenAI, Anthropic, LiteLLM, OpenRouter, or any OpenAI-compatible URL. |
| `AGENTLEDGER_DSN` | No | `sqlite:///agentledger.db` | Database. SQLite for local dev, Postgres URL for production. |
| `AGENTLEDGER_HOST` | No | `0.0.0.0` | Host to bind to. Use `127.0.0.1` to restrict to localhost only. |
| `AGENTLEDGER_PORT` | No | `8000` | Port to run on. |
| `AGENTLEDGER_API_KEY` | No | _(none)_ | Protects the dashboard and all read endpoints. Skip for local dev; set it to a value of your choosing when the proxy runs on a server. |

**Cost budgets** — block calls that exceed a spend limit (returns HTTP 429):

| Variable | Default | Description |
|---|---|---|
| `AGENTLEDGER_BUDGET_SESSION` | _(none)_ | Max USD per `session_id` across its lifetime. |
| `AGENTLEDGER_BUDGET_AGENT` | _(none)_ | Max USD per `agent_name` per calendar day (UTC). |
| `AGENTLEDGER_BUDGET_DAILY` | _(none)_ | Max USD total across all calls per calendar day (UTC). |

**Rate limits** — block calls that exceed request frequency (returns HTTP 429, sliding 60-second window):

| Variable | Default | Description |
|---|---|---|
| `AGENTLEDGER_RATE_LIMIT_RPM` | _(none)_ | Max requests per minute globally. |
| `AGENTLEDGER_RATE_LIMIT_SESSION_RPM` | _(none)_ | Max requests per minute per `session_id`. |
| `AGENTLEDGER_RATE_LIMIT_AGENT_RPM` | _(none)_ | Max requests per minute per `agent_name`. |
| `AGENTLEDGER_RATE_LIMIT_USER_RPM` | _(none)_ | Max requests per minute per `user_id`. |
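
A tripped budget or rate limit surfaces to the agent as a plain HTTP 429, so ordinary retry logic applies; with the OpenAI SDK, for instance, a 429 is raised as `openai.RateLimitError`. A generic backoff helper might look like the sketch below (`with_backoff` is hypothetical, not part of AgentLedger). Note that retrying only helps for rate limits — a breached session or daily budget keeps returning 429 until the window resets or the limit is raised.

```python
import time

def with_backoff(call, retries: int = 3, base_delay: float = 1.0,
                 retry_on: tuple = (Exception,)):
    """Run `call`, retrying with exponential backoff on `retry_on` exceptions."""
    for attempt in range(retries):
        try:
            return call()
        except retry_on:
            if attempt == retries - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))

# e.g. with_backoff(lambda: client.chat.completions.create(...),
#                   retry_on=(openai.RateLimitError,))
```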

**Alerts** — POST to your webhook when a threshold is breached (does not block calls — see [Alerts](#alerts)):

| Variable | Default | Description |
|---|---|---|
| `AGENTLEDGER_ALERT_WEBHOOK_URL` | _(none)_ | URL to POST alert payloads to. Required for any alerts to fire. |
| `AGENTLEDGER_ALERT_COST_PER_CALL` | _(none)_ | Alert when a single call costs more than `$X`. |
| `AGENTLEDGER_ALERT_LATENCY_MS` | _(none)_ | Alert when a single call takes longer than `X` ms. |
| `AGENTLEDGER_ALERT_ERROR_RATE` | _(none)_ | Alert when the session error rate exceeds `X` (e.g. `0.5` = 50%). |
| `AGENTLEDGER_ALERT_DAILY_SPEND` | _(none)_ | Alert when daily spend crosses `$X`. Unlike budgets, this does not block calls. |

---

### Common startup examples

```bash
# Local dev — OpenAI (default)
AGENTLEDGER_UPSTREAM_URL=https://api.openai.com uv run python -m agentledger.proxy

# Local dev — Anthropic
AGENTLEDGER_UPSTREAM_URL=https://api.anthropic.com uv run python -m agentledger.proxy

# Local dev — LiteLLM gateway (any model)
AGENTLEDGER_UPSTREAM_URL=http://localhost:4000 uv run python -m agentledger.proxy

# Production — Postgres + auth + budgets + rate limits + alerts
AGENTLEDGER_UPSTREAM_URL=https://api.openai.com \
AGENTLEDGER_DSN=postgresql://user:password@localhost/agentledger \
AGENTLEDGER_API_KEY=my-secret \
AGENTLEDGER_BUDGET_DAILY=20.00 \
AGENTLEDGER_BUDGET_SESSION=2.00 \
AGENTLEDGER_RATE_LIMIT_SESSION_RPM=20 \
AGENTLEDGER_RATE_LIMIT_USER_RPM=60 \
AGENTLEDGER_ALERT_WEBHOOK_URL=https://hooks.slack.com/services/xxx/yyy/zzz \
AGENTLEDGER_ALERT_COST_PER_CALL=0.50 \
AGENTLEDGER_ALERT_DAILY_SPEND=15.00 \
uv run python -m agentledger.proxy
```

When `AGENTLEDGER_API_KEY` is set, pass it to access protected endpoints:
```bash
# Header
curl -H "x-agentledger-api-key: my-secret" http://localhost:8000/session/run-1

# Query param (browser)
http://localhost:8000?api_key=my-secret
```

---

### Request headers

Pass these from your agent on each LLM call. All are optional. They enrich captured data, power the Flow tab, and enable per-dimension budgets and rate limits.

| Header | Default | Description |
|---|---|---|
| `x-agentledger-session-id` | _(none)_ | Groups all calls in a run. Use a consistent ID per agent execution (e.g. a UUID or `"run-1"`). Without this, calls are stored but not grouped in the dashboard. |
| `x-agentledger-user-id` | _(none)_ | End user who triggered this run. Enables per-user rate limiting and auditing. |
| `x-agentledger-agent-name` | _(none)_ | Name of the agent making this call (e.g. `"orchestrator"`, `"researcher"`). Powers the Flow tab DAG and agent-level budgets and rate limits. |
| `x-agentledger-app-id` | _(none)_ | Application name or ID. Useful when multiple apps share one proxy. |
| `x-agentledger-parent-action-id` | _(none)_ | The `action_id` of the call that spawned this one. Builds the nested agent call graph. |
| `x-agentledger-environment` | `development` | `production`, `staging`, or `development`. Shown in the dashboard. |
| `x-agentledger-handoff-from` | _(none)_ | Agent handing off control (e.g. `"orchestrator"`). Renders as a directed edge in the Flow DAG. |
| `x-agentledger-handoff-to` | _(none)_ | Agent receiving control (e.g. `"researcher"`). Renders as a directed edge in the Flow DAG. |

**Single agent — fully annotated:**
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="your-openai-key",
    default_headers={
        "x-agentledger-session-id": "run-abc123",
        "x-agentledger-user-id": "user-42",
        "x-agentledger-agent-name": "researcher",
        "x-agentledger-app-id": "my-app",
        "x-agentledger-environment": "production",
    },
)
```

**Multi-agent system — tracking handoffs:**
```python
from openai import OpenAI

# Orchestrator
orchestrator_client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="your-openai-key",
    default_headers={
        "x-agentledger-session-id": "run-abc123",
        "x-agentledger-agent-name": "orchestrator",
    },
)

# Researcher (receives handoff from orchestrator)
researcher_client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="your-openai-key",
    default_headers={
        "x-agentledger-session-id": "run-abc123",
        "x-agentledger-agent-name": "researcher",
        "x-agentledger-handoff-from": "orchestrator",
        "x-agentledger-handoff-to": "researcher",
    },
)
```

The Flow tab renders `orchestrator → researcher` as a DAG with cost and latency on each node.

---

## Alerts

AgentLedger fires a `POST` to your webhook URL when a threshold is breached. You connect it to whatever you already use — Slack, PagerDuty, Discord, email, or a custom endpoint. AgentLedger sends the payload; the integration is on your side.

**Payload format:**
```json
{
  "type": "high_cost",
  "message": "Single call cost $0.1842 exceeded threshold $0.10",
  "value": 0.1842,
  "threshold": 0.10,
  "action_id": "a1b2c3d4-...",
  "session_id": "run-1",
  "agent_name": "researcher",
  "timestamp": "2026-04-03T12:00:00+00:00"
}
```

**Alert types:**

| Type | Triggered when |
|---|---|
| `high_cost` | A single call exceeds `AGENTLEDGER_ALERT_COST_PER_CALL` |
| `high_latency` | A single call takes longer than `AGENTLEDGER_ALERT_LATENCY_MS` |
| `high_error_rate` | The session error rate exceeds `AGENTLEDGER_ALERT_ERROR_RATE` |
| `daily_spend` | Daily total spend crosses `AGENTLEDGER_ALERT_DAILY_SPEND` |

**Budgets vs alerts:**
- **Budgets** (`AGENTLEDGER_BUDGET_*`) — block the call before it reaches the LLM. The agent gets HTTP 429.
- **Alerts** (`AGENTLEDGER_ALERT_*`) — the call goes through; you get notified after.

**Slack** — create an [Incoming Webhook](https://api.slack.com/messaging/webhooks) and point `AGENTLEDGER_ALERT_WEBHOOK_URL` at it.

**PagerDuty** — use the [Events API v2](https://developer.pagerduty.com/docs/events-api-v2/) URL or a thin adapter that maps `type` → PagerDuty severity.

**Discord** — use a Discord channel webhook URL directly.

**Custom** — any HTTP endpoint that accepts a JSON `POST`.
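
For a custom endpoint, the adapter can be little more than a formatter over the payload above. The helper below is hypothetical; the keys are the documented payload fields.

```python
def format_alert(payload: dict) -> str:
    """Render an AgentLedger alert payload as a one-line notification."""
    source = payload.get("agent_name") or payload.get("session_id") or "unknown"
    return (f"[{payload['type']}] {payload['message']} "
            f"(value={payload['value']}, threshold={payload['threshold']}, "
            f"source={source})")
```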

---

## OpenTelemetry export

AgentLedger can emit every intercepted LLM call as an OTel span to any OTLP-compatible collector: Grafana Tempo, Jaeger, Honeycomb, Datadog, Dynatrace, or any vendor that supports OTLP/HTTP.

**Install the extra:**
```bash
pip install "agentic-ledger[otel]"
# or
uv add "agentic-ledger[otel]"
```

**Configure:**

| Variable | Default | Description |
|---|---|---|
| `AGENTLEDGER_OTEL_ENDPOINT` | _(none)_ | OTLP/HTTP base URL, e.g. `http://localhost:4318`. OTel export is disabled when not set. |
| `AGENTLEDGER_OTEL_SERVICE_NAME` | `agentledger` | Value of `service.name` in the emitted resource. |
| `AGENTLEDGER_OTEL_HEADERS` | _(none)_ | Comma-separated `key=value` pairs for auth headers, e.g. `x-honeycomb-team=abc123,x-honeycomb-dataset=llm`. |

**Example — Grafana Tempo:**
```bash
AGENTLEDGER_UPSTREAM_URL=https://api.openai.com \
AGENTLEDGER_OTEL_ENDPOINT=http://localhost:4318 \
AGENTLEDGER_OTEL_SERVICE_NAME=my-agent \
uv run python -m agentledger.proxy
```

**Example — Honeycomb:**
```bash
AGENTLEDGER_OTEL_ENDPOINT=https://api.honeycomb.io \
AGENTLEDGER_OTEL_HEADERS=x-honeycomb-team=YOUR_API_KEY,x-honeycomb-dataset=llm-traces \
uv run python -m agentledger.proxy
```

**Span attributes emitted (GenAI semantic conventions):**

| Attribute | Source |
|---|---|
| `gen_ai.system` | Provider (`openai` / `anthropic`) |
| `gen_ai.operation.name` | Always `chat` |
| `gen_ai.request.model` | Model ID |
| `gen_ai.request.temperature` | If set |
| `gen_ai.request.max_tokens` | If set |
| `gen_ai.usage.input_tokens` | Tokens in |
| `gen_ai.usage.output_tokens` | Tokens out |
| `gen_ai.response.finish_reasons` | Stop reason |
| `agentledger.action_id` | Unique call ID |
| `agentledger.session_id` | Run grouping |
| `agentledger.agent_name` | From header |
| `agentledger.user_id` | From header |
| `agentledger.cost_usd` | Estimated cost |
| `agentledger.latency_ms` | End-to-end latency |
| `agentledger.environment` | From header |
| `agentledger.handoff_from` / `agentledger.handoff_to` | Agent handoffs |
| `http.status_code` | HTTP status from upstream |

Spans are grouped into traces by `session_id` — all calls in a session appear as one trace in your backend. Parent-child relationships follow `x-agentledger-parent-action-id`. Error spans (`status_code != 200`) are marked with `StatusCode.ERROR`.

---

## Compliance export

Every session can be exported as a tamper-evident audit trail — useful for regulated industries, internal audits, or passing traces to external tools.

```bash
# Machine-readable JSON with SHA-256 integrity hash
curl http://localhost:8000/export/run-1 -o audit-run-1.json

# Printable HTML — open in browser and print to PDF
open http://localhost:8000/export/run-1/report
```

The JSON export includes a `sha256` hash of the calls array, so recipients can verify the export has not been modified after generation.
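
Verification is then a few lines of standard-library code. One caveat: the canonicalization below (key-sorted, compact separators) is an assumption — match whatever serialization the export actually hashes.

```python
import hashlib
import json

def verify_export(export: dict) -> bool:
    """Recompute the SHA-256 of the calls array and compare to the stored hash.
    The compact, key-sorted JSON canonicalization here is an assumption."""
    canonical = json.dumps(export["calls"], sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest() == export["sha256"]
```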
489
+
490
+ ---
491
+
492
+ ## Releasing
493
+
494
+ Tagging a version triggers the full release pipeline automatically:
495
+
496
+ ```bash
497
+ git tag v0.2.0
498
+ git push origin v0.2.0
499
+ ```
500
+
501
+ This runs three jobs:
502
+ 1. **Docker** — builds and pushes `ghcr.io/shekharBhardwaj/agentledger:0.2.0` and `:latest` to GHCR
503
+ 2. **PyPI** — builds and publishes `agentic-ledger==0.2.0` to PyPI using trusted publishing (no API token needed)
504
+ 3. **GitHub Release** — creates a release with auto-generated changelog from commit messages
505
+
506
+ **First-time PyPI setup** (one time only):
507
+ 1. Go to [pypi.org/manage/account/publishing](https://pypi.org/manage/account/publishing/)
508
+ 2. Add a new pending publisher:
509
+ ```
510
+ PyPI project name: agentic-ledger
511
+ Owner: ShekharBhardwaj
512
+ Repository: AgentLedger
513
+ Workflow name: release.yml
514
+ Environment name: pypi
515
+ ```
516
+ 3. Create a `pypi` environment in GitHub: repo → Settings → Environments → New environment → name it `pypi`
517
+ 4. That's it — no secrets needed
518
+
519
+ ---
520
+
521
+ ## License
522
+
523
+ MIT