@amplitude/ai 0.1.0 → 0.1.1

package/AGENTS.md CHANGED
@@ -2,7 +2,7 @@
 
  # AGENTS.md
 
- Package: `@amplitude/ai` v0.1.0
+ Package: `@amplitude/ai` v0.1.1
 
  ## Install
 
package/README.md CHANGED
@@ -39,17 +39,26 @@ One call auto-detects and patches every installed provider (OpenAI, Anthropic, A
 
  - [Installation](#installation)
  - [Quick Start](#quick-start)
+ - [Current Limitations](#current-limitations)
+ - [Is this for me?](#is-this-for-me)
+ - [Why this SDK?](#why-this-sdk)
+ - [What you can build](#what-you-can-build)
  - [Choose Your Integration Tier](#choose-your-integration-tier)
  - [Support matrix](#support-matrix)
  - [Parity and runtime limitations](#parity-and-runtime-limitations)
  - [Core Concepts](#core-concepts)
+ - [User Identity](#user-identity)
+ - [Session](#session)
  - [Configuration](#configuration)
+ - [Context Dict Conventions](#context-dict-conventions)
  - [Privacy & Content Control](#privacy--content-control)
  - [Cache-Aware Cost Tracking](#cache-aware-cost-tracking)
+ - [Semantic Cache Tracking](#semantic-cache-tracking)
  - [Model Tier Classification](#model-tier-classification)
  - [Provider Wrappers](#provider-wrappers)
  - [Streaming Tracking](#streaming-tracking)
  - [Attachment Tracking](#attachment-tracking)
+ - [Implicit Feedback](#implicit-feedback)
  - [tool() and observe() HOFs](#tool-and-observe-hofs)
  - [Scoring Patterns](#scoring-patterns)
  - [Enrichments](#enrichments)
@@ -57,6 +66,8 @@ One call auto-detects and patches every installed provider (OpenAI, Anthropic, A
  - [Patching (Zero-Code Instrumentation)](#patching-zero-code-instrumentation)
  - [Auto-Instrumentation CLI](#auto-instrumentation-cli)
  - [Integrations](#integrations)
+ - [Data Flow](#data-flow)
+ - [Which Integration Should I Use?](#which-integration-should-i-use)
  - [Integration Patterns](#integration-patterns)
  - [Serverless Environments](#serverless-environments)
  - [Error Handling and Reliability](#error-handling-and-reliability)
@@ -152,6 +163,49 @@ The zero-code / CLI setup gives you cost, latency, token counts, and error track
 
  Adding `userId` is one option per call. Adding session context is `session.run()`. See [Session](#session) and [Choose Your Integration Tier](#choose-your-integration-tier).
 
+ ### Current Limitations
+
+ | Area | Status |
+ | ---- | ------ |
+ | Runtime | Node.js only (no browser). Python SDK available separately ([amplitude-ai on PyPI](https://pypi.org/project/amplitude-ai/)). |
+ | Zero-code patching | OpenAI, Anthropic, Azure OpenAI, Gemini, Mistral, Bedrock (Converse/ConverseStream only). |
+ | CrewAI | Python-only; the Node.js export throws `ProviderError` by design. Use LangChain or OpenTelemetry integrations instead. |
+ | OTEL scope filtering | Not yet supported (Python SDK has `allowed_scopes`/`blocked_scopes`). |
+ | Streaming cost tracking | Automatic for OpenAI and Anthropic. Manual token counts are required for other providers' streamed responses. |
+
+ ### Is this for me?
+
+ **Yes, if** you're building an AI-powered feature (chatbot, copilot, agent, RAG pipeline) and you want to measure how it impacts real user behavior. AI events land in the same Amplitude project as your product events, so you can build funnels from "user asks a question" to "user converts," create cohorts of users with low AI quality scores, and measure retention without stitching data across tools.
+
+ **Already using an LLM observability tool?** Keep it. The [OTEL bridge](#opentelemetry) adds Amplitude as a second destination in one line. Your existing traces stay, and you get product analytics on top.
+
+ ### Why this SDK?
+
+ Most AI observability tools give you traces. This SDK gives you **per-turn events that live in your product analytics**, so you can:
+
+ - Build funnels from "user opens chat" through "AI responds" to "user converts"
+ - Create cohorts of users with low AI quality scores and measure their 7-day retention
+ - Answer "is this AI feature helping or hurting?" without moving data between tools
+
+ The structural difference is the event model. Trace-centric tools typically produce spans per LLM call. This SDK produces **one event per conversation turn** with 40+ properties: model, tokens, cost, latency, reasoning, implicit feedback signals (regeneration, copy, abandonment), cache breakdowns, agent hierarchy, and experiment context. Each event is independently queryable in Amplitude's charts, cohorts, funnels, and retention analysis.
+
+ **Every AI event carries your product `user_id`.** No separate identity system, no data joining required. Build a funnel from "user opens chat" to "AI responds" to "user upgrades" directly in Amplitude.
+
+ **Server-side enrichment does the evals for you.** When content is available (`contentMode: 'full'`), Amplitude's enrichment pipeline runs automatically on every session after it closes. You get topic classifications, quality rubrics, behavioral flags, and session outcomes without writing or maintaining any eval code.
+
+ **Three content-control tiers.** `full` sends content and Amplitude runs enrichments for you. `metadata_only` sends zero content (you still get cost, latency, tokens, session grouping). `customer_enriched` sends zero content but lets you provide your own structured labels via `trackSessionEnrichment()`.
+
+ **Cache-aware cost tracking.** Pass `cacheReadTokens` and `cacheCreationTokens` for accurate blended costs. Without this breakdown, naive cost calculation can overestimate by 2-5x for cache-heavy workloads.
+
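The overestimation claim is easy to see with concrete numbers. A minimal sketch, assuming hypothetical per-million-token prices (real rates vary by model and provider; this is not the SDK's pricing table):

```typescript
// Hypothetical prices per 1M tokens -- illustrative only.
const INPUT_PRICE = 3.0;        // regular input tokens
const CACHE_READ_PRICE = 0.3;   // cache-read tokens (often ~10% of the input price)
const CACHE_WRITE_PRICE = 3.75; // cache-creation tokens

// Cache-aware blended cost: price each token class at its own rate.
function blendedInputCost(
  inputTokens: number,
  cacheReadTokens: number,
  cacheCreationTokens: number,
): number {
  const uncached = inputTokens - cacheReadTokens - cacheCreationTokens;
  return (
    uncached * INPUT_PRICE +
    cacheReadTokens * CACHE_READ_PRICE +
    cacheCreationTokens * CACHE_WRITE_PRICE
  ) / 1_000_000;
}

// A cache-heavy call: 100k input tokens, 90k of them served from cache.
const naive = (100_000 * INPUT_PRICE) / 1_000_000; // prices every token at the full rate
const blended = blendedInputCost(100_000, 90_000, 0);
console.log(naive, blended); // naive is roughly 5x the blended cost here
```

With 90% of the prompt served from cache, the naive figure overstates cost by about 5x, which is why the token breakdown matters for cache-heavy workloads.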
+ ### What you can build
+
+ Once AI events are in Amplitude alongside your product events:
+
+ - **Cohorts.** "Users who had 3+ task failures in the last 30 days." "Users with low task completion scores." Target them with Guides, measure churn impact.
+ - **Funnels.** "AI session about charts -> Chart Created." "Sign Up -> First AI Session -> Conversion." Measure whether AI drives feature adoption and onboarding.
+ - **Retention.** Do users with successful AI sessions retain better than those with failures? Segment retention curves by `[Agent] Overall Outcome` or task completion score.
+ - **Agent analytics.** Compare quality, cost, and failure rate across agents in one chart. Identify which agent in a multi-agent chain introduced a failure.
+
 
  ## Choose Your Integration Tier
 
  | Tier | Code Changes | What You Get |
@@ -328,6 +382,42 @@ const tenant = ai.tenant('org-456', { env: 'production' });
  const agent = tenant.agent('support-bot', { userId: 'user-123' });
  ```
 
+ ### User Identity
+
+ User identity flows in through **sessions**, **per-call options**, or **middleware** -- not at agent creation or patch time. This keeps the agent reusable across users.
+
+ **Via sessions** (recommended): pass `userId` when opening a session:
+
+ ```typescript
+ const agent = ai.agent('support-bot', { env: 'production' });
+ const session = agent.session({ userId: 'user-42' });
+
+ await session.run(async (s) => {
+   s.trackUserMessage('Hello');
+   // userId inherited from session context
+ });
+ ```
+
+ **Per-call**: pass `userId` on each tracking call (useful with the zero-code tier):
+
+ ```typescript
+ agent.trackUserMessage('Hello', {
+   userId: 'user-42',
+   sessionId: 'sess-1',
+ });
+ ```
+
+ **Via middleware**: `createAmplitudeAIMiddleware` extracts user identity from the request (see [Middleware](#middleware)):
+
+ ```typescript
+ app.use(
+   createAmplitudeAIMiddleware({
+     amplitudeAI: ai,
+     userIdResolver: (req) => req.headers['x-user-id'] ?? null,
+   }),
+ );
+ ```
+
@@ -345,12 +435,37 @@ Start a new trace within an ongoing session to group related operations:
  ```typescript
  await session.run(async (s) => {
    const traceId = s.newTrace();
-   // All subsequent tracking calls inherit this traceId
    s.trackUserMessage('Follow-up question');
    s.trackAiMessage(response.content, 'gpt-4o', 'openai', latencyMs);
  });
  ```
 
+ For sessions where gaps between messages may exceed 30 minutes (e.g., coding assistants, support agents waiting on customer replies), pass `idleTimeoutMinutes` so Amplitude knows the session is still active:
+
+ ```typescript
+ const session = agent.session({
+   userId: 'user-123',
+   idleTimeoutMinutes: 240, // expect up to 4-hour gaps
+ });
+ ```
+
+ Without this, sessions with long idle periods may be closed and evaluated prematurely. The default is 30 minutes.
+
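As an illustration of the timeout semantics (not the SDK's internal implementation), the decision is a gap-versus-threshold comparison:

```typescript
// Illustrative only: whether an idle gap would close the session.
const gapClosesSession = (
  gapMinutes: number,
  idleTimeoutMinutes = 30, // the documented default
): boolean => gapMinutes > idleTimeoutMinutes;

const closedAtDefault = gapClosesSession(45);    // 45-minute gap exceeds the 30-minute default
const keptOpenAt240 = gapClosesSession(45, 240); // a 4-hour timeout keeps the session open
```

So a support agent waiting 45 minutes on a customer reply would be closed out under the default, but stays open with `idleTimeoutMinutes: 240`.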
+ **Link to Session Replay**: If your frontend uses Amplitude's [Session Replay](https://www.docs.developers.amplitude.com/session-replay/), pass the browser's `deviceId` and `browserSessionId` to link AI sessions to browser recordings:
+
+ ```typescript
+ const session = agent.session({
+   userId: 'user-123',
+   deviceId: req.headers['x-amp-device-id'],
+   browserSessionId: req.headers['x-amp-session-id'],
+ });
+
+ await session.run(async (s) => {
+   s.trackUserMessage('What is retention?');
+   // All events now carry [Amplitude] Session Replay ID = deviceId/browserSessionId
+ });
+ ```
+
 
  ### tool()
 
  Higher-order function wrapping functions to auto-track as `[Agent] Tool Call` events:
@@ -424,6 +539,58 @@ const ai = new AmplitudeAI({ apiKey: 'YOUR_API_KEY', config });
  | `onEventCallback` | Callback invoked after every tracked event `(event, statusCode, message) => void` |
  | `propagateContext` | Enable cross-service context propagation |
 
+ ## Context Dict Conventions
+
+ The `context` parameter on `ai.agent()` accepts an arbitrary `Record<string, unknown>` that is JSON-serialized and attached to every event as `[Agent] Context`. This is the recommended way to add segmentation dimensions without requiring new global properties.
+
+ **Recommended keys:**
+
+ | Key | Example Values | Use Case |
+ | --- | --- | --- |
+ | `agent_type` | `"planner"`, `"executor"`, `"retriever"`, `"router"` | Filter/group analytics by agent role in multi-agent systems. |
+ | `experiment_variant` | `"control"`, `"treatment-v2"`, `"prompt-rewrite-a"` | Segment AI sessions by A/B test variant. Compare quality scores, abandonment rates, or cost across experiment arms. |
+ | `feature_flag` | `"new-rag-pipeline"`, `"reasoning-model-enabled"` | Track which feature flags were active during the session. |
+ | `surface` | `"chat"`, `"search"`, `"copilot"`, `"email-draft"` | Identify which UI surface or product area triggered the AI interaction. |
+ | `prompt_revision` | `"v7"`, `"abc123"`, `"2026-02-15"` | Track which prompt version was used. Detect prompt regression when combined with `agentVersion`. |
+ | `deployment_region` | `"us-east-1"`, `"eu-west-1"` | Segment by deployment region for latency analysis or compliance tracking. |
+ | `canary_group` | `"canary"`, `"stable"` | Identify canary vs. stable deployments for progressive rollout monitoring. |
+
+ **Example:**
+
+ ```typescript
+ const agent = ai.agent('support-bot', {
+   userId: 'u1',
+   agentVersion: '4.2.0',
+   context: {
+     agent_type: 'executor',
+     experiment_variant: 'reasoning-enabled',
+     surface: 'chat',
+     feature_flag: 'new-rag-pipeline',
+     prompt_revision: 'v7',
+   },
+ });
+
+ // All events from this agent (and its sessions, child agents, and provider
+ // wrappers) will include [Agent] Context with these keys.
+ ```
+
+ **Context merging in child agents:**
+
+ ```typescript
+ const parent = ai.agent('orchestrator', {
+   context: { experiment_variant: 'treatment', surface: 'chat' },
+ });
+ const child = parent.child('researcher', {
+   context: { agent_type: 'retriever' },
+ });
+ // child context = { experiment_variant: 'treatment', surface: 'chat', agent_type: 'retriever' }
+ // Child keys override parent keys; parent keys absent from the child are preserved.
+ ```
+
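The documented merge rule behaves like an object spread with the parent applied first. A minimal sketch of that semantics (`mergeContext` is a hypothetical name, not an SDK export):

```typescript
type AgentContext = Record<string, unknown>;

// Child keys win; parent keys not present on the child are preserved.
const mergeContext = (parent: AgentContext, child: AgentContext): AgentContext => ({
  ...parent,
  ...child,
});

const merged = mergeContext(
  { experiment_variant: 'treatment', surface: 'chat' },
  { agent_type: 'retriever' },
);
// { experiment_variant: 'treatment', surface: 'chat', agent_type: 'retriever' }
```

If the child also set `surface`, its value would replace the parent's, matching the override rule stated above.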
+ **Querying in Amplitude:** The `[Agent] Context` property is a JSON string. Use Amplitude's JSON property parsing to extract individual keys for charts, cohorts, and funnels. For example, group by `[Agent] Context.agent_type` to see metrics by agent role.
+
+ > **Note on `experiment_variant` and server-generated events:** Context keys appear on all SDK-emitted events (`[Agent] User Message`, `[Agent] AI Response`, etc.). Server-generated events (`[Agent] Session Evaluation`, `[Agent] Score` with `source="ai"`) do not yet inherit context keys. To segment server-generated quality scores by experiment arm, use Amplitude Derived Properties to extract from `[Agent] Context` on SDK events.
+
 
  ## Privacy & Content Control
 
  Three content modes control what data is sent to Amplitude:
@@ -548,6 +715,18 @@ s.trackAiMessage(response.content, 'gpt-4o', 'openai', latencyMs, {
  });
  ```
 
+ ## Semantic Cache Tracking
+
+ Track full-response semantic cache hits (distinct from the token-level prompt caching above):
+
+ ```typescript
+ s.trackAiMessage(cachedResponse.content, 'gpt-4o', 'openai', latencyMs, {
+   wasCached: true, // served from Redis/semantic cache
+ });
+ ```
+
+ Maps to `[Agent] Was Cached`. Enables "cache hit rate" charts and cost-optimization analysis. Only emitted when `true`; omitted (not `false`) when the response was not cached.
+
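Because `[Agent] Was Cached` is emitted only on hits, a hit-rate metric counts explicit `true` values over all AI responses. A sketch with hypothetical event objects (real event payloads carry many more properties):

```typescript
type AiResponseEvent = { wasCached?: boolean };

// The property is present only when the response was served from cache.
const events: AiResponseEvent[] = [
  { wasCached: true },
  {},                  // uncached: property omitted, not false
  { wasCached: true },
  {},
];

const hits = events.filter((e) => e.wasCached === true).length;
const hitRate = hits / events.length;
console.log(hitRate); // 0.5
```

This mirrors how a "cache hit rate" chart in Amplitude would treat the omitted property: absence counts as a miss.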
 
  ## Model Tier Classification
 
  Models are automatically classified into tiers for cost/performance analysis:
@@ -759,6 +938,42 @@ s.trackAiMessage(response.content, 'gpt-4o', 'openai', latencyMs, {
  });
  ```
 
+ ## Implicit Feedback
+
+ Track behavioral signals that indicate whether a response met the user's need, without requiring explicit ratings:
+
+ ```typescript
+ // User asks a question
+ s.trackUserMessage('How do I create a funnel?');
+
+ // AI responds — user copies the answer (positive signal)
+ s.trackAiMessage('To create a funnel, go to...', 'gpt-4o', 'openai', latencyMs, {
+   wasCopied: true,
+ });
+
+ // User regenerates (negative signal — first response wasn't good enough)
+ s.trackUserMessage('How do I create a funnel?', {
+   isRegeneration: true,
+ });
+
+ // User edits their question (refining intent)
+ s.trackUserMessage('How do I create a conversion funnel for signups?', {
+   isEdit: true,
+   editedMessageId: originalMsgId, // links the edit to the original
+ });
+ ```
+
+ Track abandonment at session end — a low `abandonmentTurn` (e.g., 1) strongly signals first-response dissatisfaction:
+
+ ```typescript
+ agent.trackSessionEnd({
+   sessionId: 'sess-1',
+   abandonmentTurn: 1, // user left after first AI response
+ });
+ ```
+
+ These signals map to `[Agent] Was Copied`, `[Agent] Is Regeneration`, `[Agent] Is Edit`, `[Agent] Edited Message ID`, and `[Agent] Abandonment Turn`. Use them in Amplitude to build quality dashboards without requiring user surveys.
+
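These per-message flags aggregate naturally into session-level quality metrics. A sketch with hypothetical event shapes (the real events carry many more properties):

```typescript
interface TurnFlags {
  isRegeneration?: boolean;
  wasCopied?: boolean;
}

const turns: TurnFlags[] = [
  {},                        // plain question
  { isRegeneration: true },  // negative signal
  { wasCopied: true },       // positive signal
  {},
];

const regenerationRate =
  turns.filter((t) => t.isRegeneration === true).length / turns.length;
const copyRate =
  turns.filter((t) => t.wasCopied === true).length / turns.length;
console.log(regenerationRate, copyRate); // 0.25 0.25
```

A rising regeneration rate (or a falling copy rate) across releases is the kind of trend these events make chartable without any explicit user survey.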
 
  ## tool() and observe() HOFs
 
  ### tool()
@@ -908,6 +1123,78 @@ agent.trackSessionEnrichment(enrichments, {
  });
  ```
 
+ ### End-to-End Example: `customer_enriched` Mode
+
+ This mode is for teams that run their own evaluation pipeline (or can't send message content to Amplitude) but still want rich session-level analytics. Here's a complete workflow:
+
+ ```typescript
+ import {
+   AIConfig,
+   AmplitudeAI,
+   ContentMode,
+   MessageLabel,
+   RubricScore,
+   SessionEnrichments,
+   TopicClassification,
+ } from '@amplitude/ai';
+
+ // 1. Configure: no content sent to Amplitude
+ const ai = new AmplitudeAI({
+   apiKey: process.env.AMPLITUDE_AI_API_KEY!,
+   config: new AIConfig({
+     contentMode: ContentMode.CUSTOMER_ENRICHED,
+   }),
+ });
+
+ const agent = ai.agent('support-bot', { agentVersion: '2.1.0' });
+
+ // 2. Run the conversation — content is NOT sent (metadata only)
+ const session = agent.session({ userId: 'user-42' });
+ const { sessionId, messageIds } = await session.run(async (s) => {
+   const msgIds: string[] = [];
+   msgIds.push(s.trackUserMessage('Why was I charged twice?'));
+   msgIds.push(
+     s.trackAiMessage(
+       aiResponse.content,
+       'gpt-4o',
+       'openai',
+       latencyMs,
+     ),
+   );
+   return { sessionId: s.sessionId, messageIds: msgIds };
+ });
+
+ // 3. Run your eval pipeline on the raw messages (e.g., your own LLM judge)
+ const evalResults = await myEvalPipeline(conversationHistory);
+
+ // 4. Ship enrichments back to Amplitude
+ const enrichments = new SessionEnrichments({
+   qualityScore: evalResults.quality,
+   sentimentScore: evalResults.sentiment,
+   overallOutcome: evalResults.outcome,
+   topicClassifications: {
+     'billing': new TopicClassification({
+       topic: 'billing-dispute',
+       confidence: 0.92,
+     }),
+   },
+   rubricScores: [
+     new RubricScore({ name: 'accuracy', score: 4, maxScore: 5 }),
+     new RubricScore({ name: 'helpfulness', score: 5, maxScore: 5 }),
+   ],
+   messageLabels: {
+     [messageIds[0]]: [
+       new MessageLabel({ key: 'intent', value: 'billing-dispute', confidence: 0.94 }),
+     ],
+   },
+   customMetadata: { eval_model: 'gpt-4o-judge-v2' },
+ });
+
+ agent.trackSessionEnrichment(enrichments, { sessionId });
+ ```
+
+ This produces the same Amplitude event properties as Amplitude's built-in server-side enrichment (topics, rubrics, outcomes, message labels), but sourced from your pipeline. Use it when compliance requires zero-content transmission, or when you need custom evaluation logic beyond what the built-in enrichment provides.
+
 
  ### Available Enrichment Fields
 
  - **Quality & Sentiment**: `qualityScore`, `sentimentScore`
@@ -924,7 +1211,11 @@ agent.trackSessionEnrichment(enrichments, {
 
  ### Message Labels
 
- Attach classification labels to individual messages within a session:
+ Attach classification labels to individual messages within a session. Labels are flexible key-value pairs for filtering and segmentation in Amplitude.
+
+ **Common use cases:** routing tags (`flow`, `surface`), classifier output (`intent`, `sentiment`, `toxicity`), business context (`tier`, `plan`).
+
+ **Inline labels** (at tracking time):
 
  ```typescript
  import { MessageLabel } from '@amplitude/ai';
@@ -945,6 +1236,29 @@ s.trackUserMessage('I want to cancel my subscription', {
  });
  ```
 
+ **Retrospective labels** (after the session, from a background pipeline):
+
+ When classifier results arrive after the session ends, attach them via `SessionEnrichments.messageLabels`, keyed by the `messageId` returned from tracking calls:
+
+ ```typescript
+ import { MessageLabel, SessionEnrichments } from '@amplitude/ai';
+
+ const enrichments = new SessionEnrichments({
+   messageLabels: {
+     [userMsgId]: [
+       new MessageLabel({ key: 'intent', value: 'cancellation', confidence: 0.94 }),
+     ],
+     [aiMsgId]: [
+       new MessageLabel({ key: 'quality', value: 'good', confidence: 0.91 }),
+     ],
+   },
+ });
+
+ agent.trackSessionEnrichment(enrichments, { sessionId: 'sess-abc123' });
+ ```
+
+ Labels are emitted as `[Agent] Message Labels` on the event. In Amplitude, filter or group by label key/value to build charts like "messages by intent" or "sessions where flow=onboarding".
+
 
  ## Debug and Dry-Run Modes
 
  ### Debug Mode
@@ -1139,18 +1453,67 @@ const handler = new AmplitudeCallbackHandler({
 
  ### OpenTelemetry
 
+ Two exporters add Amplitude as a destination alongside your existing trace backend (Datadog, Honeycomb, Jaeger, etc.):
+
  ```typescript
- import { AmplitudeAgentExporter, AmplitudeGenAIExporter } from '@amplitude/ai';
+ import {
+   AmplitudeAgentExporter,
+   AmplitudeGenAIExporter,
+ } from '@amplitude/ai';
+ import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
+ import {
+   BatchSpanProcessor,
+   SimpleSpanProcessor,
+ } from '@opentelemetry/sdk-trace-base';
 
- // Exporters for OTLP-compatible pipelines
+ const provider = new NodeTracerProvider();
+
+ // GenAI exporter — converts gen_ai.* spans into Amplitude AI events
+ provider.addSpanProcessor(
+   new BatchSpanProcessor(
+     new AmplitudeGenAIExporter({
+       apiKey: process.env.AMPLITUDE_AI_API_KEY!,
+     }),
+   ),
+ );
+
+ // Agent exporter — converts agent.* spans into Amplitude session events
+ provider.addSpanProcessor(
+   new SimpleSpanProcessor(
+     new AmplitudeAgentExporter({
+       apiKey: process.env.AMPLITUDE_AI_API_KEY!,
+     }),
+   ),
+ );
+
+ provider.register();
  ```
 
- Spans with `gen_ai.provider.name` or `gen_ai.system` are treated as GenAI spans; other spans are ignored.
+ Only spans with `gen_ai.provider.name` or `gen_ai.system` attributes are processed; all other spans are silently ignored. This means it's safe to add the exporter to a pipeline that produces mixed (GenAI + HTTP + DB) spans.
+
+ **Attribute mapping reference:**
 
- **What the OTEL bridge maps:** model (prefers `gen_ai.response.model`), provider, tokens (input/output/total), cache tokens (`gen_ai.usage.cache_read.input_tokens`, `gen_ai.usage.cache_creation.input_tokens`), cache-aware cost, latency, temperature, top_p, max_output_tokens, finish reason, errors, user messages (from `gen_ai.input.messages`).
+ | OTEL Span Attribute | Amplitude Event Property | Notes |
+ | --- | --- | --- |
+ | `gen_ai.response.model` / `gen_ai.request.model` | `[Agent] Model` | Response model preferred |
+ | `gen_ai.system` / `gen_ai.provider.name` | `[Agent] Provider` | |
+ | `gen_ai.usage.input_tokens` | `[Agent] Input Tokens` | |
+ | `gen_ai.usage.output_tokens` | `[Agent] Output Tokens` | |
+ | `gen_ai.usage.total_tokens` | `[Agent] Total Tokens` | Derived if not present |
+ | `gen_ai.usage.cache_read.input_tokens` | `[Agent] Cache Read Tokens` | |
+ | `gen_ai.usage.cache_creation.input_tokens` | `[Agent] Cache Creation Tokens` | |
+ | `gen_ai.request.temperature` | `[Agent] Temperature` | |
+ | `gen_ai.request.top_p` | `[Agent] Top P` | |
+ | `gen_ai.request.max_output_tokens` | `[Agent] Max Output Tokens` | |
+ | `gen_ai.response.finish_reasons` | `[Agent] Finish Reason` | |
+ | `gen_ai.input.messages` | `[Agent] LLM Message` | Only if content mode allows |
+ | Span duration | `[Agent] Latency Ms` | |
+ | Span status ERROR | `[Agent] Is Error`, `[Agent] Error Message` | |
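The filtering and mapping rules in the table can be sketched as a plain function over span attributes (illustrative only; the real exporters operate on OTEL `ReadableSpan` objects, and `mapGenAiAttributes` is a hypothetical name):

```typescript
type SpanAttributes = Record<string, unknown>;

// Returns null for non-GenAI spans, mirroring the "silently ignored" rule.
function mapGenAiAttributes(attrs: SpanAttributes): Record<string, unknown> | null {
  const provider = attrs['gen_ai.provider.name'] ?? attrs['gen_ai.system'];
  if (provider === undefined) return null;

  const input = (attrs['gen_ai.usage.input_tokens'] as number) ?? 0;
  const output = (attrs['gen_ai.usage.output_tokens'] as number) ?? 0;
  return {
    '[Agent] Provider': provider,
    // Response model preferred over request model:
    '[Agent] Model':
      attrs['gen_ai.response.model'] ?? attrs['gen_ai.request.model'],
    '[Agent] Input Tokens': input,
    '[Agent] Output Tokens': output,
    // Total is derived when the span does not carry it:
    '[Agent] Total Tokens': attrs['gen_ai.usage.total_tokens'] ?? input + output,
  };
}

mapGenAiAttributes({ 'http.method': 'GET' }); // null: not a GenAI span
```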
 
  **Not available via OTEL (use native wrappers):** reasoning content/tokens, TTFB, streaming detection, implicit feedback, file attachments, event graph linking (parent_message_id).
 
+ **When to use OTEL vs. native wrappers:** If you already have `@opentelemetry/instrumentation-openai` or similar producing GenAI spans, the OTEL bridge gives you Amplitude analytics with zero code changes. For richer tracking (implicit feedback, streaming metrics, attachments), use the native `wrapOpenAI()`/`wrapAnthropic()` wrappers alongside OTEL.
+
 
  ### LlamaIndex
 
  ```typescript
@@ -1180,6 +1543,73 @@
  import { AmplitudeCrewAIHooks } from '@amplitude/ai';
 
  In Node.js, `AmplitudeCrewAIHooks` throws a `ProviderError` by design. Use LangChain or OpenTelemetry integrations instead.
 
+ ## Data Flow
+
+ How events flow from your application to Amplitude charts:
+
+ ```
+ Your Application
+  ├── wrapOpenAI() / wrapAnthropic()     ─── auto-emits ──┐
+  ├── session.trackUserMessage()         ─── manual ──────┤
+  ├── session.trackAiMessage()           ─── manual ──────┤
+  ├── agent.trackToolCall()              ─── manual ──────┤
+  ├── agent.trackSessionEnrichment()     ─── manual ──────┤
+  └── OTEL exporter (AmplitudeGenAI...)  ─── bridge ──────┤
+                                                          │
+ AmplitudeAI client ◄──────────────────────────────────────┘
+  │
+  ├── validate (if enabled)
+  ├── apply middleware chain
+  ├── batch events
+  │
+  ▼
+ Amplitude HTTP API
+  │
+  ┌─────────────┴──────────────┐
+  │                            │
+ Amplitude Charts        LLM Enrichment
+ (immediate querying)    Pipeline (async)
+                               │
+                               ▼
+               [Agent] Session Evaluation
+               [Agent] Score events
+               (topic, rubric, outcome)
+ ```
+
+ **Key points:**
+ - All paths converge at the `AmplitudeAI` client, which batches and sends events.
+ - Events are available for charting within seconds of ingestion.
+ - The LLM Enrichment Pipeline runs asynchronously after session close (only when `contentMode: 'full'`). It produces server-side events like `[Agent] Session Evaluation` and `[Agent] Score`.
+ - With `contentMode: 'customer_enriched'`, the enrichment pipeline is skipped — you provide your own enrichments via `trackSessionEnrichment()`.
+
+ ## Which Integration Should I Use?
+
+ Start here and pick the first tier that satisfies your analytics needs:
+
+ ```
+                 Do you need per-user analytics
+                 (funnels, cohorts, retention)?
+                             │
+                  ┌─── No ──┴── Yes ──┐
+                  │                   │
+         Tier 0: Zero-Code     Do you need session
+         (CLI auto-patch)      grouping & enrichment?
+         Cost, latency,              │
+         tokens, errors.    ┌─ No ──┴── Yes ──┐
+                            │                 │
+                   Tier 1: Events      Do you control
+                   Per-call tracking   the LLM call site?
+                   + userId                  │
+                                 ┌── Yes ──┴── No ──┐
+                                 │                  │
+                        Tier 2: Sessions    Tier 3: OTEL Bridge
+                        session.run()       Add exporter to
+                        Full enrichment     existing OTEL pipeline
+                        Implicit feedback   Limited to OTEL attrs
+ ```
+
+ **Rule of thumb:** If you own the LLM call site, start with **Tier 2** (sessions). If you don't (e.g., a third-party framework exports OTEL spans), use **Tier 3** (OTEL bridge). If you just want aggregate cost monitoring without user analytics, **Tier 0** (zero-code) is ready in 60 seconds.
+
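The decision tree collapses to three questions. A sketch (tier names from this README; `pickTier` is hypothetical, not an SDK export):

```typescript
type Tier =
  | 'Tier 0: Zero-Code'
  | 'Tier 1: Events'
  | 'Tier 2: Sessions'
  | 'Tier 3: OTEL Bridge';

function pickTier(needs: {
  perUserAnalytics: boolean; // funnels, cohorts, retention
  sessionGrouping: boolean;  // session grouping & enrichment
  ownsCallSite: boolean;     // you control the LLM call site
}): Tier {
  if (!needs.perUserAnalytics) return 'Tier 0: Zero-Code';
  if (!needs.sessionGrouping) return 'Tier 1: Events';
  return needs.ownsCallSite ? 'Tier 2: Sessions' : 'Tier 3: OTEL Bridge';
}

pickTier({ perUserAnalytics: true, sessionGrouping: true, ownsCallSite: true });
// 'Tier 2: Sessions'
```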
 
  ## Integration Patterns
 
  ### Pattern A: Single-Request API Endpoint
@@ -1 +1 @@
- {"version":3,"file":"server.d.ts","names":[],"sources":["../../src/mcp/server.ts"],"sourcesContent":[],"mappings":";;;KAqCK,WAAA;cAaC,iFAGS;AAlDqD,cA2H9D,YAzFU,EAAA,GAAA,GAyFS,SAzFT;AAAA,cAqaV,YAxZA,EAAA,GAAA,GAwZyB,OAhW9B,CAAA,IAAA,CArDc"}
+ {"version":3,"file":"server.d.ts","names":[],"sources":["../../src/mcp/server.ts"],"sourcesContent":[],"mappings":";;;KAqCK,WAAA;cAaC,iFAGS;AAlDqD,cA2H9D,YAzFU,EAAA,GAAA,GAyFS,SAzFT;AAAA,cAqaV,YAxZA,EAAA,GAAA,GAwZyB,OAhW9B,CAAA,IAAA,CAAA"}
@@ -190,7 +190,7 @@ const createServer = () => {
  text: JSON.stringify({
    query,
    total: selected.length,
-   results: selected.map(({ priority, ...rest }) => rest)
+   results: selected.map(({ priority: _priority, ...rest }) => rest)
  }, null, 2)
  }] };
  });
@@ -1 +1 @@
- {"version":3,"file":"server.js","names":["_catalogCache: EventCatalog | undefined","z.string","z\n .enum","z\n .number","sources: Array<{ name: string; path: string }>","results: Array<{\n source: string;\n heading: string;\n snippet: string;\n line: number;\n priority: number;\n }>","content: string"],"sources":["../../src/mcp/server.ts"],"sourcesContent":["import { readFileSync } from 'node:fs';\nimport { dirname, join } from 'node:path';\nimport { fileURLToPath } from 'node:url';\nimport { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';\nimport { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';\nimport { z } from 'zod';\nimport {\n MCP_PROMPTS,\n MCP_RESOURCES,\n MCP_SERVER_NAME,\n MCP_TOOLS,\n} from './contract.js';\nimport { getIntegrationPatterns } from './patterns.js';\nimport { analyzeFileInstrumentation } from './validate-file.js';\n\ntype EventSchema = {\n event_type: string;\n description?: string;\n event_properties?: Record<string, unknown>;\n};\n\ntype EventCatalog = {\n events?: EventSchema[];\n};\n\nconst packageRoot = join(dirname(fileURLToPath(import.meta.url)), '..', '..');\n\nlet _catalogCache: EventCatalog | undefined;\n\nconst readEventCatalog = (): EventCatalog => {\n if (_catalogCache) return _catalogCache;\n const filePath = join(packageRoot, 'data', 'agent_event_catalog.json');\n const raw = readFileSync(filePath, 'utf8');\n _catalogCache = JSON.parse(raw) as EventCatalog;\n return _catalogCache;\n};\n\ntype ContentTier = 'full' | 'metadata_only' | 'customer_enriched';\n\nconst normalizeContentTier = (value: unknown): ContentTier => {\n if (\n value === 'full' ||\n value === 'metadata_only' ||\n value === 'customer_enriched'\n ) {\n return value;\n }\n return 'full';\n};\n\nconst buildInstrumentationGuidance = (\n framework: string,\n provider: string,\n contentTier: ContentTier,\n): string[] => {\n const providerSetup =\n provider === 'openai' || provider === 'anthropic'\n ? 
`use ${provider} wrapper/swap integration (for example: \\`new ${provider === 'openai' ? 'OpenAI' : 'Anthropic'}({ amplitude: ai, ... })\\`)`\n : `use the ${provider} provider wrapper class from @amplitude/ai`;\n\n const frameworkStep =\n framework === 'express' || framework === 'koa' || framework === 'fastify'\n ? 'wire `createAmplitudeAIMiddleware()` for per-request identity propagation and run LLM calls inside session context'\n : framework === 'next' || framework === 'nextjs'\n ? 'wrap route handlers in `session.run()` so every LLM call inherits user/session lineage'\n : 'attach `ai.agent(...).session(...)` where request or conversation identity exists';\n\n const tierGuidance =\n contentTier === 'full'\n ? 'Content tier (`full`): maximum insight and automatic server enrichments. Prefer `redactPii: true` (+ optional `customRedactionPatterns`).'\n : contentTier === 'metadata_only'\n ? 'Content tier (`metadata_only`): no content leaves your infrastructure; keep token/cost/latency/session analytics.'\n : 'Content tier (`customer_enriched`): no content leaves infra; send your own labels via `trackSessionEnrichment(...)` for advanced analytics.';\n\n const nextForTier =\n contentTier === 'metadata_only'\n ? 'if you need quality/topic analytics without sending content, move to `customer_enriched` and emit structured enrichments.'\n : contentTier === 'customer_enriched'\n ? 
'if policy allows and you want zero eval-code overhead, consider `full` with redaction for automatic server enrichments.'\n : 'if policy prohibits content egress, switch to `metadata_only` or `customer_enriched` without changing session instrumentation.';\n\n return [\n `Framework: ${framework}`,\n `Provider: ${provider}`,\n `Content tier: ${contentTier}`,\n '',\n 'Now:',\n '1) install @amplitude/ai and @amplitude/analytics-node',\n '2) initialize `AmplitudeAI` with API key',\n `3) ${providerSetup}`,\n '4) bind session context with `const session = ai.agent(...).session(...)` and run calls in `session.run(...)`',\n `5) ${frameworkStep}`,\n '',\n 'Next:',\n `- ${nextForTier}`,\n '- treat `patch({ amplitudeAI: ai })` as migration quickstart, not steady-state production.',\n '',\n 'Why:',\n '- session lineage unlocks scoring, enrichments, and reliable product-to-AI funnels.',\n '- wrapper/swap integration preserves fidelity while keeping implementation effort low.',\n `- ${tierGuidance}`,\n '- privacy controls apply before events leave your process.',\n '',\n 'Validate:',\n '- run `amplitude-ai doctor` and verify [Agent] session-scoped events.',\n ];\n};\n\nconst headingPriority = (heading: string): number => {\n const normalized = heading.toLowerCase();\n if (\n normalized.includes('choose your integration tier') ||\n normalized.includes('privacy & content control')\n ) {\n return 3;\n }\n if (\n normalized.includes('boundagent') ||\n normalized.includes('bound agent') ||\n normalized.includes('session')\n ) {\n return 2;\n }\n return 1;\n};\n\nconst createServer = (): McpServer => {\n const server = new McpServer({\n name: MCP_SERVER_NAME,\n version: '0.1.0',\n });\n\n server.registerTool(\n MCP_TOOLS.getEventSchema,\n {\n title: 'Get Event Schema',\n description:\n 'Return Amplitude AI event schema and event-property definitions.',\n inputSchema: {\n event_type: z.string().optional(),\n },\n },\n // biome-ignore lint/suspicious/noExplicitAny: SDK callback type is 
intentionally broad.\n async (args: any) => {\n const eventType =\n typeof args?.event_type === 'string' ? args.event_type : undefined;\n const catalog = readEventCatalog();\n const events = catalog.events ?? [];\n const selected = eventType\n ? events.filter((event) => event.event_type === eventType)\n : events;\n\n return {\n content: [\n {\n type: 'text',\n text: JSON.stringify(\n {\n total: selected.length,\n events: selected,\n },\n null,\n 2,\n ),\n },\n ],\n };\n },\n );\n\n server.registerTool(\n MCP_TOOLS.getIntegrationPattern,\n {\n title: 'Get Integration Pattern',\n description:\n 'Return canonical instrumentation patterns for @amplitude/ai.',\n inputSchema: {\n id: z.string().optional(),\n },\n },\n // biome-ignore lint/suspicious/noExplicitAny: SDK callback type is intentionally broad.\n async (args: any) => {\n const id = typeof args?.id === 'string' ? args.id : undefined;\n const patterns = getIntegrationPatterns();\n const selected = id\n ? patterns.filter((pattern) => pattern.id === id)\n : patterns;\n return {\n content: [\n {\n type: 'text',\n text: JSON.stringify({ patterns: selected }, null, 2),\n },\n ],\n };\n },\n );\n\n server.registerTool(\n MCP_TOOLS.validateSetup,\n {\n title: 'Validate Setup',\n description: 'Validate environment variables required for @amplitude/ai.',\n inputSchema: {},\n },\n async () => {\n const required = ['AMPLITUDE_AI_API_KEY'];\n const missing = required.filter((name) => !process.env[name]);\n const status = missing.length === 0 ? 
'ok' : 'missing_env';\n return {\n content: [\n {\n type: 'text',\n text: JSON.stringify(\n {\n status,\n missing_env: missing,\n },\n null,\n 2,\n ),\n },\n ],\n };\n },\n );\n\n server.registerTool(\n MCP_TOOLS.suggestInstrumentation,\n {\n title: 'Suggest Instrumentation',\n description:\n 'Suggest practical next instrumentation steps from minimal hints.',\n inputSchema: {\n framework: z.string().optional(),\n provider: z.string().optional(),\n content_tier: z\n .enum(['full', 'metadata_only', 'customer_enriched'])\n .optional()\n .describe('Desired content tier (default: full)'),\n },\n },\n // biome-ignore lint/suspicious/noExplicitAny: SDK callback type is intentionally broad.\n async (args: any) => {\n const framework =\n typeof args?.framework === 'string' ? args.framework : 'node';\n const provider =\n typeof args?.provider === 'string' ? args.provider : 'openai';\n const contentTier = normalizeContentTier(args?.content_tier);\n const guidance = buildInstrumentationGuidance(\n framework,\n provider,\n contentTier,\n );\n\n return {\n content: [\n {\n type: 'text',\n text: guidance.join('\\n'),\n },\n ],\n };\n },\n );\n\n server.registerTool(\n MCP_TOOLS.validateFile,\n {\n title: 'Validate File Instrumentation',\n description:\n 'Analyze source code to detect LLM call sites and report which are instrumented vs uninstrumented.',\n inputSchema: {\n source: z.string().describe('Source code content to analyze'),\n },\n },\n // biome-ignore lint/suspicious/noExplicitAny: SDK callback type is intentionally broad.\n async (args: any) => {\n const source = typeof args?.source === 'string' ? args.source : '';\n const result = analyzeFileInstrumentation(source);\n return {\n content: [\n {\n type: 'text',\n text: JSON.stringify(result, null, 2),\n },\n ],\n };\n },\n );\n\n server.registerTool(\n MCP_TOOLS.searchDocs,\n {\n title: 'Search Documentation',\n description:\n 'Search the SDK README and API reference for a keyword or phrase. 
Returns matching sections with surrounding context.',\n inputSchema: {\n query: z.string().describe('Keyword or phrase to search for'),\n max_results: z\n .number()\n .optional()\n .describe('Maximum number of results to return (default: 5)'),\n },\n },\n // biome-ignore lint/suspicious/noExplicitAny: SDK callback type is intentionally broad.\n async (args: any) => {\n const query =\n typeof args?.query === 'string' ? args.query.toLowerCase() : '';\n const maxResults =\n typeof args?.max_results === 'number' ? args.max_results : 5;\n\n if (!query) {\n return {\n content: [\n {\n type: 'text',\n text: JSON.stringify({ error: 'query is required' }),\n },\n ],\n };\n }\n\n const sources: Array<{ name: string; path: string }> = [\n { name: 'README.md', path: join(packageRoot, 'README.md') },\n { name: 'llms-full.txt', path: join(packageRoot, 'llms-full.txt') },\n ];\n\n const results: Array<{\n source: string;\n heading: string;\n snippet: string;\n line: number;\n priority: number;\n }> = [];\n\n for (const { name, path } of sources) {\n let content: string;\n try {\n content = readFileSync(path, 'utf8');\n } catch {\n continue;\n }\n const lines = content.split('\\n');\n let currentHeading = '(top)';\n for (let i = 0; i < lines.length; i++) {\n const line = lines[i] ?? 
'';\n if (line.startsWith('#')) {\n currentHeading = line.replace(/^#+\\s*/, '');\n }\n if (line.toLowerCase().includes(query)) {\n const start = Math.max(0, i - 2);\n const end = Math.min(lines.length, i + 3);\n results.push({\n source: name,\n heading: currentHeading,\n snippet: lines.slice(start, end).join('\\n'),\n line: i + 1,\n priority: headingPriority(currentHeading),\n });\n }\n }\n }\n\n results.sort((a, b) => b.priority - a.priority || a.line - b.line);\n const selected = results.slice(0, maxResults);\n\n return {\n content: [\n {\n type: 'text',\n text: JSON.stringify(\n {\n query,\n total: selected.length,\n results: selected.map(({ priority, ...rest }) => rest),\n },\n null,\n 2,\n ),\n },\n ],\n };\n },\n );\n\n server.registerResource(\n 'event_schema',\n MCP_RESOURCES.eventSchema,\n {\n title: 'Amplitude AI Event Schema',\n description: 'Current event and property catalog for @amplitude/ai.',\n mimeType: 'application/json',\n },\n async () => ({\n contents: [\n {\n uri: MCP_RESOURCES.eventSchema,\n mimeType: 'application/json',\n text: JSON.stringify(readEventCatalog(), null, 2),\n },\n ],\n }),\n );\n\n server.registerResource(\n 'integration_patterns',\n MCP_RESOURCES.integrationPatterns,\n {\n title: 'Amplitude AI Integration Patterns',\n description: 'Canonical setup patterns used by docs and agent workflows.',\n mimeType: 'application/json',\n },\n async () => ({\n contents: [\n {\n uri: MCP_RESOURCES.integrationPatterns,\n mimeType: 'application/json',\n text: JSON.stringify({ patterns: getIntegrationPatterns() }, null, 2),\n },\n ],\n }),\n );\n\n server.registerPrompt(\n MCP_PROMPTS.instrumentApp,\n {\n title: 'Instrument App',\n description:\n 'Guided checklist for instrumenting an app with @amplitude/ai from zero to validated setup.',\n argsSchema: {\n framework: z.string().optional(),\n provider: z.string().optional(),\n },\n },\n // biome-ignore lint/suspicious/noExplicitAny: SDK callback type is intentionally broad.\n async (args: any) 
=> {\n const framework =\n typeof args?.framework === 'string' ? args.framework : 'node';\n const provider =\n typeof args?.provider === 'string' ? args.provider : 'openai';\n return {\n messages: [\n {\n role: 'user',\n content: {\n type: 'text',\n text: `Instrument this ${framework} app with ${provider} using @amplitude/ai. Start from the production default: wrapper/swap integration plus ai.agent(...).session(...), choose a content tier (full/metadata_only/customer_enriched) with privacy defaults, and treat patch() only as a migration fallback. Then validate setup and annotate any uninstrumented call sites.`,\n },\n },\n ],\n };\n },\n );\n\n return server;\n};\n\nconst runMcpServer = async (): Promise<void> => {\n const server = createServer();\n const transport = new StdioServerTransport();\n await server.connect(transport);\n};\n\nexport { buildInstrumentationGuidance, createServer, runMcpServer };\n"],"mappings":";;;;;;;;;;;AAyBA,MAAM,cAAc,KAAK,QAAQ,cAAc,OAAO,KAAK,IAAI,CAAC,EAAE,MAAM,KAAK;AAE7E,IAAIA;AAEJ,MAAM,yBAAuC;AAC3C,KAAI,cAAe,QAAO;CAE1B,MAAM,MAAM,aADK,KAAK,aAAa,QAAQ,2BAA2B,EACnC,OAAO;AAC1C,iBAAgB,KAAK,MAAM,IAAI;AAC/B,QAAO;;AAKT,MAAM,wBAAwB,UAAgC;AAC5D,KACE,UAAU,UACV,UAAU,mBACV,UAAU,oBAEV,QAAO;AAET,QAAO;;AAGT,MAAM,gCACJ,WACA,UACA,gBACa;CACb,MAAM,gBACJ,aAAa,YAAY,aAAa,cAClC,OAAO,SAAS,gDAAgD,aAAa,WAAW,WAAW,YAAY,+BAC/G,WAAW,SAAS;CAE1B,MAAM,gBACJ,cAAc,aAAa,cAAc,SAAS,cAAc,YAC5D,uHACA,cAAc,UAAU,cAAc,WACpC,2FACA;CAER,MAAM,eACJ,gBAAgB,SACZ,8IACA,gBAAgB,kBACd,sHACA;CAER,MAAM,cACJ,gBAAgB,kBACZ,8HACA,gBAAgB,sBACd,4HACA;AAER,QAAO;EACL,cAAc;EACd,aAAa;EACb,iBAAiB;EACjB;EACA;EACA;EACA;EACA,MAAM;EACN;EACA,MAAM;EACN;EACA;EACA,KAAK;EACL;EACA;EACA;EACA;EACA;EACA,KAAK;EACL;EACA;EACA;EACA;EACD;;AAGH,MAAM,mBAAmB,YAA4B;CACnD,MAAM,aAAa,QAAQ,aAAa;AACxC,KACE,WAAW,SAAS,+BAA+B,IACnD,WAAW,SAAS,4BAA4B,CAEhD,QAAO;AAET,KACE,WAAW,SAAS,aAAa,IACjC,WAAW,SAAS,cAAc,IAClC,WAAW,SAAS,UAAU,CAE9B,QAAO;AAET,QAAO;;AAGT,MAAM,qBAAgC;CACpC,MAAM,SAAS,IAAI,UAAU;EAC3B,MAAM;EACN,SAAS;EACV,CAAC;AAEF,QAAO,aA
CL,UAAU,gBACV;EACE,OAAO;EACP,aACE;EACF,aAAa,EACX,YAAYC,YAAU,CAAC,UAAU,EAClC;EACF,EAED,OAAO,SAAc;EACnB,MAAM,YACJ,OAAO,MAAM,eAAe,WAAW,KAAK,aAAa;EAE3D,MAAM,SADU,kBAAkB,CACX,UAAU,EAAE;EACnC,MAAM,WAAW,YACb,OAAO,QAAQ,UAAU,MAAM,eAAe,UAAU,GACxD;AAEJ,SAAO,EACL,SAAS,CACP;GACE,MAAM;GACN,MAAM,KAAK,UACT;IACE,OAAO,SAAS;IAChB,QAAQ;IACT,EACD,MACA,EACD;GACF,CACF,EACF;GAEJ;AAED,QAAO,aACL,UAAU,uBACV;EACE,OAAO;EACP,aACE;EACF,aAAa,EACX,IAAIA,YAAU,CAAC,UAAU,EAC1B;EACF,EAED,OAAO,SAAc;EACnB,MAAM,KAAK,OAAO,MAAM,OAAO,WAAW,KAAK,KAAK;EACpD,MAAM,WAAW,wBAAwB;EACzC,MAAM,WAAW,KACb,SAAS,QAAQ,YAAY,QAAQ,OAAO,GAAG,GAC/C;AACJ,SAAO,EACL,SAAS,CACP;GACE,MAAM;GACN,MAAM,KAAK,UAAU,EAAE,UAAU,UAAU,EAAE,MAAM,EAAE;GACtD,CACF,EACF;GAEJ;AAED,QAAO,aACL,UAAU,eACV;EACE,OAAO;EACP,aAAa;EACb,aAAa,EAAE;EAChB,EACD,YAAY;EAEV,MAAM,UADW,CAAC,uBAAuB,CAChB,QAAQ,SAAS,CAAC,QAAQ,IAAI,MAAM;EAC7D,MAAM,SAAS,QAAQ,WAAW,IAAI,OAAO;AAC7C,SAAO,EACL,SAAS,CACP;GACE,MAAM;GACN,MAAM,KAAK,UACT;IACE;IACA,aAAa;IACd,EACD,MACA,EACD;GACF,CACF,EACF;GAEJ;AAED,QAAO,aACL,UAAU,wBACV;EACE,OAAO;EACP,aACE;EACF,aAAa;GACX,WAAWA,YAAU,CAAC,UAAU;GAChC,UAAUA,YAAU,CAAC,UAAU;GAC/B,cAAcC,SACN;IAAC;IAAQ;IAAiB;IAAoB,CAAC,CACpD,UAAU,CACV,SAAS,uCAAuC;GACpD;EACF,EAED,OAAO,SAAc;AAYnB,SAAO,EACL,SAAS,CACP;GACE,MAAM;GACN,MAVW,6BAJf,OAAO,MAAM,cAAc,WAAW,KAAK,YAAY,QAEvD,OAAO,MAAM,aAAa,WAAW,KAAK,WAAW,UACnC,qBAAqB,MAAM,aAAa,CAK3D,CAMoB,KAAK,KAAK;GAC1B,CACF,EACF;GAEJ;AAED,QAAO,aACL,UAAU,cACV;EACE,OAAO;EACP,aACE;EACF,aAAa,EACX,QAAQD,YAAU,CAAC,SAAS,iCAAiC,EAC9D;EACF,EAED,OAAO,SAAc;EAEnB,MAAM,SAAS,2BADA,OAAO,MAAM,WAAW,WAAW,KAAK,SAAS,GACf;AACjD,SAAO,EACL,SAAS,CACP;GACE,MAAM;GACN,MAAM,KAAK,UAAU,QAAQ,MAAM,EAAE;GACtC,CACF,EACF;GAEJ;AAED,QAAO,aACL,UAAU,YACV;EACE,OAAO;EACP,aACE;EACF,aAAa;GACX,OAAOA,YAAU,CAAC,SAAS,kCAAkC;GAC7D,aAAaE,YACF,CACR,UAAU,CACV,SAAS,mDAAmD;GAChE;EACF,EAED,OAAO,SAAc;EACnB,MAAM,QACJ,OAAO,MAAM,UAAU,WAAW,KAAK,MAAM,aAAa,GAAG;EAC/D,MAAM,aACJ,OAAO,MAAM,gBAAgB,WAAW,KAAK,cAAc;AAE7D,MAAI,CAAC,MACH,QAAO,EACL,SAAS,CACP;GACE,MAAM;GACN,MAAM,KAAK,UAAU,EAAE,OAAO,qBAAqB,CAAC;GACrD,C
ACF,EACF;EAGH,MAAMC,UAAiD,CACrD;GAAE,MAAM;GAAa,MAAM,KAAK,aAAa,YAAY;GAAE,EAC3D;GAAE,MAAM;GAAiB,MAAM,KAAK,aAAa,gBAAgB;GAAE,CACpE;EAED,MAAMC,UAMD,EAAE;AAEP,OAAK,MAAM,EAAE,MAAM,UAAU,SAAS;GACpC,IAAIC;AACJ,OAAI;AACF,cAAU,aAAa,MAAM,OAAO;WAC9B;AACN;;GAEF,MAAM,QAAQ,QAAQ,MAAM,KAAK;GACjC,IAAI,iBAAiB;AACrB,QAAK,IAAI,IAAI,GAAG,IAAI,MAAM,QAAQ,KAAK;IACrC,MAAM,OAAO,MAAM,MAAM;AACzB,QAAI,KAAK,WAAW,IAAI,CACtB,kBAAiB,KAAK,QAAQ,UAAU,GAAG;AAE7C,QAAI,KAAK,aAAa,CAAC,SAAS,MAAM,EAAE;KACtC,MAAM,QAAQ,KAAK,IAAI,GAAG,IAAI,EAAE;KAChC,MAAM,MAAM,KAAK,IAAI,MAAM,QAAQ,IAAI,EAAE;AACzC,aAAQ,KAAK;MACX,QAAQ;MACR,SAAS;MACT,SAAS,MAAM,MAAM,OAAO,IAAI,CAAC,KAAK,KAAK;MAC3C,MAAM,IAAI;MACV,UAAU,gBAAgB,eAAe;MAC1C,CAAC;;;;AAKR,UAAQ,MAAM,GAAG,MAAM,EAAE,WAAW,EAAE,YAAY,EAAE,OAAO,EAAE,KAAK;EAClE,MAAM,WAAW,QAAQ,MAAM,GAAG,WAAW;AAE7C,SAAO,EACL,SAAS,CACP;GACE,MAAM;GACN,MAAM,KAAK,UACT;IACE;IACA,OAAO,SAAS;IAChB,SAAS,SAAS,KAAK,EAAE,UAAU,GAAG,WAAW,KAAK;IACvD,EACD,MACA,EACD;GACF,CACF,EACF;GAEJ;AAED,QAAO,iBACL,gBACA,cAAc,aACd;EACE,OAAO;EACP,aAAa;EACb,UAAU;EACX,EACD,aAAa,EACX,UAAU,CACR;EACE,KAAK,cAAc;EACnB,UAAU;EACV,MAAM,KAAK,UAAU,kBAAkB,EAAE,MAAM,EAAE;EAClD,CACF,EACF,EACF;AAED,QAAO,iBACL,wBACA,cAAc,qBACd;EACE,OAAO;EACP,aAAa;EACb,UAAU;EACX,EACD,aAAa,EACX,UAAU,CACR;EACE,KAAK,cAAc;EACnB,UAAU;EACV,MAAM,KAAK,UAAU,EAAE,UAAU,wBAAwB,EAAE,EAAE,MAAM,EAAE;EACtE,CACF,EACF,EACF;AAED,QAAO,eACL,YAAY,eACZ;EACE,OAAO;EACP,aACE;EACF,YAAY;GACV,WAAWL,YAAU,CAAC,UAAU;GAChC,UAAUA,YAAU,CAAC,UAAU;GAChC;EACF,EAED,OAAO,SAAc;AAKnB,SAAO,EACL,UAAU,CACR;GACE,MAAM;GACN,SAAS;IACP,MAAM;IACN,MAAM,mBATZ,OAAO,MAAM,cAAc,WAAW,KAAK,YAAY,OASd,YAPzC,OAAO,MAAM,aAAa,WAAW,KAAK,WAAW,SAOS;IACzD;GACF,CACF,EACF;GAEJ;AAED,QAAO;;AAGT,MAAM,eAAe,YAA2B;CAC9C,MAAM,SAAS,cAAc;CAC7B,MAAM,YAAY,IAAI,sBAAsB;AAC5C,OAAM,OAAO,QAAQ,UAAU"}
1
+ {"version":3,"file":"server.js","names":["_catalogCache: EventCatalog | undefined","z.string","z\n .enum","z\n .number","sources: Array<{ name: string; path: string }>","results: Array<{\n source: string;\n heading: string;\n snippet: string;\n line: number;\n priority: number;\n }>","content: string"],"sources":["../../src/mcp/server.ts"],"sourcesContent":["import { readFileSync } from 'node:fs';\nimport { dirname, join } from 'node:path';\nimport { fileURLToPath } from 'node:url';\nimport { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';\nimport { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';\nimport { z } from 'zod';\nimport {\n MCP_PROMPTS,\n MCP_RESOURCES,\n MCP_SERVER_NAME,\n MCP_TOOLS,\n} from './contract.js';\nimport { getIntegrationPatterns } from './patterns.js';\nimport { analyzeFileInstrumentation } from './validate-file.js';\n\ntype EventSchema = {\n event_type: string;\n description?: string;\n event_properties?: Record<string, unknown>;\n};\n\ntype EventCatalog = {\n events?: EventSchema[];\n};\n\nconst packageRoot = join(dirname(fileURLToPath(import.meta.url)), '..', '..');\n\nlet _catalogCache: EventCatalog | undefined;\n\nconst readEventCatalog = (): EventCatalog => {\n if (_catalogCache) return _catalogCache;\n const filePath = join(packageRoot, 'data', 'agent_event_catalog.json');\n const raw = readFileSync(filePath, 'utf8');\n _catalogCache = JSON.parse(raw) as EventCatalog;\n return _catalogCache;\n};\n\ntype ContentTier = 'full' | 'metadata_only' | 'customer_enriched';\n\nconst normalizeContentTier = (value: unknown): ContentTier => {\n if (\n value === 'full' ||\n value === 'metadata_only' ||\n value === 'customer_enriched'\n ) {\n return value;\n }\n return 'full';\n};\n\nconst buildInstrumentationGuidance = (\n framework: string,\n provider: string,\n contentTier: ContentTier,\n): string[] => {\n const providerSetup =\n provider === 'openai' || provider === 'anthropic'\n ? 
`use ${provider} wrapper/swap integration (for example: \\`new ${provider === 'openai' ? 'OpenAI' : 'Anthropic'}({ amplitude: ai, ... })\\`)`\n : `use the ${provider} provider wrapper class from @amplitude/ai`;\n\n const frameworkStep =\n framework === 'express' || framework === 'koa' || framework === 'fastify'\n ? 'wire `createAmplitudeAIMiddleware()` for per-request identity propagation and run LLM calls inside session context'\n : framework === 'next' || framework === 'nextjs'\n ? 'wrap route handlers in `session.run()` so every LLM call inherits user/session lineage'\n : 'attach `ai.agent(...).session(...)` where request or conversation identity exists';\n\n const tierGuidance =\n contentTier === 'full'\n ? 'Content tier (`full`): maximum insight and automatic server enrichments. Prefer `redactPii: true` (+ optional `customRedactionPatterns`).'\n : contentTier === 'metadata_only'\n ? 'Content tier (`metadata_only`): no content leaves your infrastructure; keep token/cost/latency/session analytics.'\n : 'Content tier (`customer_enriched`): no content leaves infra; send your own labels via `trackSessionEnrichment(...)` for advanced analytics.';\n\n const nextForTier =\n contentTier === 'metadata_only'\n ? 'if you need quality/topic analytics without sending content, move to `customer_enriched` and emit structured enrichments.'\n : contentTier === 'customer_enriched'\n ? 
'if policy allows and you want zero eval-code overhead, consider `full` with redaction for automatic server enrichments.'\n : 'if policy prohibits content egress, switch to `metadata_only` or `customer_enriched` without changing session instrumentation.';\n\n return [\n `Framework: ${framework}`,\n `Provider: ${provider}`,\n `Content tier: ${contentTier}`,\n '',\n 'Now:',\n '1) install @amplitude/ai and @amplitude/analytics-node',\n '2) initialize `AmplitudeAI` with API key',\n `3) ${providerSetup}`,\n '4) bind session context with `const session = ai.agent(...).session(...)` and run calls in `session.run(...)`',\n `5) ${frameworkStep}`,\n '',\n 'Next:',\n `- ${nextForTier}`,\n '- treat `patch({ amplitudeAI: ai })` as migration quickstart, not steady-state production.',\n '',\n 'Why:',\n '- session lineage unlocks scoring, enrichments, and reliable product-to-AI funnels.',\n '- wrapper/swap integration preserves fidelity while keeping implementation effort low.',\n `- ${tierGuidance}`,\n '- privacy controls apply before events leave your process.',\n '',\n 'Validate:',\n '- run `amplitude-ai doctor` and verify [Agent] session-scoped events.',\n ];\n};\n\nconst headingPriority = (heading: string): number => {\n const normalized = heading.toLowerCase();\n if (\n normalized.includes('choose your integration tier') ||\n normalized.includes('privacy & content control')\n ) {\n return 3;\n }\n if (\n normalized.includes('boundagent') ||\n normalized.includes('bound agent') ||\n normalized.includes('session')\n ) {\n return 2;\n }\n return 1;\n};\n\nconst createServer = (): McpServer => {\n const server = new McpServer({\n name: MCP_SERVER_NAME,\n version: '0.1.0',\n });\n\n server.registerTool(\n MCP_TOOLS.getEventSchema,\n {\n title: 'Get Event Schema',\n description:\n 'Return Amplitude AI event schema and event-property definitions.',\n inputSchema: {\n event_type: z.string().optional(),\n },\n },\n // biome-ignore lint/suspicious/noExplicitAny: SDK callback type is 
intentionally broad.\n async (args: any) => {\n const eventType =\n typeof args?.event_type === 'string' ? args.event_type : undefined;\n const catalog = readEventCatalog();\n const events = catalog.events ?? [];\n const selected = eventType\n ? events.filter((event) => event.event_type === eventType)\n : events;\n\n return {\n content: [\n {\n type: 'text',\n text: JSON.stringify(\n {\n total: selected.length,\n events: selected,\n },\n null,\n 2,\n ),\n },\n ],\n };\n },\n );\n\n server.registerTool(\n MCP_TOOLS.getIntegrationPattern,\n {\n title: 'Get Integration Pattern',\n description:\n 'Return canonical instrumentation patterns for @amplitude/ai.',\n inputSchema: {\n id: z.string().optional(),\n },\n },\n // biome-ignore lint/suspicious/noExplicitAny: SDK callback type is intentionally broad.\n async (args: any) => {\n const id = typeof args?.id === 'string' ? args.id : undefined;\n const patterns = getIntegrationPatterns();\n const selected = id\n ? patterns.filter((pattern) => pattern.id === id)\n : patterns;\n return {\n content: [\n {\n type: 'text',\n text: JSON.stringify({ patterns: selected }, null, 2),\n },\n ],\n };\n },\n );\n\n server.registerTool(\n MCP_TOOLS.validateSetup,\n {\n title: 'Validate Setup',\n description: 'Validate environment variables required for @amplitude/ai.',\n inputSchema: {},\n },\n async () => {\n const required = ['AMPLITUDE_AI_API_KEY'];\n const missing = required.filter((name) => !process.env[name]);\n const status = missing.length === 0 ? 
'ok' : 'missing_env';\n return {\n content: [\n {\n type: 'text',\n text: JSON.stringify(\n {\n status,\n missing_env: missing,\n },\n null,\n 2,\n ),\n },\n ],\n };\n },\n );\n\n server.registerTool(\n MCP_TOOLS.suggestInstrumentation,\n {\n title: 'Suggest Instrumentation',\n description:\n 'Suggest practical next instrumentation steps from minimal hints.',\n inputSchema: {\n framework: z.string().optional(),\n provider: z.string().optional(),\n content_tier: z\n .enum(['full', 'metadata_only', 'customer_enriched'])\n .optional()\n .describe('Desired content tier (default: full)'),\n },\n },\n // biome-ignore lint/suspicious/noExplicitAny: SDK callback type is intentionally broad.\n async (args: any) => {\n const framework =\n typeof args?.framework === 'string' ? args.framework : 'node';\n const provider =\n typeof args?.provider === 'string' ? args.provider : 'openai';\n const contentTier = normalizeContentTier(args?.content_tier);\n const guidance = buildInstrumentationGuidance(\n framework,\n provider,\n contentTier,\n );\n\n return {\n content: [\n {\n type: 'text',\n text: guidance.join('\\n'),\n },\n ],\n };\n },\n );\n\n server.registerTool(\n MCP_TOOLS.validateFile,\n {\n title: 'Validate File Instrumentation',\n description:\n 'Analyze source code to detect LLM call sites and report which are instrumented vs uninstrumented.',\n inputSchema: {\n source: z.string().describe('Source code content to analyze'),\n },\n },\n // biome-ignore lint/suspicious/noExplicitAny: SDK callback type is intentionally broad.\n async (args: any) => {\n const source = typeof args?.source === 'string' ? args.source : '';\n const result = analyzeFileInstrumentation(source);\n return {\n content: [\n {\n type: 'text',\n text: JSON.stringify(result, null, 2),\n },\n ],\n };\n },\n );\n\n server.registerTool(\n MCP_TOOLS.searchDocs,\n {\n title: 'Search Documentation',\n description:\n 'Search the SDK README and API reference for a keyword or phrase. 
Returns matching sections with surrounding context.',\n inputSchema: {\n query: z.string().describe('Keyword or phrase to search for'),\n max_results: z\n .number()\n .optional()\n .describe('Maximum number of results to return (default: 5)'),\n },\n },\n // biome-ignore lint/suspicious/noExplicitAny: SDK callback type is intentionally broad.\n async (args: any) => {\n const query =\n typeof args?.query === 'string' ? args.query.toLowerCase() : '';\n const maxResults =\n typeof args?.max_results === 'number' ? args.max_results : 5;\n\n if (!query) {\n return {\n content: [\n {\n type: 'text',\n text: JSON.stringify({ error: 'query is required' }),\n },\n ],\n };\n }\n\n const sources: Array<{ name: string; path: string }> = [\n { name: 'README.md', path: join(packageRoot, 'README.md') },\n { name: 'llms-full.txt', path: join(packageRoot, 'llms-full.txt') },\n ];\n\n const results: Array<{\n source: string;\n heading: string;\n snippet: string;\n line: number;\n priority: number;\n }> = [];\n\n for (const { name, path } of sources) {\n let content: string;\n try {\n content = readFileSync(path, 'utf8');\n } catch {\n continue;\n }\n const lines = content.split('\\n');\n let currentHeading = '(top)';\n for (let i = 0; i < lines.length; i++) {\n const line = lines[i] ?? 
'';\n if (line.startsWith('#')) {\n currentHeading = line.replace(/^#+\\s*/, '');\n }\n if (line.toLowerCase().includes(query)) {\n const start = Math.max(0, i - 2);\n const end = Math.min(lines.length, i + 3);\n results.push({\n source: name,\n heading: currentHeading,\n snippet: lines.slice(start, end).join('\\n'),\n line: i + 1,\n priority: headingPriority(currentHeading),\n });\n }\n }\n }\n\n results.sort((a, b) => b.priority - a.priority || a.line - b.line);\n const selected = results.slice(0, maxResults);\n\n return {\n content: [\n {\n type: 'text',\n text: JSON.stringify(\n {\n query,\n total: selected.length,\n results: selected.map(({ priority: _priority, ...rest }) => rest),\n },\n null,\n 2,\n ),\n },\n ],\n };\n },\n );\n\n server.registerResource(\n 'event_schema',\n MCP_RESOURCES.eventSchema,\n {\n title: 'Amplitude AI Event Schema',\n description: 'Current event and property catalog for @amplitude/ai.',\n mimeType: 'application/json',\n },\n async () => ({\n contents: [\n {\n uri: MCP_RESOURCES.eventSchema,\n mimeType: 'application/json',\n text: JSON.stringify(readEventCatalog(), null, 2),\n },\n ],\n }),\n );\n\n server.registerResource(\n 'integration_patterns',\n MCP_RESOURCES.integrationPatterns,\n {\n title: 'Amplitude AI Integration Patterns',\n description: 'Canonical setup patterns used by docs and agent workflows.',\n mimeType: 'application/json',\n },\n async () => ({\n contents: [\n {\n uri: MCP_RESOURCES.integrationPatterns,\n mimeType: 'application/json',\n text: JSON.stringify({ patterns: getIntegrationPatterns() }, null, 2),\n },\n ],\n }),\n );\n\n server.registerPrompt(\n MCP_PROMPTS.instrumentApp,\n {\n title: 'Instrument App',\n description:\n 'Guided checklist for instrumenting an app with @amplitude/ai from zero to validated setup.',\n argsSchema: {\n framework: z.string().optional(),\n provider: z.string().optional(),\n },\n },\n // biome-ignore lint/suspicious/noExplicitAny: SDK callback type is intentionally broad.\n async 
(args: any) => {\n const framework =\n typeof args?.framework === 'string' ? args.framework : 'node';\n const provider =\n typeof args?.provider === 'string' ? args.provider : 'openai';\n return {\n messages: [\n {\n role: 'user',\n content: {\n type: 'text',\n text: `Instrument this ${framework} app with ${provider} using @amplitude/ai. Start from the production default: wrapper/swap integration plus ai.agent(...).session(...), choose a content tier (full/metadata_only/customer_enriched) with privacy defaults, and treat patch() only as a migration fallback. Then validate setup and annotate any uninstrumented call sites.`,\n },\n },\n ],\n };\n },\n );\n\n return server;\n};\n\nconst runMcpServer = async (): Promise<void> => {\n const server = createServer();\n const transport = new StdioServerTransport();\n await server.connect(transport);\n};\n\nexport { buildInstrumentationGuidance, createServer, runMcpServer };\n"],"mappings":";;;;;;;;;;;AAyBA,MAAM,cAAc,KAAK,QAAQ,cAAc,OAAO,KAAK,IAAI,CAAC,EAAE,MAAM,KAAK;AAE7E,IAAIA;AAEJ,MAAM,yBAAuC;AAC3C,KAAI,cAAe,QAAO;CAE1B,MAAM,MAAM,aADK,KAAK,aAAa,QAAQ,2BAA2B,EACnC,OAAO;AAC1C,iBAAgB,KAAK,MAAM,IAAI;AAC/B,QAAO;;AAKT,MAAM,wBAAwB,UAAgC;AAC5D,KACE,UAAU,UACV,UAAU,mBACV,UAAU,oBAEV,QAAO;AAET,QAAO;;AAGT,MAAM,gCACJ,WACA,UACA,gBACa;CACb,MAAM,gBACJ,aAAa,YAAY,aAAa,cAClC,OAAO,SAAS,gDAAgD,aAAa,WAAW,WAAW,YAAY,+BAC/G,WAAW,SAAS;CAE1B,MAAM,gBACJ,cAAc,aAAa,cAAc,SAAS,cAAc,YAC5D,uHACA,cAAc,UAAU,cAAc,WACpC,2FACA;CAER,MAAM,eACJ,gBAAgB,SACZ,8IACA,gBAAgB,kBACd,sHACA;CAER,MAAM,cACJ,gBAAgB,kBACZ,8HACA,gBAAgB,sBACd,4HACA;AAER,QAAO;EACL,cAAc;EACd,aAAa;EACb,iBAAiB;EACjB;EACA;EACA;EACA;EACA,MAAM;EACN;EACA,MAAM;EACN;EACA;EACA,KAAK;EACL;EACA;EACA;EACA;EACA;EACA,KAAK;EACL;EACA;EACA;EACA;EACD;;AAGH,MAAM,mBAAmB,YAA4B;CACnD,MAAM,aAAa,QAAQ,aAAa;AACxC,KACE,WAAW,SAAS,+BAA+B,IACnD,WAAW,SAAS,4BAA4B,CAEhD,QAAO;AAET,KACE,WAAW,SAAS,aAAa,IACjC,WAAW,SAAS,cAAc,IAClC,WAAW,SAAS,UAAU,CAE9B,QAAO;AAET,QAAO;;AAGT,MAAM,qBAAgC;CACpC,MAAM,SAAS,IAAI,UAAU;EAC3B,MAAM;EACN,SAAS;EACV,CAAC;
AAEF,QAAO,aACL,UAAU,gBACV;EACE,OAAO;EACP,aACE;EACF,aAAa,EACX,YAAYC,YAAU,CAAC,UAAU,EAClC;EACF,EAED,OAAO,SAAc;EACnB,MAAM,YACJ,OAAO,MAAM,eAAe,WAAW,KAAK,aAAa;EAE3D,MAAM,SADU,kBAAkB,CACX,UAAU,EAAE;EACnC,MAAM,WAAW,YACb,OAAO,QAAQ,UAAU,MAAM,eAAe,UAAU,GACxD;AAEJ,SAAO,EACL,SAAS,CACP;GACE,MAAM;GACN,MAAM,KAAK,UACT;IACE,OAAO,SAAS;IAChB,QAAQ;IACT,EACD,MACA,EACD;GACF,CACF,EACF;GAEJ;AAED,QAAO,aACL,UAAU,uBACV;EACE,OAAO;EACP,aACE;EACF,aAAa,EACX,IAAIA,YAAU,CAAC,UAAU,EAC1B;EACF,EAED,OAAO,SAAc;EACnB,MAAM,KAAK,OAAO,MAAM,OAAO,WAAW,KAAK,KAAK;EACpD,MAAM,WAAW,wBAAwB;EACzC,MAAM,WAAW,KACb,SAAS,QAAQ,YAAY,QAAQ,OAAO,GAAG,GAC/C;AACJ,SAAO,EACL,SAAS,CACP;GACE,MAAM;GACN,MAAM,KAAK,UAAU,EAAE,UAAU,UAAU,EAAE,MAAM,EAAE;GACtD,CACF,EACF;GAEJ;AAED,QAAO,aACL,UAAU,eACV;EACE,OAAO;EACP,aAAa;EACb,aAAa,EAAE;EAChB,EACD,YAAY;EAEV,MAAM,UADW,CAAC,uBAAuB,CAChB,QAAQ,SAAS,CAAC,QAAQ,IAAI,MAAM;EAC7D,MAAM,SAAS,QAAQ,WAAW,IAAI,OAAO;AAC7C,SAAO,EACL,SAAS,CACP;GACE,MAAM;GACN,MAAM,KAAK,UACT;IACE;IACA,aAAa;IACd,EACD,MACA,EACD;GACF,CACF,EACF;GAEJ;AAED,QAAO,aACL,UAAU,wBACV;EACE,OAAO;EACP,aACE;EACF,aAAa;GACX,WAAWA,YAAU,CAAC,UAAU;GAChC,UAAUA,YAAU,CAAC,UAAU;GAC/B,cAAcC,SACN;IAAC;IAAQ;IAAiB;IAAoB,CAAC,CACpD,UAAU,CACV,SAAS,uCAAuC;GACpD;EACF,EAED,OAAO,SAAc;AAYnB,SAAO,EACL,SAAS,CACP;GACE,MAAM;GACN,MAVW,6BAJf,OAAO,MAAM,cAAc,WAAW,KAAK,YAAY,QAEvD,OAAO,MAAM,aAAa,WAAW,KAAK,WAAW,UACnC,qBAAqB,MAAM,aAAa,CAK3D,CAMoB,KAAK,KAAK;GAC1B,CACF,EACF;GAEJ;AAED,QAAO,aACL,UAAU,cACV;EACE,OAAO;EACP,aACE;EACF,aAAa,EACX,QAAQD,YAAU,CAAC,SAAS,iCAAiC,EAC9D;EACF,EAED,OAAO,SAAc;EAEnB,MAAM,SAAS,2BADA,OAAO,MAAM,WAAW,WAAW,KAAK,SAAS,GACf;AACjD,SAAO,EACL,SAAS,CACP;GACE,MAAM;GACN,MAAM,KAAK,UAAU,QAAQ,MAAM,EAAE;GACtC,CACF,EACF;GAEJ;AAED,QAAO,aACL,UAAU,YACV;EACE,OAAO;EACP,aACE;EACF,aAAa;GACX,OAAOA,YAAU,CAAC,SAAS,kCAAkC;GAC7D,aAAaE,YACF,CACR,UAAU,CACV,SAAS,mDAAmD;GAChE;EACF,EAED,OAAO,SAAc;EACnB,MAAM,QACJ,OAAO,MAAM,UAAU,WAAW,KAAK,MAAM,aAAa,GAAG;EAC/D,MAAM,aACJ,OAAO,MAAM,gBAAgB,WAAW,KAAK,cAAc;AAE7D,MAAI,CAAC,MACH,QAAO,EACL,SAAS,CACP;GACE,MAAM;GACN,MAAM,KAAK,UAAU,EAAE,OAAO,qBAAqB,
CAAC;GACrD,CACF,EACF;EAGH,MAAMC,UAAiD,CACrD;GAAE,MAAM;GAAa,MAAM,KAAK,aAAa,YAAY;GAAE,EAC3D;GAAE,MAAM;GAAiB,MAAM,KAAK,aAAa,gBAAgB;GAAE,CACpE;EAED,MAAMC,UAMD,EAAE;AAEP,OAAK,MAAM,EAAE,MAAM,UAAU,SAAS;GACpC,IAAIC;AACJ,OAAI;AACF,cAAU,aAAa,MAAM,OAAO;WAC9B;AACN;;GAEF,MAAM,QAAQ,QAAQ,MAAM,KAAK;GACjC,IAAI,iBAAiB;AACrB,QAAK,IAAI,IAAI,GAAG,IAAI,MAAM,QAAQ,KAAK;IACrC,MAAM,OAAO,MAAM,MAAM;AACzB,QAAI,KAAK,WAAW,IAAI,CACtB,kBAAiB,KAAK,QAAQ,UAAU,GAAG;AAE7C,QAAI,KAAK,aAAa,CAAC,SAAS,MAAM,EAAE;KACtC,MAAM,QAAQ,KAAK,IAAI,GAAG,IAAI,EAAE;KAChC,MAAM,MAAM,KAAK,IAAI,MAAM,QAAQ,IAAI,EAAE;AACzC,aAAQ,KAAK;MACX,QAAQ;MACR,SAAS;MACT,SAAS,MAAM,MAAM,OAAO,IAAI,CAAC,KAAK,KAAK;MAC3C,MAAM,IAAI;MACV,UAAU,gBAAgB,eAAe;MAC1C,CAAC;;;;AAKR,UAAQ,MAAM,GAAG,MAAM,EAAE,WAAW,EAAE,YAAY,EAAE,OAAO,EAAE,KAAK;EAClE,MAAM,WAAW,QAAQ,MAAM,GAAG,WAAW;AAE7C,SAAO,EACL,SAAS,CACP;GACE,MAAM;GACN,MAAM,KAAK,UACT;IACE;IACA,OAAO,SAAS;IAChB,SAAS,SAAS,KAAK,EAAE,UAAU,WAAW,GAAG,WAAW,KAAK;IAClE,EACD,MACA,EACD;GACF,CACF,EACF;GAEJ;AAED,QAAO,iBACL,gBACA,cAAc,aACd;EACE,OAAO;EACP,aAAa;EACb,UAAU;EACX,EACD,aAAa,EACX,UAAU,CACR;EACE,KAAK,cAAc;EACnB,UAAU;EACV,MAAM,KAAK,UAAU,kBAAkB,EAAE,MAAM,EAAE;EAClD,CACF,EACF,EACF;AAED,QAAO,iBACL,wBACA,cAAc,qBACd;EACE,OAAO;EACP,aAAa;EACb,UAAU;EACX,EACD,aAAa,EACX,UAAU,CACR;EACE,KAAK,cAAc;EACnB,UAAU;EACV,MAAM,KAAK,UAAU,EAAE,UAAU,wBAAwB,EAAE,EAAE,MAAM,EAAE;EACtE,CACF,EACF,EACF;AAED,QAAO,eACL,YAAY,eACZ;EACE,OAAO;EACP,aACE;EACF,YAAY;GACV,WAAWL,YAAU,CAAC,UAAU;GAChC,UAAUA,YAAU,CAAC,UAAU;GAChC;EACF,EAED,OAAO,SAAc;AAKnB,SAAO,EACL,UAAU,CACR;GACE,MAAM;GACN,SAAS;IACP,MAAM;IACN,MAAM,mBATZ,OAAO,MAAM,cAAc,WAAW,KAAK,YAAY,OASd,YAPzC,OAAO,MAAM,aAAa,WAAW,KAAK,WAAW,SAOS;IACzD;GACF,CACF,EACF;GAEJ;AAED,QAAO;;AAGT,MAAM,eAAe,YAA2B;CAC9C,MAAM,SAAS,cAAc;CAC7B,MAAM,YAAY,IAAI,sBAAsB;AAC5C,OAAM,OAAO,QAAQ,UAAU"}
@@ -1 +1 @@
1
- {"version":3,"file":"patching.d.ts","names":[],"sources":["../src/patching.ts"],"sourcesContent":[],"mappings":";;;;AAgLgB,iBAjJA,gBAAA,CAAA,CAkJU,EAAA,MAAA,EAAA;AAuHV,iBApNA,WAAA,CAqND,OAAW,EAAA;EA6CV,WAAA,EAjQD,WAiQa;EAgGZ,gBAAK,CAAA,EAAA,OACN;EAiEC,MAAA,CAAA,EAAA,OAAO;AAUvB,CAAA,CAAA,EAAgB,IAAA;AAIA,iBA7ZA,cAAA,CA6ZgB,OAAA,EAAA;EAIhB,WAAA,EAhaD,WAgac;EAIb,MAAA,CAAA,EAAA,OAAA;AAIhB,CAAA,CAAA,EAAgB,IAAA;AAIA,iBAhYA,gBAAA,CAgYkB,OAAA,EAAA;eA/XnB;;;iBAyBC,WAAA;eACD;;;iBAuHC,YAAA;eACD;;;iBA6CC,YAAA;eACD;;;;;;;;;;;;;;;;;;iBA+FC,KAAA;eACD;YACH;;iBAgEI,OAAA,CAAA;iBAUA,aAAA,CAAA;iBAIA,gBAAA,CAAA;iBAIA,aAAA,CAAA;iBAIA,cAAA,CAAA;iBAIA,cAAA,CAAA;iBAIA,kBAAA,CAAA"}
1
+ {"version":3,"file":"patching.d.ts","names":[],"sources":["../src/patching.ts"],"sourcesContent":[],"mappings":";;;;AAgLgB,iBAjJA,gBAAA,CAAA,CAkJD,EAAW,MAAA,EAAA;AAuHV,iBApNA,WAAA,CAqND,OAAA,EAAW;EA6CV,WAAA,EAjQD,WAiQa;EAgGZ,gBAAK,CAAA,EAAA,OACN;EAiEC,MAAA,CAAA,EAAA,OAAO;AAUvB,CAAA,CAAA,EAAgB,IAAA;AAIA,iBA7ZA,cAAA,CA6ZgB,OAAA,EAAA;EAIhB,WAAA,EAhaD,WAgac;EAIb,MAAA,CAAA,EAAA,OAAA;AAIhB,CAAA,CAAA,EAAgB,IAAA;AAIA,iBAhYA,gBAAA,CAgYkB,OAAA,EAAA;eA/XnB;;;iBAyBC,WAAA;eACD;;;iBAuHC,YAAA;eACD;;;iBA6CC,YAAA;eACD;;;;;;;;;;;;;;;;;;iBA+FC,KAAA;eACD;YACH;;iBAgEI,OAAA,CAAA;iBAUA,aAAA,CAAA;iBAIA,gBAAA,CAAA;iBAIA,aAAA,CAAA;iBAIA,cAAA,CAAA;iBAIA,cAAA,CAAA;iBAIA,kBAAA,CAAA"}
@@ -1 +1 @@
1
- {"version":3,"file":"anthropic.d.ts","names":[],"sources":["../../src/providers/anthropic.ts"],"sourcesContent":[],"mappings":";;;;;;AAyBM,cADO,mBACW,EAAM,OAAA;AAI9B,cAJM,gBAI2B,EAJT,MAKX,CAAA,MAAA,EAAA,OAEK,CAAA,GAAA,IAAA;AAOA,UAVD,gBAAA,CAUC;EAA0B,SAAA,EAT/B,aAS+B;EAGvB,MAAA,CAAA,EAAA,MAAA;EAGE,aAAA,CAAA,EAbL,aAaK,GAAA,IAAA;EAmCP,gBAAA,CAAA,EAAA,OAAA;EAxCN;EAAc,eAAA,CAAA,EAAA,OAAA;AA6CxB;AACa,cAhDA,SAgDA,CAAA,gBA/CK,MA+CL,CAAA,MAAA,EAAA,OAAA,CAAA,GA/C+B,MA+C/B,CAAA,MAAA,EAAA,OAAA,CAAA,CAAA,SA9CH,cAAA,CA8CG;EAQA,QAAA,OAAA;EACE,SAAA,QAAA,EArDM,eAqDN;EACI,QAAA,iBAAA;EAeP,WAAA,CAAA,OAAA,EAlEW,gBAkEX;EACa,IAAA,MAAA,CAAA,CAAA,EAhCT,OAgCS;;AACQ,cA5BpB,eAAA,CA4BoB;EAA5B,SAAA,EA3BQ,MA2BR,CAAA,MAAA,EAAA,OAAA,CAAA;EAAO,QAAA,QAAA;EA+SI,QAAA,UAAA;EAmBA,QAAA,cAAA;EACC,QAAA,iBAAA;EAAN,WAAA,CAAA,MAAA,EAAA,OAAA,EAAA,OAAA,EAtVE,OAsVF,EAAA,SAAA,EArVI,aAqVJ,EAAA,aAAA,EApVQ,aAoVR,GAAA,IAAA,EAAA,gBAAA,EAAA,OAAA;EAIQ,MAAA,CAAA,MAAA,EAzUP,MAyUO,CAAA,MAAA,EAAA,OAAA,CAAA,EAAA,kBAAA,CAAA,EAxUM,oBAwUN,CAAA,EAvUd,OAuUc,CAvUN,iBAuUM,GAvUc,aAuUd,CAAA,OAAA,CAAA,CAAA;EAAN,QAAA,WAAA;EAAK,QAAA,mBAAA;;;iBAxBF,4BAAA;iBAmBA,uBAAA,UACL,MAAM;;;aAIJ,MAAM"}
1
+ {"version":3,"file":"anthropic.d.ts","names":[],"sources":["../../src/providers/anthropic.ts"],"sourcesContent":[],"mappings":";;;;;;AAwBM,cADO,mBACW,EAAA,OAAM;AAI9B,cAJM,gBAI2B,EAJT,MAKX,CAAA,MAAA,EAAA,OAEK,CAAA,GAAA,IAAA;AAOA,UAVD,gBAAA,CAUC;EAA0B,SAAA,EAT/B,aAS+B;EAGvB,MAAA,CAAA,EAAA,MAAA;EAGE,aAAA,CAAA,EAbL,aAaK,GAAA,IAAA;EAmCP,gBAAA,CAAA,EAAA,OAAA;EAxCN;EAAc,eAAA,CAAA,EAAA,OAAA;AA6CxB;AACa,cAhDA,SAgDA,CAAA,gBA/CK,MA+CL,CAAA,MAAA,EAAA,OAAA,CAAA,GA/C+B,MA+C/B,CAAA,MAAA,EAAA,OAAA,CAAA,CAAA,SA9CH,cAAA,CA8CG;EAQA,QAAA,OAAA;EACE,SAAA,QAAA,EArDM,eAqDN;EACI,QAAA,iBAAA;EAeP,WAAA,CAAA,OAAA,EAlEW,gBAkEX;EACa,IAAA,MAAA,CAAA,CAAA,EAhCT,OAgCS;;AACQ,cA5BpB,eAAA,CA4BoB;EAA5B,SAAA,EA3BQ,MA2BR,CAAA,MAAA,EAAA,OAAA,CAAA;EAAO,QAAA,QAAA;EA+SI,QAAA,UAAA;EAmBA,QAAA,cAAA;EACC,QAAA,iBAAA;EAAN,WAAA,CAAA,MAAA,EAAA,OAAA,EAAA,OAAA,EAtVE,OAsVF,EAAA,SAAA,EArVI,aAqVJ,EAAA,aAAA,EApVQ,aAoVR,GAAA,IAAA,EAAA,gBAAA,EAAA,OAAA;EAIQ,MAAA,CAAA,MAAA,EAzUP,MAyUO,CAAA,MAAA,EAAA,OAAA,CAAA,EAAA,kBAAA,CAAA,EAxUM,oBAwUN,CAAA,EAvUd,OAuUc,CAvUN,iBAuUM,GAvUc,aAuUd,CAAA,OAAA,CAAA,CAAA;EAAN,QAAA,WAAA;EAAK,QAAA,mBAAA;;;iBAxBF,4BAAA;iBAmBA,uBAAA,UACL,MAAM;;;aAIJ,MAAM"}
@@ -1 +1 @@
1
- {"version":3,"file":"anthropic.js","names":["_AnthropicModule: Record<string, unknown> | null","clientOpts: Record<string, unknown>","costUsd: number | null","reasoning: string | undefined","toolCalls: Array<Record<string, unknown>>"],"sources":["../../src/providers/anthropic.ts"],"sourcesContent":["/**\n * Anthropic provider wrapper with automatic tracking.\n */\n\nimport type { PrivacyConfig } from '../core/privacy.js';\nimport { trackUserMessage } from '../core/tracking.js';\nimport { getDefaultPropagateContext, injectContext } from '../propagation.js';\nimport {\n resolveAmplitude,\n type AmplitudeLike,\n type AmplitudeOrAI,\n type AnthropicResponse,\n type TrackFn,\n} from '../types.js';\nimport { calculateCost } from '../utils/costs.js';\nimport { tryRequire } from '../utils/resolve-module.js';\nimport { StreamingAccumulator } from '../utils/streaming.js';\nimport {\n applySessionContext,\n BaseAIProvider,\n type ProviderTrackOptions,\n} from './base.js';\n\nconst _resolved = tryRequire('@anthropic-ai/sdk');\nexport const ANTHROPIC_AVAILABLE = _resolved != null;\nconst _AnthropicModule: Record<string, unknown> | null = _resolved;\n\nexport { _AnthropicModule };\n\nexport interface AnthropicOptions {\n amplitude: AmplitudeOrAI;\n apiKey?: string;\n privacyConfig?: PrivacyConfig | null;\n propagateContext?: boolean;\n /** Pass the `@anthropic-ai/sdk` module directly to bypass `tryRequire` (required in bundler environments). 
*/\n anthropicModule?: unknown;\n}\n\nexport class Anthropic<\n TClient extends Record<string, unknown> = Record<string, unknown>,\n> extends BaseAIProvider {\n private _client: TClient;\n readonly messages: WrappedMessages;\n private _propagateContext: boolean;\n\n constructor(options: AnthropicOptions) {\n super({\n amplitude: options.amplitude,\n privacyConfig: options.privacyConfig,\n providerName: 'anthropic',\n });\n\n const mod =\n (options.anthropicModule as Record<string, unknown> | null) ??\n _AnthropicModule;\n if (mod == null) {\n throw new Error(\n '@anthropic-ai/sdk package is required. Install it with: npm install @anthropic-ai/sdk — or pass the module directly via the anthropicModule option.',\n );\n }\n\n const AnthropicSDK = mod.Anthropic as new (\n opts: Record<string, unknown>,\n ) => unknown;\n\n const clientOpts: Record<string, unknown> = {};\n if (options.apiKey) clientOpts.apiKey = options.apiKey;\n\n this._client = new AnthropicSDK(clientOpts) as TClient;\n this._propagateContext =\n options.propagateContext ?? getDefaultPropagateContext();\n this.messages = new WrappedMessages(\n this._client,\n this.trackFn(),\n this._amplitude,\n this._privacyConfig,\n this._propagateContext,\n );\n }\n\n get client(): TClient {\n return this._client;\n }\n}\n\nexport class WrappedMessages {\n _original: Record<string, unknown>;\n private _trackFn: TrackFn;\n private _amplitude: AmplitudeLike;\n private _privacyConfig: PrivacyConfig | null;\n private _propagateContext: boolean;\n\n constructor(\n client: unknown,\n trackFn: TrackFn,\n amplitude: AmplitudeLike,\n privacyConfig: PrivacyConfig | null,\n propagateContext: boolean,\n ) {\n const clientObj = client as Record<string, unknown>;\n this._original = clientObj.messages as Record<string, unknown>;\n this._trackFn =\n typeof trackFn === 'function'\n ? 
trackFn\n : (trackFn as unknown as { trackFn(): TrackFn }).trackFn();\n this._amplitude = amplitude;\n this._privacyConfig = privacyConfig;\n this._propagateContext = propagateContext;\n }\n\n async create(\n params: Record<string, unknown>,\n amplitudeOverrides?: ProviderTrackOptions,\n ): Promise<AnthropicResponse | AsyncIterable<unknown>> {\n const createFn = this._original.create as (\n ...args: unknown[]\n ) => Promise<unknown>;\n const startTime = performance.now();\n const requestParams = this._withContextHeaders(params);\n const ctx = applySessionContext(amplitudeOverrides);\n\n try {\n this._trackInputMessages(\n requestParams.messages as unknown,\n ctx,\n amplitudeOverrides?.trackInputMessages ?? true,\n );\n const response = await createFn.call(this._original, requestParams);\n\n if (requestParams.stream === true && _isAsyncIterable(response)) {\n return this._wrapStream(\n response as AsyncIterable<unknown>,\n requestParams,\n startTime,\n ctx,\n );\n }\n\n const latencyMs = performance.now() - startTime;\n\n const resp = response as AnthropicResponse;\n const usage = resp.usage;\n const extracted = extractAnthropicContent(\n resp.content as unknown as Array<Record<string, unknown>> | undefined,\n );\n const firstTextBlock = resp.content?.find((b) => b.type === 'text');\n const modelName = String(resp.model ?? requestParams.model ?? 'unknown');\n\n const cacheRead = usage?.cache_read_input_tokens ?? 0;\n const cacheCreation = usage?.cache_creation_input_tokens ?? 0;\n const rawInput = usage?.input_tokens ?? 0;\n const normalizedInput =\n cacheRead || cacheCreation\n ? 
rawInput + cacheRead + cacheCreation\n : rawInput;\n\n let costUsd: number | null = null;\n if (usage?.input_tokens != null && usage?.output_tokens != null) {\n try {\n costUsd = calculateCost({\n modelName,\n inputTokens: normalizedInput,\n outputTokens: usage.output_tokens,\n cacheReadInputTokens: cacheRead,\n cacheCreationInputTokens: cacheCreation,\n });\n } catch {\n // cost calculation is best-effort\n }\n }\n\n this._trackFn({\n userId: ctx.userId ?? 'unknown',\n modelName,\n provider: 'anthropic',\n responseContent: String(firstTextBlock?.text ?? ''),\n reasoningContent: extracted.reasoning,\n latencyMs,\n sessionId: ctx.sessionId,\n traceId: ctx.traceId,\n turnId: ctx.turnId ?? undefined,\n agentId: ctx.agentId,\n env: ctx.env,\n inputTokens: normalizedInput || undefined,\n outputTokens: usage?.output_tokens,\n cacheReadInputTokens: cacheRead || undefined,\n cacheCreationInputTokens: cacheCreation || undefined,\n totalCostUsd: costUsd,\n finishReason: resp.stop_reason,\n toolCalls:\n extracted.toolCalls.length > 0 ? extracted.toolCalls : undefined,\n isStreaming: false,\n systemPrompt: extractAnthropicSystemPrompt(requestParams.system),\n temperature: requestParams.temperature as number | undefined,\n maxOutputTokens: requestParams.max_tokens as number | undefined,\n topP: requestParams.top_p as number | undefined,\n });\n\n return response as AnthropicResponse;\n } catch (error) {\n const latencyMs = performance.now() - startTime;\n this._trackFn({\n userId: ctx.userId ?? 'unknown',\n modelName: String(requestParams.model ?? 'unknown'),\n provider: 'anthropic',\n responseContent: '',\n latencyMs,\n sessionId: ctx.sessionId,\n traceId: ctx.traceId,\n agentId: ctx.agentId,\n env: ctx.env,\n isError: true,\n errorMessage: error instanceof Error ? 
error.message : String(error),\n });\n\n throw error;\n }\n }\n\n private async *_wrapStream(\n stream: AsyncIterable<unknown>,\n params: Record<string, unknown>,\n _startTime: number,\n sessionCtx: ProviderTrackOptions,\n ): AsyncGenerator<unknown> {\n const accumulator = new StreamingAccumulator();\n accumulator.model = String(params.model ?? 'unknown');\n let reasoningContent = '';\n\n try {\n for await (const event of stream) {\n const evt = event as Record<string, unknown>;\n const type = evt.type as string | undefined;\n\n if (type === 'content_block_delta') {\n const delta = evt.delta as Record<string, unknown> | undefined;\n if (delta?.type === 'text_delta' && delta.text != null) {\n accumulator.addContent(String(delta.text));\n } else if (\n delta?.type === 'thinking_delta' &&\n delta.thinking != null\n ) {\n reasoningContent += String(delta.thinking);\n }\n } else if (type === 'content_block_start') {\n const block = evt.content_block as\n | Record<string, unknown>\n | undefined;\n if (block?.type === 'tool_use') {\n accumulator.addToolCall({\n type: 'function',\n id: block.id,\n function: {\n name: String(block.name ?? ''),\n arguments:\n typeof block.input === 'string'\n ? block.input\n : JSON.stringify(block.input ?? 
{}),\n },\n });\n }\n } else if (type === 'message_delta') {\n const delta = evt.delta as Record<string, unknown> | undefined;\n if (delta?.stop_reason != null) {\n accumulator.finishReason = String(delta.stop_reason);\n }\n const usage = evt.usage as Record<string, number> | undefined;\n if (usage != null) {\n accumulator.setUsage({\n outputTokens: usage.output_tokens,\n });\n }\n } else if (type === 'message_start') {\n const message = evt.message as Record<string, unknown> | undefined;\n if (message?.model != null) {\n accumulator.model = String(message.model);\n }\n const usage = message?.usage as Record<string, number> | undefined;\n if (usage != null) {\n accumulator.setUsage({\n inputTokens: usage.input_tokens,\n cacheReadTokens: usage.cache_read_input_tokens,\n cacheCreationTokens: usage.cache_creation_input_tokens,\n });\n }\n }\n\n yield event;\n }\n } catch (error) {\n accumulator.setError(\n error instanceof Error ? error.message : String(error),\n );\n throw error;\n } finally {\n const state = accumulator.getState();\n const modelName = String(accumulator.model ?? params.model ?? 'unknown');\n\n const streamCacheRead = state.cacheReadTokens ?? 0;\n const streamCacheCreation = state.cacheCreationTokens ?? 0;\n const streamRawInput = state.inputTokens ?? 0;\n const streamNormalizedInput =\n streamCacheRead || streamCacheCreation\n ? streamRawInput + streamCacheRead + streamCacheCreation\n : streamRawInput;\n\n let costUsd: number | null = null;\n if (state.inputTokens != null && state.outputTokens != null) {\n try {\n costUsd = calculateCost({\n modelName,\n inputTokens: streamNormalizedInput,\n outputTokens: state.outputTokens,\n cacheReadInputTokens: streamCacheRead,\n cacheCreationInputTokens: streamCacheCreation,\n });\n } catch {\n // cost calculation is best-effort\n }\n }\n\n this._trackFn({\n userId: sessionCtx.userId ?? 
'unknown',\n modelName,\n provider: 'anthropic',\n responseContent: state.content,\n latencyMs: accumulator.elapsedMs,\n sessionId: sessionCtx.sessionId,\n traceId: sessionCtx.traceId,\n turnId: sessionCtx.turnId ?? undefined,\n agentId: sessionCtx.agentId,\n env: sessionCtx.env,\n inputTokens: streamNormalizedInput || undefined,\n outputTokens: state.outputTokens,\n cacheReadInputTokens: streamCacheRead || undefined,\n cacheCreationInputTokens: streamCacheCreation || undefined,\n totalCostUsd: costUsd,\n finishReason: state.finishReason,\n toolCalls: state.toolCalls.length > 0 ? state.toolCalls : undefined,\n providerTtfbMs: state.ttfbMs,\n isStreaming: true,\n isError: state.isError,\n errorMessage: state.errorMessage,\n reasoningContent: reasoningContent || undefined,\n systemPrompt: extractAnthropicSystemPrompt(params.system),\n temperature: params.temperature as number | undefined,\n maxOutputTokens: params.max_tokens as number | undefined,\n topP: params.top_p as number | undefined,\n });\n }\n }\n\n private _withContextHeaders(\n params: Record<string, unknown>,\n ): Record<string, unknown> {\n if (!this._propagateContext) return params;\n const existing = (params.extra_headers ?? params.headers) as\n | Record<string, string>\n | undefined;\n const injected = injectContext(existing);\n return {\n ...params,\n extra_headers: injected,\n };\n }\n\n private _trackInputMessages(\n messages: unknown,\n ctx: ProviderTrackOptions,\n shouldTrackInputMessages: boolean,\n ): void {\n if (!shouldTrackInputMessages) return;\n if (ctx.userId == null || ctx.sessionId == null) return;\n if (!Array.isArray(messages)) return;\n for (const msg of messages) {\n const role = (msg as Record<string, unknown>)?.role;\n if (role !== 'user') continue;\n const rawContent = (msg as Record<string, unknown>)?.content;\n const content = Array.isArray(rawContent)\n ? 
rawContent\n .map((part) => {\n if (typeof part === 'string') return part;\n const text = (part as Record<string, unknown>)?.text;\n return typeof text === 'string' ? text : '';\n })\n .join('')\n : typeof rawContent === 'string'\n ? rawContent\n : '';\n if (!content) continue;\n trackUserMessage({\n amplitude: this._amplitude,\n userId: ctx.userId,\n messageContent: content,\n sessionId: ctx.sessionId,\n traceId: ctx.traceId,\n turnId: ctx.turnId ?? undefined,\n agentId: ctx.agentId,\n parentAgentId: ctx.parentAgentId,\n customerOrgId: ctx.customerOrgId,\n env: ctx.env,\n groups: ctx.groups,\n privacyConfig: this._privacyConfig,\n });\n }\n }\n}\n\nfunction _isAsyncIterable(value: unknown): value is AsyncIterable<unknown> {\n return (\n value != null &&\n typeof (value as Record<symbol, unknown>)[Symbol.asyncIterator] ===\n 'function'\n );\n}\n\nexport function extractAnthropicSystemPrompt(\n system: unknown,\n): string | undefined {\n if (typeof system === 'string') return system;\n if (Array.isArray(system)) {\n return system\n .map((block) => {\n if (typeof block === 'string') return block;\n if (typeof block === 'object' && block != null) {\n return String((block as Record<string, unknown>).text ?? '');\n }\n return '';\n })\n .filter(Boolean)\n .join('\\n');\n }\n return undefined;\n}\n\nexport function extractAnthropicContent(\n content: Array<Record<string, unknown>> | undefined,\n): {\n text: string;\n reasoning: string | undefined;\n toolCalls: Array<Record<string, unknown>>;\n} {\n let text = '';\n let reasoning: string | undefined;\n const toolCalls: Array<Record<string, unknown>> = [];\n\n if (!content) return { text, reasoning, toolCalls };\n\n for (const block of content) {\n if (block.type === 'text') {\n text += String(block.text ?? '');\n } else if (block.type === 'thinking') {\n reasoning = String(block.thinking ?? 
'');\n } else if (block.type === 'tool_use') {\n toolCalls.push({\n type: 'function',\n id: block.id,\n function: {\n name: String(block.name ?? ''),\n arguments:\n typeof block.input === 'string'\n ? block.input\n : JSON.stringify(block.input ?? {}),\n },\n });\n }\n }\n\n return { text, reasoning, toolCalls };\n}\n"],"mappings":";;;;;;;;AAuBA,MAAM,YAAY,WAAW,oBAAoB;AACjD,MAAa,sBAAsB,aAAa;AAChD,MAAMA,mBAAmD;AAazD,IAAa,YAAb,cAEU,eAAe;CACvB,AAAQ;CACR,AAAS;CACT,AAAQ;CAER,YAAY,SAA2B;AACrC,QAAM;GACJ,WAAW,QAAQ;GACnB,eAAe,QAAQ;GACvB,cAAc;GACf,CAAC;EAEF,MAAM,MACH,QAAQ,mBACT;AACF,MAAI,OAAO,KACT,OAAM,IAAI,MACR,sJACD;EAGH,MAAM,eAAe,IAAI;EAIzB,MAAMC,aAAsC,EAAE;AAC9C,MAAI,QAAQ,OAAQ,YAAW,SAAS,QAAQ;AAEhD,OAAK,UAAU,IAAI,aAAa,WAAW;AAC3C,OAAK,oBACH,QAAQ,oBAAoB,4BAA4B;AAC1D,OAAK,WAAW,IAAI,gBAClB,KAAK,SACL,KAAK,SAAS,EACd,KAAK,YACL,KAAK,gBACL,KAAK,kBACN;;CAGH,IAAI,SAAkB;AACpB,SAAO,KAAK;;;AAIhB,IAAa,kBAAb,MAA6B;CAC3B;CACA,AAAQ;CACR,AAAQ;CACR,AAAQ;CACR,AAAQ;CAER,YACE,QACA,SACA,WACA,eACA,kBACA;AAEA,OAAK,YADa,OACS;AAC3B,OAAK,WACH,OAAO,YAAY,aACf,UACC,QAA8C,SAAS;AAC9D,OAAK,aAAa;AAClB,OAAK,iBAAiB;AACtB,OAAK,oBAAoB;;CAG3B,MAAM,OACJ,QACA,oBACqD;EACrD,MAAM,WAAW,KAAK,UAAU;EAGhC,MAAM,YAAY,YAAY,KAAK;EACnC,MAAM,gBAAgB,KAAK,oBAAoB,OAAO;EACtD,MAAM,MAAM,oBAAoB,mBAAmB;AAEnD,MAAI;AACF,QAAK,oBACH,cAAc,UACd,KACA,oBAAoB,sBAAsB,KAC3C;GACD,MAAM,WAAW,MAAM,SAAS,KAAK,KAAK,WAAW,cAAc;AAEnE,OAAI,cAAc,WAAW,QAAQ,iBAAiB,SAAS,CAC7D,QAAO,KAAK,YACV,UACA,eACA,WACA,IACD;GAGH,MAAM,YAAY,YAAY,KAAK,GAAG;GAEtC,MAAM,OAAO;GACb,MAAM,QAAQ,KAAK;GACnB,MAAM,YAAY,wBAChB,KAAK,QACN;GACD,MAAM,iBAAiB,KAAK,SAAS,MAAM,MAAM,EAAE,SAAS,OAAO;GACnE,MAAM,YAAY,OAAO,KAAK,SAAS,cAAc,SAAS,UAAU;GAExE,MAAM,YAAY,OAAO,2BAA2B;GACpD,MAAM,gBAAgB,OAAO,+BAA+B;GAC5D,MAAM,WAAW,OAAO,gBAAgB;GACxC,MAAM,kBACJ,aAAa,gBACT,WAAW,YAAY,gBACvB;GAEN,IAAIC,UAAyB;AAC7B,OAAI,OAAO,gBAAgB,QAAQ,OAAO,iBAAiB,KACzD,KAAI;AACF,cAAU,cAAc;KACtB;KACA,aAAa;KACb,cAAc,MAAM;KACpB,sBAAsB;KACtB,0BAA0B;KAC3B,CAAC;WACI;AAKV,QAAK,SAAS;IACZ,QAAQ,IAAI,UAAU;IACtB;IACA,UAAU;IACV,iBAAiB,OAAO,gBAAgB,QAAQ,
GAAG;IACnD,kBAAkB,UAAU;IAC5B;IACA,WAAW,IAAI;IACf,SAAS,IAAI;IACb,QAAQ,IAAI,UAAU;IACtB,SAAS,IAAI;IACb,KAAK,IAAI;IACT,aAAa,mBAAmB;IAChC,cAAc,OAAO;IACrB,sBAAsB,aAAa;IACnC,0BAA0B,iBAAiB;IAC3C,cAAc;IACd,cAAc,KAAK;IACnB,WACE,UAAU,UAAU,SAAS,IAAI,UAAU,YAAY;IACzD,aAAa;IACb,cAAc,6BAA6B,cAAc,OAAO;IAChE,aAAa,cAAc;IAC3B,iBAAiB,cAAc;IAC/B,MAAM,cAAc;IACrB,CAAC;AAEF,UAAO;WACA,OAAO;GACd,MAAM,YAAY,YAAY,KAAK,GAAG;AACtC,QAAK,SAAS;IACZ,QAAQ,IAAI,UAAU;IACtB,WAAW,OAAO,cAAc,SAAS,UAAU;IACnD,UAAU;IACV,iBAAiB;IACjB;IACA,WAAW,IAAI;IACf,SAAS,IAAI;IACb,SAAS,IAAI;IACb,KAAK,IAAI;IACT,SAAS;IACT,cAAc,iBAAiB,QAAQ,MAAM,UAAU,OAAO,MAAM;IACrE,CAAC;AAEF,SAAM;;;CAIV,OAAe,YACb,QACA,QACA,YACA,YACyB;EACzB,MAAM,cAAc,IAAI,sBAAsB;AAC9C,cAAY,QAAQ,OAAO,OAAO,SAAS,UAAU;EACrD,IAAI,mBAAmB;AAEvB,MAAI;AACF,cAAW,MAAM,SAAS,QAAQ;IAChC,MAAM,MAAM;IACZ,MAAM,OAAO,IAAI;AAEjB,QAAI,SAAS,uBAAuB;KAClC,MAAM,QAAQ,IAAI;AAClB,SAAI,OAAO,SAAS,gBAAgB,MAAM,QAAQ,KAChD,aAAY,WAAW,OAAO,MAAM,KAAK,CAAC;cAE1C,OAAO,SAAS,oBAChB,MAAM,YAAY,KAElB,qBAAoB,OAAO,MAAM,SAAS;eAEnC,SAAS,uBAAuB;KACzC,MAAM,QAAQ,IAAI;AAGlB,SAAI,OAAO,SAAS,WAClB,aAAY,YAAY;MACtB,MAAM;MACN,IAAI,MAAM;MACV,UAAU;OACR,MAAM,OAAO,MAAM,QAAQ,GAAG;OAC9B,WACE,OAAO,MAAM,UAAU,WACnB,MAAM,QACN,KAAK,UAAU,MAAM,SAAS,EAAE,CAAC;OACxC;MACF,CAAC;eAEK,SAAS,iBAAiB;KACnC,MAAM,QAAQ,IAAI;AAClB,SAAI,OAAO,eAAe,KACxB,aAAY,eAAe,OAAO,MAAM,YAAY;KAEtD,MAAM,QAAQ,IAAI;AAClB,SAAI,SAAS,KACX,aAAY,SAAS,EACnB,cAAc,MAAM,eACrB,CAAC;eAEK,SAAS,iBAAiB;KACnC,MAAM,UAAU,IAAI;AACpB,SAAI,SAAS,SAAS,KACpB,aAAY,QAAQ,OAAO,QAAQ,MAAM;KAE3C,MAAM,QAAQ,SAAS;AACvB,SAAI,SAAS,KACX,aAAY,SAAS;MACnB,aAAa,MAAM;MACnB,iBAAiB,MAAM;MACvB,qBAAqB,MAAM;MAC5B,CAAC;;AAIN,UAAM;;WAED,OAAO;AACd,eAAY,SACV,iBAAiB,QAAQ,MAAM,UAAU,OAAO,MAAM,CACvD;AACD,SAAM;YACE;GACR,MAAM,QAAQ,YAAY,UAAU;GACpC,MAAM,YAAY,OAAO,YAAY,SAAS,OAAO,SAAS,UAAU;GAExE,MAAM,kBAAkB,MAAM,mBAAmB;GACjD,MAAM,sBAAsB,MAAM,uBAAuB;GACzD,MAAM,iBAAiB,MAAM,eAAe;GAC5C,MAAM,wBACJ,mBAAmB,sBACf,iBAAiB,kBAAkB,sBACnC;GAEN,IAAIA,UAAyB;AAC7B,OAAI,MAAM,eAAe,QAAQ,MAAM,gBAAgB,KACrD,KAAI;AACF,cAAU,cAAc;KACtB;KACA,aAAa;KA
Cb,cAAc,MAAM;KACpB,sBAAsB;KACtB,0BAA0B;KAC3B,CAAC;WACI;AAKV,QAAK,SAAS;IACZ,QAAQ,WAAW,UAAU;IAC7B;IACA,UAAU;IACV,iBAAiB,MAAM;IACvB,WAAW,YAAY;IACvB,WAAW,WAAW;IACtB,SAAS,WAAW;IACpB,QAAQ,WAAW,UAAU;IAC7B,SAAS,WAAW;IACpB,KAAK,WAAW;IAChB,aAAa,yBAAyB;IACtC,cAAc,MAAM;IACpB,sBAAsB,mBAAmB;IACzC,0BAA0B,uBAAuB;IACjD,cAAc;IACd,cAAc,MAAM;IACpB,WAAW,MAAM,UAAU,SAAS,IAAI,MAAM,YAAY;IAC1D,gBAAgB,MAAM;IACtB,aAAa;IACb,SAAS,MAAM;IACf,cAAc,MAAM;IACpB,kBAAkB,oBAAoB;IACtC,cAAc,6BAA6B,OAAO,OAAO;IACzD,aAAa,OAAO;IACpB,iBAAiB,OAAO;IACxB,MAAM,OAAO;IACd,CAAC;;;CAIN,AAAQ,oBACN,QACyB;AACzB,MAAI,CAAC,KAAK,kBAAmB,QAAO;EAIpC,MAAM,WAAW,cAHC,OAAO,iBAAiB,OAAO,QAGT;AACxC,SAAO;GACL,GAAG;GACH,eAAe;GAChB;;CAGH,AAAQ,oBACN,UACA,KACA,0BACM;AACN,MAAI,CAAC,yBAA0B;AAC/B,MAAI,IAAI,UAAU,QAAQ,IAAI,aAAa,KAAM;AACjD,MAAI,CAAC,MAAM,QAAQ,SAAS,CAAE;AAC9B,OAAK,MAAM,OAAO,UAAU;AAE1B,OADc,KAAiC,SAClC,OAAQ;GACrB,MAAM,aAAc,KAAiC;GACrD,MAAM,UAAU,MAAM,QAAQ,WAAW,GACrC,WACG,KAAK,SAAS;AACb,QAAI,OAAO,SAAS,SAAU,QAAO;IACrC,MAAM,OAAQ,MAAkC;AAChD,WAAO,OAAO,SAAS,WAAW,OAAO;KACzC,CACD,KAAK,GAAG,GACX,OAAO,eAAe,WACpB,aACA;AACN,OAAI,CAAC,QAAS;AACd,oBAAiB;IACf,WAAW,KAAK;IAChB,QAAQ,IAAI;IACZ,gBAAgB;IAChB,WAAW,IAAI;IACf,SAAS,IAAI;IACb,QAAQ,IAAI,UAAU;IACtB,SAAS,IAAI;IACb,eAAe,IAAI;IACnB,eAAe,IAAI;IACnB,KAAK,IAAI;IACT,QAAQ,IAAI;IACZ,eAAe,KAAK;IACrB,CAAC;;;;AAKR,SAAS,iBAAiB,OAAiD;AACzE,QACE,SAAS,QACT,OAAQ,MAAkC,OAAO,mBAC/C;;AAIN,SAAgB,6BACd,QACoB;AACpB,KAAI,OAAO,WAAW,SAAU,QAAO;AACvC,KAAI,MAAM,QAAQ,OAAO,CACvB,QAAO,OACJ,KAAK,UAAU;AACd,MAAI,OAAO,UAAU,SAAU,QAAO;AACtC,MAAI,OAAO,UAAU,YAAY,SAAS,KACxC,QAAO,OAAQ,MAAkC,QAAQ,GAAG;AAE9D,SAAO;GACP,CACD,OAAO,QAAQ,CACf,KAAK,KAAK;;AAKjB,SAAgB,wBACd,SAKA;CACA,IAAI,OAAO;CACX,IAAIC;CACJ,MAAMC,YAA4C,EAAE;AAEpD,KAAI,CAAC,QAAS,QAAO;EAAE;EAAM;EAAW;EAAW;AAEnD,MAAK,MAAM,SAAS,QAClB,KAAI,MAAM,SAAS,OACjB,SAAQ,OAAO,MAAM,QAAQ,GAAG;UACvB,MAAM,SAAS,WACxB,aAAY,OAAO,MAAM,YAAY,GAAG;UAC/B,MAAM,SAAS,WACxB,WAAU,KAAK;EACb,MAAM;EACN,IAAI,MAAM;EACV,UAAU;GACR,MAAM,OAAO,MAAM,QAAQ,GAAG;GAC9B,WACE,OAAO,MAAM,UAAU,WACnB,MAAM,QACN,KAAK,UAAU,MAAM,SAA
S,EAAE,CAAC;GACxC;EACF,CAAC;AAIN,QAAO;EAAE;EAAM;EAAW;EAAW"}
1
+ {"version":3,"file":"anthropic.js","names":["_AnthropicModule: Record<string, unknown> | null","clientOpts: Record<string, unknown>","costUsd: number | null","reasoning: string | undefined","toolCalls: Array<Record<string, unknown>>"],"sources":["../../src/providers/anthropic.ts"],"sourcesContent":["/**\n * Anthropic provider wrapper with automatic tracking.\n */\n\nimport type { PrivacyConfig } from '../core/privacy.js';\nimport { trackUserMessage } from '../core/tracking.js';\nimport { getDefaultPropagateContext, injectContext } from '../propagation.js';\nimport type {\n AmplitudeLike,\n AmplitudeOrAI,\n AnthropicResponse,\n TrackFn,\n} from '../types.js';\nimport { calculateCost } from '../utils/costs.js';\nimport { tryRequire } from '../utils/resolve-module.js';\nimport { StreamingAccumulator } from '../utils/streaming.js';\nimport {\n applySessionContext,\n BaseAIProvider,\n type ProviderTrackOptions,\n} from './base.js';\n\nconst _resolved = tryRequire('@anthropic-ai/sdk');\nexport const ANTHROPIC_AVAILABLE = _resolved != null;\nconst _AnthropicModule: Record<string, unknown> | null = _resolved;\n\nexport { _AnthropicModule };\n\nexport interface AnthropicOptions {\n amplitude: AmplitudeOrAI;\n apiKey?: string;\n privacyConfig?: PrivacyConfig | null;\n propagateContext?: boolean;\n /** Pass the `@anthropic-ai/sdk` module directly to bypass `tryRequire` (required in bundler environments). 
*/\n anthropicModule?: unknown;\n}\n\nexport class Anthropic<\n TClient extends Record<string, unknown> = Record<string, unknown>,\n> extends BaseAIProvider {\n private _client: TClient;\n readonly messages: WrappedMessages;\n private _propagateContext: boolean;\n\n constructor(options: AnthropicOptions) {\n super({\n amplitude: options.amplitude,\n privacyConfig: options.privacyConfig,\n providerName: 'anthropic',\n });\n\n const mod =\n (options.anthropicModule as Record<string, unknown> | null) ??\n _AnthropicModule;\n if (mod == null) {\n throw new Error(\n '@anthropic-ai/sdk package is required. Install it with: npm install @anthropic-ai/sdk — or pass the module directly via the anthropicModule option.',\n );\n }\n\n const AnthropicSDK = mod.Anthropic as new (\n opts: Record<string, unknown>,\n ) => unknown;\n\n const clientOpts: Record<string, unknown> = {};\n if (options.apiKey) clientOpts.apiKey = options.apiKey;\n\n this._client = new AnthropicSDK(clientOpts) as TClient;\n this._propagateContext =\n options.propagateContext ?? getDefaultPropagateContext();\n this.messages = new WrappedMessages(\n this._client,\n this.trackFn(),\n this._amplitude,\n this._privacyConfig,\n this._propagateContext,\n );\n }\n\n get client(): TClient {\n return this._client;\n }\n}\n\nexport class WrappedMessages {\n _original: Record<string, unknown>;\n private _trackFn: TrackFn;\n private _amplitude: AmplitudeLike;\n private _privacyConfig: PrivacyConfig | null;\n private _propagateContext: boolean;\n\n constructor(\n client: unknown,\n trackFn: TrackFn,\n amplitude: AmplitudeLike,\n privacyConfig: PrivacyConfig | null,\n propagateContext: boolean,\n ) {\n const clientObj = client as Record<string, unknown>;\n this._original = clientObj.messages as Record<string, unknown>;\n this._trackFn =\n typeof trackFn === 'function'\n ? 
trackFn\n : (trackFn as unknown as { trackFn(): TrackFn }).trackFn();\n this._amplitude = amplitude;\n this._privacyConfig = privacyConfig;\n this._propagateContext = propagateContext;\n }\n\n async create(\n params: Record<string, unknown>,\n amplitudeOverrides?: ProviderTrackOptions,\n ): Promise<AnthropicResponse | AsyncIterable<unknown>> {\n const createFn = this._original.create as (\n ...args: unknown[]\n ) => Promise<unknown>;\n const startTime = performance.now();\n const requestParams = this._withContextHeaders(params);\n const ctx = applySessionContext(amplitudeOverrides);\n\n try {\n this._trackInputMessages(\n requestParams.messages as unknown,\n ctx,\n amplitudeOverrides?.trackInputMessages ?? true,\n );\n const response = await createFn.call(this._original, requestParams);\n\n if (requestParams.stream === true && _isAsyncIterable(response)) {\n return this._wrapStream(\n response as AsyncIterable<unknown>,\n requestParams,\n startTime,\n ctx,\n );\n }\n\n const latencyMs = performance.now() - startTime;\n\n const resp = response as AnthropicResponse;\n const usage = resp.usage;\n const extracted = extractAnthropicContent(\n resp.content as unknown as Array<Record<string, unknown>> | undefined,\n );\n const firstTextBlock = resp.content?.find((b) => b.type === 'text');\n const modelName = String(resp.model ?? requestParams.model ?? 'unknown');\n\n const cacheRead = usage?.cache_read_input_tokens ?? 0;\n const cacheCreation = usage?.cache_creation_input_tokens ?? 0;\n const rawInput = usage?.input_tokens ?? 0;\n const normalizedInput =\n cacheRead || cacheCreation\n ? 
rawInput + cacheRead + cacheCreation\n : rawInput;\n\n let costUsd: number | null = null;\n if (usage?.input_tokens != null && usage?.output_tokens != null) {\n try {\n costUsd = calculateCost({\n modelName,\n inputTokens: normalizedInput,\n outputTokens: usage.output_tokens,\n cacheReadInputTokens: cacheRead,\n cacheCreationInputTokens: cacheCreation,\n });\n } catch {\n // cost calculation is best-effort\n }\n }\n\n this._trackFn({\n userId: ctx.userId ?? 'unknown',\n modelName,\n provider: 'anthropic',\n responseContent: String(firstTextBlock?.text ?? ''),\n reasoningContent: extracted.reasoning,\n latencyMs,\n sessionId: ctx.sessionId,\n traceId: ctx.traceId,\n turnId: ctx.turnId ?? undefined,\n agentId: ctx.agentId,\n env: ctx.env,\n inputTokens: normalizedInput || undefined,\n outputTokens: usage?.output_tokens,\n cacheReadInputTokens: cacheRead || undefined,\n cacheCreationInputTokens: cacheCreation || undefined,\n totalCostUsd: costUsd,\n finishReason: resp.stop_reason,\n toolCalls:\n extracted.toolCalls.length > 0 ? extracted.toolCalls : undefined,\n isStreaming: false,\n systemPrompt: extractAnthropicSystemPrompt(requestParams.system),\n temperature: requestParams.temperature as number | undefined,\n maxOutputTokens: requestParams.max_tokens as number | undefined,\n topP: requestParams.top_p as number | undefined,\n });\n\n return response as AnthropicResponse;\n } catch (error) {\n const latencyMs = performance.now() - startTime;\n this._trackFn({\n userId: ctx.userId ?? 'unknown',\n modelName: String(requestParams.model ?? 'unknown'),\n provider: 'anthropic',\n responseContent: '',\n latencyMs,\n sessionId: ctx.sessionId,\n traceId: ctx.traceId,\n agentId: ctx.agentId,\n env: ctx.env,\n isError: true,\n errorMessage: error instanceof Error ? 
error.message : String(error),\n });\n\n throw error;\n }\n }\n\n private async *_wrapStream(\n stream: AsyncIterable<unknown>,\n params: Record<string, unknown>,\n _startTime: number,\n sessionCtx: ProviderTrackOptions,\n ): AsyncGenerator<unknown> {\n const accumulator = new StreamingAccumulator();\n accumulator.model = String(params.model ?? 'unknown');\n let reasoningContent = '';\n\n try {\n for await (const event of stream) {\n const evt = event as Record<string, unknown>;\n const type = evt.type as string | undefined;\n\n if (type === 'content_block_delta') {\n const delta = evt.delta as Record<string, unknown> | undefined;\n if (delta?.type === 'text_delta' && delta.text != null) {\n accumulator.addContent(String(delta.text));\n } else if (\n delta?.type === 'thinking_delta' &&\n delta.thinking != null\n ) {\n reasoningContent += String(delta.thinking);\n }\n } else if (type === 'content_block_start') {\n const block = evt.content_block as\n | Record<string, unknown>\n | undefined;\n if (block?.type === 'tool_use') {\n accumulator.addToolCall({\n type: 'function',\n id: block.id,\n function: {\n name: String(block.name ?? ''),\n arguments:\n typeof block.input === 'string'\n ? block.input\n : JSON.stringify(block.input ?? 
{}),\n },\n });\n }\n } else if (type === 'message_delta') {\n const delta = evt.delta as Record<string, unknown> | undefined;\n if (delta?.stop_reason != null) {\n accumulator.finishReason = String(delta.stop_reason);\n }\n const usage = evt.usage as Record<string, number> | undefined;\n if (usage != null) {\n accumulator.setUsage({\n outputTokens: usage.output_tokens,\n });\n }\n } else if (type === 'message_start') {\n const message = evt.message as Record<string, unknown> | undefined;\n if (message?.model != null) {\n accumulator.model = String(message.model);\n }\n const usage = message?.usage as Record<string, number> | undefined;\n if (usage != null) {\n accumulator.setUsage({\n inputTokens: usage.input_tokens,\n cacheReadTokens: usage.cache_read_input_tokens,\n cacheCreationTokens: usage.cache_creation_input_tokens,\n });\n }\n }\n\n yield event;\n }\n } catch (error) {\n accumulator.setError(\n error instanceof Error ? error.message : String(error),\n );\n throw error;\n } finally {\n const state = accumulator.getState();\n const modelName = String(accumulator.model ?? params.model ?? 'unknown');\n\n const streamCacheRead = state.cacheReadTokens ?? 0;\n const streamCacheCreation = state.cacheCreationTokens ?? 0;\n const streamRawInput = state.inputTokens ?? 0;\n const streamNormalizedInput =\n streamCacheRead || streamCacheCreation\n ? streamRawInput + streamCacheRead + streamCacheCreation\n : streamRawInput;\n\n let costUsd: number | null = null;\n if (state.inputTokens != null && state.outputTokens != null) {\n try {\n costUsd = calculateCost({\n modelName,\n inputTokens: streamNormalizedInput,\n outputTokens: state.outputTokens,\n cacheReadInputTokens: streamCacheRead,\n cacheCreationInputTokens: streamCacheCreation,\n });\n } catch {\n // cost calculation is best-effort\n }\n }\n\n this._trackFn({\n userId: sessionCtx.userId ?? 
'unknown',\n modelName,\n provider: 'anthropic',\n responseContent: state.content,\n latencyMs: accumulator.elapsedMs,\n sessionId: sessionCtx.sessionId,\n traceId: sessionCtx.traceId,\n turnId: sessionCtx.turnId ?? undefined,\n agentId: sessionCtx.agentId,\n env: sessionCtx.env,\n inputTokens: streamNormalizedInput || undefined,\n outputTokens: state.outputTokens,\n cacheReadInputTokens: streamCacheRead || undefined,\n cacheCreationInputTokens: streamCacheCreation || undefined,\n totalCostUsd: costUsd,\n finishReason: state.finishReason,\n toolCalls: state.toolCalls.length > 0 ? state.toolCalls : undefined,\n providerTtfbMs: state.ttfbMs,\n isStreaming: true,\n isError: state.isError,\n errorMessage: state.errorMessage,\n reasoningContent: reasoningContent || undefined,\n systemPrompt: extractAnthropicSystemPrompt(params.system),\n temperature: params.temperature as number | undefined,\n maxOutputTokens: params.max_tokens as number | undefined,\n topP: params.top_p as number | undefined,\n });\n }\n }\n\n private _withContextHeaders(\n params: Record<string, unknown>,\n ): Record<string, unknown> {\n if (!this._propagateContext) return params;\n const existing = (params.extra_headers ?? params.headers) as\n | Record<string, string>\n | undefined;\n const injected = injectContext(existing);\n return {\n ...params,\n extra_headers: injected,\n };\n }\n\n private _trackInputMessages(\n messages: unknown,\n ctx: ProviderTrackOptions,\n shouldTrackInputMessages: boolean,\n ): void {\n if (!shouldTrackInputMessages) return;\n if (ctx.userId == null || ctx.sessionId == null) return;\n if (!Array.isArray(messages)) return;\n for (const msg of messages) {\n const role = (msg as Record<string, unknown>)?.role;\n if (role !== 'user') continue;\n const rawContent = (msg as Record<string, unknown>)?.content;\n const content = Array.isArray(rawContent)\n ? 
rawContent\n .map((part) => {\n if (typeof part === 'string') return part;\n const text = (part as Record<string, unknown>)?.text;\n return typeof text === 'string' ? text : '';\n })\n .join('')\n : typeof rawContent === 'string'\n ? rawContent\n : '';\n if (!content) continue;\n trackUserMessage({\n amplitude: this._amplitude,\n userId: ctx.userId,\n messageContent: content,\n sessionId: ctx.sessionId,\n traceId: ctx.traceId,\n turnId: ctx.turnId ?? undefined,\n agentId: ctx.agentId,\n parentAgentId: ctx.parentAgentId,\n customerOrgId: ctx.customerOrgId,\n env: ctx.env,\n groups: ctx.groups,\n privacyConfig: this._privacyConfig,\n });\n }\n }\n}\n\nfunction _isAsyncIterable(value: unknown): value is AsyncIterable<unknown> {\n return (\n value != null &&\n typeof (value as Record<symbol, unknown>)[Symbol.asyncIterator] ===\n 'function'\n );\n}\n\nexport function extractAnthropicSystemPrompt(\n system: unknown,\n): string | undefined {\n if (typeof system === 'string') return system;\n if (Array.isArray(system)) {\n return system\n .map((block) => {\n if (typeof block === 'string') return block;\n if (typeof block === 'object' && block != null) {\n return String((block as Record<string, unknown>).text ?? '');\n }\n return '';\n })\n .filter(Boolean)\n .join('\\n');\n }\n return undefined;\n}\n\nexport function extractAnthropicContent(\n content: Array<Record<string, unknown>> | undefined,\n): {\n text: string;\n reasoning: string | undefined;\n toolCalls: Array<Record<string, unknown>>;\n} {\n let text = '';\n let reasoning: string | undefined;\n const toolCalls: Array<Record<string, unknown>> = [];\n\n if (!content) return { text, reasoning, toolCalls };\n\n for (const block of content) {\n if (block.type === 'text') {\n text += String(block.text ?? '');\n } else if (block.type === 'thinking') {\n reasoning = String(block.thinking ?? 
'');\n } else if (block.type === 'tool_use') {\n toolCalls.push({\n type: 'function',\n id: block.id,\n function: {\n name: String(block.name ?? ''),\n arguments:\n typeof block.input === 'string'\n ? block.input\n : JSON.stringify(block.input ?? {}),\n },\n });\n }\n }\n\n return { text, reasoning, toolCalls };\n}\n"],"mappings":";;;;;;;;AAsBA,MAAM,YAAY,WAAW,oBAAoB;AACjD,MAAa,sBAAsB,aAAa;AAChD,MAAMA,mBAAmD;AAazD,IAAa,YAAb,cAEU,eAAe;CACvB,AAAQ;CACR,AAAS;CACT,AAAQ;CAER,YAAY,SAA2B;AACrC,QAAM;GACJ,WAAW,QAAQ;GACnB,eAAe,QAAQ;GACvB,cAAc;GACf,CAAC;EAEF,MAAM,MACH,QAAQ,mBACT;AACF,MAAI,OAAO,KACT,OAAM,IAAI,MACR,sJACD;EAGH,MAAM,eAAe,IAAI;EAIzB,MAAMC,aAAsC,EAAE;AAC9C,MAAI,QAAQ,OAAQ,YAAW,SAAS,QAAQ;AAEhD,OAAK,UAAU,IAAI,aAAa,WAAW;AAC3C,OAAK,oBACH,QAAQ,oBAAoB,4BAA4B;AAC1D,OAAK,WAAW,IAAI,gBAClB,KAAK,SACL,KAAK,SAAS,EACd,KAAK,YACL,KAAK,gBACL,KAAK,kBACN;;CAGH,IAAI,SAAkB;AACpB,SAAO,KAAK;;;AAIhB,IAAa,kBAAb,MAA6B;CAC3B;CACA,AAAQ;CACR,AAAQ;CACR,AAAQ;CACR,AAAQ;CAER,YACE,QACA,SACA,WACA,eACA,kBACA;AAEA,OAAK,YADa,OACS;AAC3B,OAAK,WACH,OAAO,YAAY,aACf,UACC,QAA8C,SAAS;AAC9D,OAAK,aAAa;AAClB,OAAK,iBAAiB;AACtB,OAAK,oBAAoB;;CAG3B,MAAM,OACJ,QACA,oBACqD;EACrD,MAAM,WAAW,KAAK,UAAU;EAGhC,MAAM,YAAY,YAAY,KAAK;EACnC,MAAM,gBAAgB,KAAK,oBAAoB,OAAO;EACtD,MAAM,MAAM,oBAAoB,mBAAmB;AAEnD,MAAI;AACF,QAAK,oBACH,cAAc,UACd,KACA,oBAAoB,sBAAsB,KAC3C;GACD,MAAM,WAAW,MAAM,SAAS,KAAK,KAAK,WAAW,cAAc;AAEnE,OAAI,cAAc,WAAW,QAAQ,iBAAiB,SAAS,CAC7D,QAAO,KAAK,YACV,UACA,eACA,WACA,IACD;GAGH,MAAM,YAAY,YAAY,KAAK,GAAG;GAEtC,MAAM,OAAO;GACb,MAAM,QAAQ,KAAK;GACnB,MAAM,YAAY,wBAChB,KAAK,QACN;GACD,MAAM,iBAAiB,KAAK,SAAS,MAAM,MAAM,EAAE,SAAS,OAAO;GACnE,MAAM,YAAY,OAAO,KAAK,SAAS,cAAc,SAAS,UAAU;GAExE,MAAM,YAAY,OAAO,2BAA2B;GACpD,MAAM,gBAAgB,OAAO,+BAA+B;GAC5D,MAAM,WAAW,OAAO,gBAAgB;GACxC,MAAM,kBACJ,aAAa,gBACT,WAAW,YAAY,gBACvB;GAEN,IAAIC,UAAyB;AAC7B,OAAI,OAAO,gBAAgB,QAAQ,OAAO,iBAAiB,KACzD,KAAI;AACF,cAAU,cAAc;KACtB;KACA,aAAa;KACb,cAAc,MAAM;KACpB,sBAAsB;KACtB,0BAA0B;KAC3B,CAAC;WACI;AAKV,QAAK,SAAS;IACZ,QAAQ,IAAI,UAAU;IACtB;IACA,UAAU;IACV,iBAAiB,OAAO,gBAAgB,QAAQ,
GAAG;IACnD,kBAAkB,UAAU;IAC5B;IACA,WAAW,IAAI;IACf,SAAS,IAAI;IACb,QAAQ,IAAI,UAAU;IACtB,SAAS,IAAI;IACb,KAAK,IAAI;IACT,aAAa,mBAAmB;IAChC,cAAc,OAAO;IACrB,sBAAsB,aAAa;IACnC,0BAA0B,iBAAiB;IAC3C,cAAc;IACd,cAAc,KAAK;IACnB,WACE,UAAU,UAAU,SAAS,IAAI,UAAU,YAAY;IACzD,aAAa;IACb,cAAc,6BAA6B,cAAc,OAAO;IAChE,aAAa,cAAc;IAC3B,iBAAiB,cAAc;IAC/B,MAAM,cAAc;IACrB,CAAC;AAEF,UAAO;WACA,OAAO;GACd,MAAM,YAAY,YAAY,KAAK,GAAG;AACtC,QAAK,SAAS;IACZ,QAAQ,IAAI,UAAU;IACtB,WAAW,OAAO,cAAc,SAAS,UAAU;IACnD,UAAU;IACV,iBAAiB;IACjB;IACA,WAAW,IAAI;IACf,SAAS,IAAI;IACb,SAAS,IAAI;IACb,KAAK,IAAI;IACT,SAAS;IACT,cAAc,iBAAiB,QAAQ,MAAM,UAAU,OAAO,MAAM;IACrE,CAAC;AAEF,SAAM;;;CAIV,OAAe,YACb,QACA,QACA,YACA,YACyB;EACzB,MAAM,cAAc,IAAI,sBAAsB;AAC9C,cAAY,QAAQ,OAAO,OAAO,SAAS,UAAU;EACrD,IAAI,mBAAmB;AAEvB,MAAI;AACF,cAAW,MAAM,SAAS,QAAQ;IAChC,MAAM,MAAM;IACZ,MAAM,OAAO,IAAI;AAEjB,QAAI,SAAS,uBAAuB;KAClC,MAAM,QAAQ,IAAI;AAClB,SAAI,OAAO,SAAS,gBAAgB,MAAM,QAAQ,KAChD,aAAY,WAAW,OAAO,MAAM,KAAK,CAAC;cAE1C,OAAO,SAAS,oBAChB,MAAM,YAAY,KAElB,qBAAoB,OAAO,MAAM,SAAS;eAEnC,SAAS,uBAAuB;KACzC,MAAM,QAAQ,IAAI;AAGlB,SAAI,OAAO,SAAS,WAClB,aAAY,YAAY;MACtB,MAAM;MACN,IAAI,MAAM;MACV,UAAU;OACR,MAAM,OAAO,MAAM,QAAQ,GAAG;OAC9B,WACE,OAAO,MAAM,UAAU,WACnB,MAAM,QACN,KAAK,UAAU,MAAM,SAAS,EAAE,CAAC;OACxC;MACF,CAAC;eAEK,SAAS,iBAAiB;KACnC,MAAM,QAAQ,IAAI;AAClB,SAAI,OAAO,eAAe,KACxB,aAAY,eAAe,OAAO,MAAM,YAAY;KAEtD,MAAM,QAAQ,IAAI;AAClB,SAAI,SAAS,KACX,aAAY,SAAS,EACnB,cAAc,MAAM,eACrB,CAAC;eAEK,SAAS,iBAAiB;KACnC,MAAM,UAAU,IAAI;AACpB,SAAI,SAAS,SAAS,KACpB,aAAY,QAAQ,OAAO,QAAQ,MAAM;KAE3C,MAAM,QAAQ,SAAS;AACvB,SAAI,SAAS,KACX,aAAY,SAAS;MACnB,aAAa,MAAM;MACnB,iBAAiB,MAAM;MACvB,qBAAqB,MAAM;MAC5B,CAAC;;AAIN,UAAM;;WAED,OAAO;AACd,eAAY,SACV,iBAAiB,QAAQ,MAAM,UAAU,OAAO,MAAM,CACvD;AACD,SAAM;YACE;GACR,MAAM,QAAQ,YAAY,UAAU;GACpC,MAAM,YAAY,OAAO,YAAY,SAAS,OAAO,SAAS,UAAU;GAExE,MAAM,kBAAkB,MAAM,mBAAmB;GACjD,MAAM,sBAAsB,MAAM,uBAAuB;GACzD,MAAM,iBAAiB,MAAM,eAAe;GAC5C,MAAM,wBACJ,mBAAmB,sBACf,iBAAiB,kBAAkB,sBACnC;GAEN,IAAIA,UAAyB;AAC7B,OAAI,MAAM,eAAe,QAAQ,MAAM,gBAAgB,KACrD,KAAI;AACF,cAAU,cAAc;KACtB;KACA,aAAa;KA
Cb,cAAc,MAAM;KACpB,sBAAsB;KACtB,0BAA0B;KAC3B,CAAC;WACI;AAKV,QAAK,SAAS;IACZ,QAAQ,WAAW,UAAU;IAC7B;IACA,UAAU;IACV,iBAAiB,MAAM;IACvB,WAAW,YAAY;IACvB,WAAW,WAAW;IACtB,SAAS,WAAW;IACpB,QAAQ,WAAW,UAAU;IAC7B,SAAS,WAAW;IACpB,KAAK,WAAW;IAChB,aAAa,yBAAyB;IACtC,cAAc,MAAM;IACpB,sBAAsB,mBAAmB;IACzC,0BAA0B,uBAAuB;IACjD,cAAc;IACd,cAAc,MAAM;IACpB,WAAW,MAAM,UAAU,SAAS,IAAI,MAAM,YAAY;IAC1D,gBAAgB,MAAM;IACtB,aAAa;IACb,SAAS,MAAM;IACf,cAAc,MAAM;IACpB,kBAAkB,oBAAoB;IACtC,cAAc,6BAA6B,OAAO,OAAO;IACzD,aAAa,OAAO;IACpB,iBAAiB,OAAO;IACxB,MAAM,OAAO;IACd,CAAC;;;CAIN,AAAQ,oBACN,QACyB;AACzB,MAAI,CAAC,KAAK,kBAAmB,QAAO;EAIpC,MAAM,WAAW,cAHC,OAAO,iBAAiB,OAAO,QAGT;AACxC,SAAO;GACL,GAAG;GACH,eAAe;GAChB;;CAGH,AAAQ,oBACN,UACA,KACA,0BACM;AACN,MAAI,CAAC,yBAA0B;AAC/B,MAAI,IAAI,UAAU,QAAQ,IAAI,aAAa,KAAM;AACjD,MAAI,CAAC,MAAM,QAAQ,SAAS,CAAE;AAC9B,OAAK,MAAM,OAAO,UAAU;AAE1B,OADc,KAAiC,SAClC,OAAQ;GACrB,MAAM,aAAc,KAAiC;GACrD,MAAM,UAAU,MAAM,QAAQ,WAAW,GACrC,WACG,KAAK,SAAS;AACb,QAAI,OAAO,SAAS,SAAU,QAAO;IACrC,MAAM,OAAQ,MAAkC;AAChD,WAAO,OAAO,SAAS,WAAW,OAAO;KACzC,CACD,KAAK,GAAG,GACX,OAAO,eAAe,WACpB,aACA;AACN,OAAI,CAAC,QAAS;AACd,oBAAiB;IACf,WAAW,KAAK;IAChB,QAAQ,IAAI;IACZ,gBAAgB;IAChB,WAAW,IAAI;IACf,SAAS,IAAI;IACb,QAAQ,IAAI,UAAU;IACtB,SAAS,IAAI;IACb,eAAe,IAAI;IACnB,eAAe,IAAI;IACnB,KAAK,IAAI;IACT,QAAQ,IAAI;IACZ,eAAe,KAAK;IACrB,CAAC;;;;AAKR,SAAS,iBAAiB,OAAiD;AACzE,QACE,SAAS,QACT,OAAQ,MAAkC,OAAO,mBAC/C;;AAIN,SAAgB,6BACd,QACoB;AACpB,KAAI,OAAO,WAAW,SAAU,QAAO;AACvC,KAAI,MAAM,QAAQ,OAAO,CACvB,QAAO,OACJ,KAAK,UAAU;AACd,MAAI,OAAO,UAAU,SAAU,QAAO;AACtC,MAAI,OAAO,UAAU,YAAY,SAAS,KACxC,QAAO,OAAQ,MAAkC,QAAQ,GAAG;AAE9D,SAAO;GACP,CACD,OAAO,QAAQ,CACf,KAAK,KAAK;;AAKjB,SAAgB,wBACd,SAKA;CACA,IAAI,OAAO;CACX,IAAIC;CACJ,MAAMC,YAA4C,EAAE;AAEpD,KAAI,CAAC,QAAS,QAAO;EAAE;EAAM;EAAW;EAAW;AAEnD,MAAK,MAAM,SAAS,QAClB,KAAI,MAAM,SAAS,OACjB,SAAQ,OAAO,MAAM,QAAQ,GAAG;UACvB,MAAM,SAAS,WACxB,aAAY,OAAO,MAAM,YAAY,GAAG;UAC/B,MAAM,SAAS,WACxB,WAAU,KAAK;EACb,MAAM;EACN,IAAI,MAAM;EACV,UAAU;GACR,MAAM,OAAO,MAAM,QAAQ,GAAG;GAC9B,WACE,OAAO,MAAM,UAAU,WACnB,MAAM,QACN,KAAK,UAAU,MAAM,SAA
S,EAAE,CAAC;GACxC;EACF,CAAC;AAIN,QAAO;EAAE;EAAM;EAAW;EAAW"}
@@ -1 +1 @@
1
- {"version":3,"file":"mistral.d.ts","names":[],"sources":["../../src/providers/mistral.ts"],"sourcesContent":[],"mappings":";;;;;;AAaM,cADO,iBACS,EAAM,OAAA;AAI5B,cAJM,cAIyB,EAJT,MAKT,CAAA,MAAA,EAAA,OAEK,CAAA,GAAA,IAAA;AAOD,UAVA,cAAA,CAUA;EAEM,SAAA,EAXV,aAWU;EAJM,MAAA,CAAA,EAAA,MAAA;EAAc,aAAA,CAAA,EALzB,aAKyB,GAAA,IAAA;EAoC9B;EAI2B,aAAA,CAAA,EAAA,OAAA;;AAQW,cAhDtC,OAAA,SAAgB,cAAA,CAgDsB;EA2F5B,QAAA,OAAA;EAA0B,SAAA,IAAA,EAzIhC,WAyIgC;EAAO,WAAA,CAAA,OAAA,EAvIjC,cAuIiC;;;cAvG3C,WAAA;;;wCAI2B;mBAQf,0BAA0B;iBA2F5B,0BAA0B"}
1
+ {"version":3,"file":"mistral.d.ts","names":[],"sources":["../../src/providers/mistral.ts"],"sourcesContent":[],"mappings":";;;;;;AAaM,cADO,iBACS,EAAA,OAAM;AAI5B,cAJM,cAIyB,EAJT,MAKT,CAAA,MAAA,EAAA,OAEK,CAAA,GAAA,IAAA;AAOD,UAVA,cAAA,CAUA;EAEM,SAAA,EAXV,aAWU;EAJM,MAAA,CAAA,EAAA,MAAA;EAAc,aAAA,CAAA,EALzB,aAKyB,GAAA,IAAA;EAoC9B;EAI2B,aAAA,CAAA,EAAA,OAAA;;AAQW,cAhDtC,OAAA,SAAgB,cAAA,CAgDsB;EA2F5B,QAAA,OAAA;EAA0B,SAAA,IAAA,EAzIhC,WAyIgC;EAAO,WAAA,CAAA,OAAA,EAvIjC,cAuIiC;;;cAvG3C,WAAA;;;wCAI2B;mBAQf,0BAA0B;iBA2F5B,0BAA0B"}
package/llms-full.txt CHANGED
@@ -1,5 +1,5 @@
1
1
  # llms-full.txt
2
- # @amplitude/ai 0.1.0 — Detailed API Reference for LLM Agents
2
+ # @amplitude/ai 0.1.1 — Detailed API Reference for LLM Agents
3
3
  # Use this file for instrumentation guidance. See llms.txt for discovery.
4
4
 
5
5
  ## Core API
package/llms.txt CHANGED
@@ -1,7 +1,7 @@
1
1
  <!-- GENERATED FILE: do not edit manually -->
2
2
  # llms.txt
3
3
  package=@amplitude/ai
4
- version=0.1.0
4
+ version=0.1.1
5
5
 
6
6
  [mcp.tools]
7
7
  get_event_schema
package/mcp.schema.json CHANGED
@@ -1,7 +1,7 @@
1
1
  {
2
2
  "generated": true,
3
3
  "package": "@amplitude/ai",
4
- "version": "0.1.0",
4
+ "version": "0.1.1",
5
5
  "prompt": "instrument_app",
6
6
  "tools": [
7
7
  "get_event_schema",
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@amplitude/ai",
3
- "version": "0.1.0",
3
+ "version": "0.1.1",
4
4
  "private": false,
5
5
  "description": "Amplitude AI SDK - LLM usage tracking for Amplitude Analytics",
6
6
  "keywords": [