@agentgov/sdk 0.1.1 → 0.1.2

Files changed (2)
  1. package/README.md +88 -275
  2. package/package.json +2 -2
package/README.md CHANGED
@@ -1,75 +1,23 @@
  # @agentgov/sdk
 
- Official SDK for AgentGov — AI Agent Governance Platform.
+ [![npm version](https://img.shields.io/npm/v/@agentgov/sdk)](https://www.npmjs.com/package/@agentgov/sdk)
+ [![license](https://img.shields.io/npm/l/@agentgov/sdk)](https://github.com/agentgov-co/agentgov/blob/main/LICENSE)
 
- Automatically trace your AI agent operations with minimal code changes. Supports OpenAI, Vercel AI SDK, streaming, tool calls, and more.
+ Observability SDK for AI agents. Trace LLM calls, tool usage, and agent steps with minimal code changes.
 
- ## Features
-
- - **OpenAI Integration** — Automatic tracing with `wrapOpenAI()`
- - **Vercel AI SDK** — Support for `generateText`, `streamText`, `generateObject`, `embed`
- - **Streaming Support** — Full tracking of streaming responses
- - **Tool Calls** — Automatic span creation for tool/function calls
- - **Cost Estimation** — Built-in pricing for common models
- - **Batching** — High-throughput mode with `queueTrace()` / `queueSpan()`
- - **Context Management** — `withTrace()` / `withSpan()` helpers
+ Supports OpenAI, Vercel AI SDK, streaming, tool calls, and cost tracking.
 
  ## Installation
 
  ```bash
  npm install @agentgov/sdk
- # or
- pnpm add @agentgov/sdk
- ```
-
- ## Authentication
-
- The SDK uses API keys for authentication. Get your API key from the AgentGov dashboard:
-
- 1. Go to **Settings → API Keys**
- 2. Click **Create API Key**
- 3. Copy the key (it's only shown once!)
-
- API keys have the format `ag_live_xxxxxxxxxxxx` (production) or `ag_test_xxxxxxxxxxxx` (testing).
-
- ```typescript
- import { AgentGov } from "@agentgov/sdk";
-
- const ag = new AgentGov({
-   apiKey: process.env.AGENTGOV_API_KEY!, // ag_live_xxx or ag_test_xxx
-   projectId: process.env.AGENTGOV_PROJECT_ID!,
- });
  ```
 
- ### API Key Scopes
-
- API keys can be scoped to:
- - **All projects** — Access all projects in your organization
- - **Specific project** — Access only the specified project
-
- ### Error Handling for Auth
-
- ```typescript
- import { AgentGov, AgentGovAPIError } from "@agentgov/sdk";
-
- try {
-   const trace = await ag.trace({ name: "My Trace" });
- } catch (error) {
-   if (error instanceof AgentGovAPIError) {
-     if (error.statusCode === 401) {
-       console.error("Invalid API key");
-     } else if (error.statusCode === 403) {
-       console.error("Access denied - check API key permissions");
-     } else if (error.statusCode === 429) {
-       console.error("Rate limit exceeded");
-     }
-   }
- }
- ```
+ > Requires Node.js >= 18
 
  ## Quick Start
 
- ### OpenAI Integration
+ ### OpenAI
 
  ```typescript
  import { AgentGov } from "@agentgov/sdk";
@@ -80,28 +28,16 @@ const ag = new AgentGov({
   projectId: process.env.AGENTGOV_PROJECT_ID!,
 });
 
- // Wrap your OpenAI client
 const openai = ag.wrapOpenAI(new OpenAI());
 
- // All calls are automatically traced - including streaming!
+ // All calls are automatically traced including streaming and tool calls
 const response = await openai.chat.completions.create({
   model: "gpt-4o",
   messages: [{ role: "user", content: "Hello!" }],
 });
-
- // Streaming also works
- const stream = await openai.chat.completions.create({
-   model: "gpt-4o",
-   messages: [{ role: "user", content: "Write a poem" }],
-   stream: true,
- });
-
- for await (const chunk of stream) {
-   process.stdout.write(chunk.choices[0]?.delta?.content || "");
- }
  ```
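The one-line comment above also covers streaming; the streaming snippet dropped in this revision still shows the pattern. A minimal sketch, following the removed example (same wrapped client, standard OpenAI streaming API):

```typescript
// Streaming through the wrapped client: chunks arrive as usual, and the wrapper records the trace.
const stream = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a poem" }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
```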
 
- ### Vercel AI SDK Integration
+ ### Vercel AI SDK
 
 ```typescript
 import { AgentGov } from "@agentgov/sdk";
@@ -113,295 +49,172 @@ const ag = new AgentGov({
   projectId: process.env.AGENTGOV_PROJECT_ID!,
 });
 
- // Wrap Vercel AI SDK functions
 const tracedGenerateText = ag.wrapGenerateText(generateText);
 const tracedStreamText = ag.wrapStreamText(streamText);
 
- // Use them like normal
 const { text } = await tracedGenerateText({
   model: openai("gpt-4o"),
   prompt: "Hello!",
 });
-
- // Streaming
- const { textStream } = await tracedStreamText({
-   model: openai("gpt-4o"),
-   prompt: "Write a story",
- });
-
- for await (const chunk of textStream) {
-   process.stdout.write(chunk);
- }
  ```
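Streaming through `tracedStreamText` works the same way; a brief sketch based on the example removed above:

```typescript
// Consuming textStream from the wrapped streamText call; the trace is captured by the wrapper.
const { textStream } = await tracedStreamText({
  model: openai("gpt-4o"),
  prompt: "Write a story",
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
```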
 
 ### Manual Tracing
 
 ```typescript
- import { AgentGov } from "@agentgov/sdk";
-
- const ag = new AgentGov({
-   apiKey: "ag_xxx",
-   projectId: "your-project-id",
- });
-
- // Using withTrace helper (recommended)
 const result = await ag.withTrace({ name: "My Agent Pipeline" }, async () => {
-   // Nested spans
   const docs = await ag.withSpan(
     { name: "Retrieve Documents", type: "RETRIEVAL" },
-     async () => {
-       return ["doc1", "doc2"];
-     }
+     async () => fetchDocs()
   );
 
   const response = await ag.withSpan(
     { name: "Generate Response", type: "LLM_CALL", model: "gpt-4o" },
-     async (span) => {
-       // Update span with metrics
-       await ag.endSpan(span.id, {
-         promptTokens: 150,
-         outputTokens: 50,
-         cost: 0.01,
-       });
-       return { content: "Hello!" };
-     }
+     async () => generateResponse(docs)
   );
 
   return response;
 });
  ```
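The previous revision of this example also attached token and cost metrics inside the span callback, which still illustrates how `endSpan` accepts usage fields. A sketch adapted from those removed lines (`generateResponse` is the same placeholder used above):

```typescript
// Recording usage metrics on a span by ending it explicitly inside the callback.
const response = await ag.withSpan(
  { name: "Generate Response", type: "LLM_CALL", model: "gpt-4o" },
  async (span) => {
    const result = await generateResponse(docs);
    await ag.endSpan(span.id, {
      promptTokens: 150,
      outputTokens: 50,
      cost: 0.01,
    });
    return result;
  }
);
```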
 
- ### High-Throughput Batching
+ ## Authentication
+
+ Get your API key from the AgentGov dashboard (**Settings > API Keys**).
+
+ Keys use the format `ag_live_xxx` (production) or `ag_test_xxx` (testing).
 
 ```typescript
 const ag = new AgentGov({
-   apiKey: "ag_xxx",
-   projectId: "xxx",
-   batchSize: 10, // Flush after 10 items
-   flushInterval: 5000, // Or after 5 seconds
- });
-
- // Queue items (don't await immediately)
- const tracePromise = ag.queueTrace({ name: "Batch Trace" });
- const spanPromise = ag.queueSpan({
-   traceId: "...",
-   name: "Batch Span",
-   type: "CUSTOM",
+   apiKey: "ag_live_xxxxxxxxxxxx",
+   projectId: "your-project-id",
 });
-
- // Force flush when needed
- await ag.flush();
-
- // Or shutdown gracefully
- await ag.shutdown();
  ```
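The earlier revision read credentials from the environment rather than hard-coding them, which remains the safer pattern in application code. A sketch using the same environment variable names as the removed example:

```typescript
// Reading credentials from the environment instead of hard-coding the key.
const ag = new AgentGov({
  apiKey: process.env.AGENTGOV_API_KEY!, // ag_live_xxx or ag_test_xxx
  projectId: process.env.AGENTGOV_PROJECT_ID!,
});
```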
 
 ## Configuration
 
 ```typescript
- interface AgentGovConfig {
-   /** API key from AgentGov dashboard (ag_xxx) */
-   apiKey: string;
-
-   /** Project ID */
-   projectId: string;
-
-   /** API base URL (default: https://api.agentgov.co) */
-   baseUrl?: string;
-
-   /** Enable debug logging */
-   debug?: boolean;
-
-   /** Flush interval in ms (default: 5000) */
-   flushInterval?: number;
-
-   /** Max batch size before auto-flush (default: 10) */
-   batchSize?: number;
-
-   /** Max retry attempts for failed API requests (default: 3) */
-   maxRetries?: number;
+ const ag = new AgentGov({
+   apiKey: string, // Required. API key from dashboard
+   projectId: string, // Required. Project ID
+   baseUrl: string, // Default: "https://api.agentgov.co"
+   debug: boolean, // Default: false
+   flushInterval: number, // Default: 5000 (ms)
+   batchSize: number, // Default: 10
+   maxRetries: number, // Default: 3
+   retryDelay: number, // Default: 1000 (ms)
+   timeout: number, // Default: 30000 (ms)
+   onError: (error, ctx) => void, // Optional error callback
+ });
+ ```
 
-   /** Base delay in ms for exponential backoff (default: 1000) */
-   retryDelay?: number;
+ ## Wrapper Options
 
-   /** Request timeout in ms (default: 30000) */
-   timeout?: number;
+ Both OpenAI and Vercel AI wrappers accept options:
 
-   /** Callback for batch flush errors (optional) */
-   onError?: (error: Error, context: { operation: string; itemCount?: number }) => void;
- }
+ ```typescript
+ const openai = ag.wrapOpenAI(new OpenAI(), {
+   traceNamePrefix: "my-agent",
+   autoTrace: true,
+   captureInput: true,
+   captureOutput: true,
+   traceToolCalls: true,
+ });
 ```
 
- ### Error Callback
+ ## Batching
 
- Handle batch flush errors with the `onError` callback:
+ For high-throughput scenarios:
 
 ```typescript
 const ag = new AgentGov({
   apiKey: "ag_xxx",
   projectId: "xxx",
-   onError: (error, context) => {
-     console.error(`[AgentGov] ${context.operation} failed:`, error.message);
-     // Send to your error tracking service
-     Sentry.captureException(error, { extra: context });
-   },
+   batchSize: 10,
+   flushInterval: 5000,
 });
- ```
 
- By default, errors during batch flush are:
- - Logged to console in `debug` mode
- - Silently dropped in production (to not affect your app)
+ ag.queueTrace({ name: "Batch Trace" });
+ ag.queueSpan({ traceId: "...", name: "Batch Span", type: "CUSTOM" });
+
+ await ag.flush(); // Force flush
+ await ag.shutdown(); // Flush and cleanup
+ ```
 
 ## Error Handling
 
- The SDK includes built-in retry logic with exponential backoff:
+ Built-in retry with exponential backoff. Retries on `429`, `408`, and `5xx`. No retries on `400`, `401`, `403`, `404`.
 
 ```typescript
 import { AgentGov, AgentGovAPIError } from "@agentgov/sdk";
 
- const ag = new AgentGov({
-   apiKey: "ag_xxx",
-   projectId: "xxx",
-   maxRetries: 3, // Retry up to 3 times
-   retryDelay: 1000, // Start with 1s delay
-   timeout: 30000, // 30s request timeout
- });
-
 try {
   const trace = await ag.trace({ name: "My Trace" });
 } catch (error) {
   if (error instanceof AgentGovAPIError) {
-     console.log(`Status: ${error.statusCode}`);
-     console.log(`Retryable: ${error.retryable}`);
+     console.log(error.statusCode, error.retryable);
   }
 }
  ```
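Failures during batched flushes go through the `onError` callback listed under Configuration; the example removed from the previous revision still shows the idea. A sketch (forwarding to an external error tracker is left as a comment):

```typescript
// Routing batch-flush errors to your own handler via the onError option.
const ag = new AgentGov({
  apiKey: "ag_xxx",
  projectId: "xxx",
  onError: (error, context) => {
    console.error(`[AgentGov] ${context.operation} failed:`, error.message);
    // e.g. forward to your error tracking service here
  },
});
```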
 
- **Automatic retries for:**
-
- - `429` - Rate limited (respects `Retry-After` header)
- - `408` - Request timeout
- - `5xx` - Server errors
-
- **No retries for:**
-
- - `400` - Bad request
- - `401` - Unauthorized
- - `403` - Forbidden
- - `404` - Not found
-
- ## Wrapper Options
-
- ### OpenAI Options
-
- ```typescript
- const openai = ag.wrapOpenAI(new OpenAI(), {
-   traceNamePrefix: "my-agent", // Custom trace name prefix
-   autoTrace: true, // Auto-create trace for each call
-   captureInput: true, // Include prompts in trace
-   captureOutput: true, // Include responses in trace
-   traceToolCalls: true, // Create spans for tool calls
- });
- ```
-
- ### Vercel AI Options
-
- ```typescript
- const tracedFn = ag.wrapGenerateText(generateText, {
-   traceNamePrefix: "vercel-ai",
-   autoTrace: true,
-   captureInput: true,
-   captureOutput: true,
-   traceToolCalls: true,
- });
- ```
-
 ## Span Types
 
- | Type | Description |
- | ------------ | ------------------------------------- |
- | `LLM_CALL` | Call to LLM (OpenAI, Anthropic, etc.) |
- | `TOOL_CALL` | Tool/function execution |
- | `AGENT_STEP` | High-level agent step |
- | `RETRIEVAL` | RAG retrieval |
- | `EMBEDDING` | Embedding generation |
- | `CUSTOM` | Custom span type |
+ | Type | Description |
+ | ------------ | ------------------------- |
+ | `LLM_CALL` | LLM API call |
+ | `TOOL_CALL` | Tool/function execution |
+ | `AGENT_STEP` | High-level agent step |
+ | `RETRIEVAL` | RAG document retrieval |
+ | `EMBEDDING` | Embedding generation |
+ | `CUSTOM` | Custom span |
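Span types are passed when creating spans manually, either through `withSpan` (see Manual Tracing) or the lower-level `span()` / `endSpan()` pair from the API reference. A minimal sketch of the latter; the exact input fields beyond `name`, `type`, and `traceId` are assumptions, and `fetchDocs` is the same placeholder used earlier:

```typescript
// Creating and ending a RETRIEVAL span directly, without the withSpan helper.
const trace = await ag.trace({ name: "RAG Query" });
const span = await ag.span({ traceId: trace.id, name: "Vector Search", type: "RETRIEVAL" });

const docs = await fetchDocs();

await ag.endSpan(span.id);
await ag.endTrace(trace.id);
```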
 
 ## Cost Estimation
 
- Built-in pricing for common models:
+ Built-in pricing for OpenAI (GPT-5, GPT-4, o-series) and Anthropic (Claude 4, 3.5, 3) models:
 
 ```typescript
 import { estimateCost } from "@agentgov/sdk";
 
- const cost = estimateCost("gpt-4o", 1000, 500);
- // Returns: 0.0075 (USD)
+ estimateCost("gpt-4o", 1000, 500); // 0.0075 (USD)
  ```
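The example figure is consistent with per-million-token pricing; assuming gpt-4o's published rates of $2.50 per 1M input tokens and $10.00 per 1M output tokens, the arithmetic checks out:

```typescript
// Worked check of the example above, assuming $2.50 / 1M input and $10.00 / 1M output tokens.
const inputCost = (1000 / 1_000_000) * 2.5;  // 0.0025 USD
const outputCost = (500 / 1_000_000) * 10.0; // 0.0050 USD
const total = inputCost + outputCost;        // 0.0075 USD
```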
 
- **Supported models (January 2026):**
-
- - OpenAI GPT-5: gpt-5.2, gpt-5.2-pro, gpt-5
- - OpenAI GPT-4: gpt-4.1, gpt-4.1-mini, gpt-4o, gpt-4o-mini
- - OpenAI o-Series: o4-mini, o3-pro, o3, o3-mini, o1, o1-mini
- - OpenAI Legacy: gpt-4-turbo, gpt-4, gpt-3.5-turbo
- - Anthropic: claude-sonnet-4, claude-3.5-sonnet, claude-3.5-haiku, claude-3-opus, claude-3-sonnet, claude-3-haiku
- - Embeddings: text-embedding-3-small, text-embedding-3-large, text-embedding-ada-002
-
 ## API Reference
 
- ### AgentGov Class
-
- | Method | Description |
- | ------------------------- | ---------------------------------------- |
- | `wrapOpenAI(client)` | Wrap OpenAI client for auto-tracing |
- | `wrapGenerateText(fn)` | Wrap Vercel AI generateText |
- | `wrapStreamText(fn)` | Wrap Vercel AI streamText |
- | `wrapGenerateObject(fn)` | Wrap Vercel AI generateObject |
- | `wrapEmbed(fn)` | Wrap Vercel AI embed |
- | `wrapEmbedMany(fn)` | Wrap Vercel AI embedMany |
- | `trace(input)` | Create a new trace |
- | `endTrace(id, update)` | End a trace |
- | `span(input)` | Create a span |
- | `endSpan(id, update)` | End a span |
- | `withTrace(input, fn)` | Execute function within trace context |
- | `withSpan(input, fn)` | Execute function within span context |
- | `queueTrace(input)` | Queue trace creation (batched) |
- | `queueSpan(input)` | Queue span creation (batched) |
- | `flush()` | Force flush queued items |
- | `shutdown()` | Flush and cleanup |
- | `getContext()` | Get current trace context |
- | `setContext(ctx)` | Set trace context (distributed tracing) |
- | `getTrace(id)` | Fetch trace by ID |
- | `getSpan(id)` | Fetch span by ID |
+ | Method | Description |
+ | --- | --- |
+ | `wrapOpenAI(client, opts?)` | Auto-trace OpenAI calls |
+ | `wrapGenerateText(fn, opts?)` | Wrap Vercel AI `generateText` |
+ | `wrapStreamText(fn, opts?)` | Wrap Vercel AI `streamText` |
+ | `wrapGenerateObject(fn, opts?)` | Wrap Vercel AI `generateObject` |
+ | `wrapEmbed(fn, opts?)` | Wrap Vercel AI `embed` |
+ | `wrapEmbedMany(fn, opts?)` | Wrap Vercel AI `embedMany` |
+ | `trace(input?)` | Create a trace |
+ | `endTrace(id, update?)` | End a trace |
+ | `span(input)` | Create a span |
+ | `endSpan(id, update?)` | End a span |
+ | `withTrace(input, fn)` | Run function within trace context |
+ | `withSpan(input, fn)` | Run function within span context |
+ | `queueTrace(input)` | Queue trace (batched) |
+ | `queueSpan(input)` | Queue span (batched) |
+ | `flush()` | Force flush queued items |
+ | `shutdown()` | Flush and cleanup |
+ | `getContext()` | Get current trace context |
+ | `setContext(ctx)` | Set trace context |
+ | `getTrace(id)` | Fetch trace by ID |
+ | `getSpan(id)` | Fetch span by ID |
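The removed version of this table noted that `setContext` is meant for distributed tracing. A rough sketch of that pattern; the transport between processes (`enqueueJob`, `handleJob`, `payload`) is hypothetical and the exact `TraceContext` shape is not documented here:

```typescript
// Propagating a trace across a process boundary (transport is up to you).
const ctx = ag.getContext();        // capture the current trace context
await enqueueJob({ ctx, payload }); // hypothetical handoff to a worker

// ...in the worker process, with its own AgentGov instance:
ag.setContext(ctx);                 // re-attach the incoming context
await ag.withSpan({ name: "Process Job", type: "AGENT_STEP" }, () => handleJob(payload));
```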
 
 ## TypeScript
 
- Full TypeScript support:
-
 ```typescript
 import type {
   AgentGovConfig,
-   Trace,
-   Span,
-   SpanType,
-   TraceContext,
-   WrapOpenAIOptions,
-   WrapVercelAIOptions,
+   Trace, TraceInput, TraceStatus, TraceContext,
+   Span, SpanInput, SpanUpdate, SpanStatus, SpanType,
+   WrapOpenAIOptions, WrapVercelAIOptions,
 } from "@agentgov/sdk";
 ```
 
 ## Examples
 
- See the [examples](../../examples) directory:
-
- - `openai-example.ts` — Basic OpenAI integration
- - `streaming-example.ts` — Streaming responses
- - `vercel-ai-example.ts` — Vercel AI SDK integration
- - `manual-tracing.ts` — Manual span creation
-
- ## Documentation
-
- [docs.agentgov.co](https://docs.agentgov.co)
+ See the [examples](https://github.com/agentgov-co/agentgov/tree/main/examples) directory.
 
 ## License
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@agentgov/sdk",
-   "version": "0.1.1",
+   "version": "0.1.2",
   "description": "AgentGov SDK - AI agent observability and EU AI Act compliance",
   "type": "module",
   "private": false,
@@ -38,7 +38,7 @@
   },
   "repository": {
     "type": "git",
-     "url": "https://github.com/amadc2207/agentgov"
+     "url": "https://github.com/agentgov-co/agentgov"
   },
   "homepage": "https://agentgov.co",
   "keywords": [