@runtypelabs/a2a-aisdk-example 0.0.1 → 0.2.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED

MIT License

Copyright (c) 2025 Runtype Labs

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

package/README.md ADDED

# @runtypelabs/a2a-aisdk-example

Reference A2A agent implementation with **deterministic time tools** and LLM-powered chat.

## Why time tools?

LLMs can't reliably:

- Tell you what day "next Tuesday" is
- Calculate dates ("30 days from now")
- Convert timezones correctly
- Know what time it is

This agent includes deterministic time skills that compute answers from the system clock — no generation, no hallucination. Other agents can call these tools via A2A to get reliable temporal data.

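The day-of-week skill, for instance, can be derived entirely from the date value. A minimal sketch of the idea (not the package's actual implementation), using `Intl.DateTimeFormat`:

```typescript
// Derive the weekday from the date value itself: computed, never generated.
// Sketch only; the package's real skill may differ in details.
function dayOfWeek(isoDate: string, timeZone = 'UTC'): string {
  return new Intl.DateTimeFormat('en-US', { weekday: 'long', timeZone }).format(
    new Date(`${isoDate}T00:00:00Z`)
  )
}

dayOfWeek('2025-02-04') // 'Tuesday'
```

The answer is the same on every run, which is exactly what a calling agent needs from temporal data.
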
## Quick Start

### Run an A2A Server (Echo Mode - for testing)

```bash
# Using CLI
npx @runtypelabs/a2a-aisdk-example serve --echo
```

This starts a server at:

- Agent Card: `http://localhost:9999/.well-known/agent.json`
- A2A Endpoint: `http://localhost:9999/a2a`

### Test a time skill

```bash
curl -X POST http://localhost:9999/a2a \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": "1",
    "method": "tasks/send",
    "params": {
      "skill": "time/day_of_week",
      "message": {
        "role": "user",
        "parts": [{"type": "data", "data": {"date": "2025-02-04"}}]
      }
    }
  }'
# Returns: { "result": { "day": "Tuesday", ... }, "computed": { "method": "deterministic" } }
```

### Run with LLM

```bash
# OpenAI
OPENAI_API_KEY=sk-xxx npx @runtypelabs/a2a-aisdk-example serve

# Anthropic
ANTHROPIC_API_KEY=sk-xxx npx @runtypelabs/a2a-aisdk-example serve --provider anthropic --model claude-3-haiku-20240307
```

### Test an A2A Endpoint

The server must be running first. Use two terminals:

**Terminal 1** — start the server:

```bash
# Echo mode (no API key needed)
npx @runtypelabs/a2a-aisdk-example serve --echo

# Or with LLM (requires OPENAI_API_KEY or ANTHROPIC_API_KEY)
OPENAI_API_KEY=sk-xxx npx @runtypelabs/a2a-aisdk-example serve
```

**Terminal 2** — run the test:

```bash
# Test echo (works with echo mode)
npx @runtypelabs/a2a-aisdk-example test http://localhost:9999

# Test with streaming (chat requires LLM mode + API key)
npx @runtypelabs/a2a-aisdk-example test http://localhost:9999 --stream --skill chat --message "What is AI?"
```

> Run `npx @runtypelabs/a2a-aisdk-example --help` to see all commands and options.

### Test Runtype A2A Surface

```bash
npx @runtypelabs/a2a-aisdk-example test-runtype \
  --product-id prod_xxx \
  --surface-id surf_xxx \
  --api-key a2a_xxx \
  --environment local \
  --message "Hello!"
```

## Available Skills

### Time Tools (Deterministic)

| Skill                | Description                           |
| -------------------- | ------------------------------------- |
| `time/now`           | Current time with timezone            |
| `time/parse`         | Parse "next Tuesday 3pm" to timestamp |
| `time/convert`       | Convert between timezones             |
| `time/add`           | Add days/weeks/months to a date       |
| `time/diff`          | Duration between two dates            |
| `time/day_of_week`   | What day is this date?                |
| `time/is_past`       | Is this timestamp in the past?        |
| `time/business_days` | Add/subtract business days            |

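For a sense of what this class of skill computes under the hood, here is a minimal sketch of business-day arithmetic (an illustration, not the package's implementation): step one calendar day at a time and count only Monday through Friday.

```typescript
// Hypothetical sketch of time/business_days-style arithmetic.
function addBusinessDays(start: Date, days: number): Date {
  const d = new Date(start)
  const step = days < 0 ? -1 : 1
  let remaining = Math.abs(days)
  while (remaining > 0) {
    d.setUTCDate(d.getUTCDate() + step)
    const dow = d.getUTCDay() // 0 = Sunday, 6 = Saturday
    if (dow !== 0 && dow !== 6) remaining--
  }
  return d
}

// Friday 2025-02-07 plus 1 business day lands on Monday 2025-02-10.
```

A real implementation would also account for holidays, but the point stands: the result is arithmetic, not generation.
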
### General (LLM-Powered)

| Skill     | Description          |
| --------- | -------------------- |
| `chat`    | Conversational AI    |
| `analyze` | Content analysis     |
| `echo`    | Echo input (testing) |

`chat` can invoke skills tagged with `tool` (for example, the deterministic `time/*` skills) through AI SDK tool calling.

Time tools return structured responses with a `computed.method: "deterministic"` field and `usage: "Use this value directly. Do not recalculate."` guidance for calling agents.

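Put together, a calling agent might receive a payload shaped like this (a sketch: only `computed.method` and `usage` are documented above; the surrounding field names are illustrative):

```typescript
// Illustrative shape of a deterministic time-skill response.
interface TimeSkillResult {
  result: Record<string, unknown> // e.g. { day: 'Tuesday' }
  computed: { method: 'deterministic' }
  usage: string
}

const example: TimeSkillResult = {
  result: { day: 'Tuesday' },
  computed: { method: 'deterministic' },
  usage: 'Use this value directly. Do not recalculate.',
}
```
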
Example prompt that triggers tool use from `chat`:

```bash
npx @runtypelabs/a2a-aisdk-example test http://localhost:9999 \
  --skill chat \
  --message "What day of the week is 2026-02-09 in UTC?"
```

## Connecting to Runtype

### As an External Agent

1. Start the A2A server:

   ```bash
   npx @runtypelabs/a2a-aisdk-example serve --echo --port 9999
   ```

2. In Runtype Dashboard:
   - Go to your Product
   - Click "Add Capability" > "Connect External"
   - Enter:
     - Agent Card URL: `http://localhost:9999/.well-known/agent.json`
     - A2A Endpoint URL: `http://localhost:9999/a2a`
   - Click "Connect & Add"

### Testing Runtype's A2A Surface

1. Create an A2A Surface in Runtype Dashboard
2. Add capabilities (flows) to the surface
3. Generate an API key for the surface
4. Test with the CLI:

   ```bash
   npx @runtypelabs/a2a-aisdk-example test-runtype \
     --product-id prod_xxx \
     --surface-id surf_xxx \
     --api-key a2a_xxx \
     --environment local
   ```

## Programmatic Usage

### Create a Server

```typescript
import { createA2AServer } from '@runtypelabs/a2a-aisdk-example'

const server = createA2AServer({
  config: {
    name: 'My Agent',
    description: 'A helpful AI assistant',
    port: 9999,
  },
  llmConfig: {
    provider: 'openai',
    model: 'gpt-4o-mini',
    temperature: 0.7,
  },
})

await server.start()

// Graceful shutdown
process.on('SIGINT', async () => {
  await server.stop()
})
```

### Create a Client

```typescript
import { A2AClient } from '@runtypelabs/a2a-aisdk-example'

const client = new A2AClient({
  baseUrl: 'http://localhost:9999',
})

// Get agent card
const agentCard = await client.getAgentCard()
console.log(
  'Skills:',
  agentCard.skills.map((s) => s.name)
)

// Send a task
const task = await client.sendTask({
  skill: 'chat',
  message: {
    role: 'user',
    parts: [{ type: 'text', text: 'Hello!' }],
  },
})

console.log('Response:', task.artifacts?.[0]?.parts?.[0]?.text)
```

### Test Runtype Surface

```typescript
import { createRuntypeA2AClient } from '@runtypelabs/a2a-aisdk-example'

const client = createRuntypeA2AClient({
  productId: 'prod_xxx',
  surfaceId: 'surf_xxx',
  apiKey: 'a2a_xxx',
  environment: 'local', // or 'staging', 'production'
})

// Send streaming task
await client.sendTaskStreaming(
  {
    skill: 'my-capability',
    message: {
      role: 'user',
      parts: [{ type: 'text', text: 'Analyze this data...' }],
    },
  },
  {
    onChunk: (text) => process.stdout.write(text),
    onStatus: (status) => console.log('Status:', status),
  }
)
```

## CLI Reference

Run `npx @runtypelabs/a2a-aisdk-example --help` for all commands and options.

### `serve` - Start A2A Server

```
Usage: a2a-aisdk-example serve [options]

Options:
  -p, --port <port>        Port to listen on (default: "9999")
  -h, --host <host>        Host to bind to (default: "localhost")
  -n, --name <name>        Agent name (default: "Example A2A Agent")
  --echo                   Run in echo mode (no LLM, for testing)
  --provider <provider>    LLM provider: openai, anthropic (default: "openai")
  --model <model>          LLM model (default: "gpt-4o-mini")
  --temperature <temp>     LLM temperature (default: "0.7")
```

### `test` - Test A2A Endpoint

```
Usage: a2a-aisdk-example test [options] <url>

Arguments:
  url                   Base URL of the A2A endpoint

Options:
  -s, --skill <skill>   Skill to test (default: "echo")
  -m, --message <msg>   Message to send (default: "Hello from A2A client!")
  --stream              Use streaming mode
  -k, --api-key <key>   API key for authentication
```

### `test-runtype` - Test Runtype A2A Surface

```
Usage: a2a-aisdk-example test-runtype [options]

Options:
  --product-id <id>     Runtype product ID (required)
  --surface-id <id>     Runtype surface ID (required)
  --api-key <key>       A2A API key (required)
  -e, --environment     Environment: production, staging, local (default: "local")
  -s, --skill <skill>   Skill/capability to test
  -m, --message <msg>   Message to send (default: "Hello from A2A client!")
  --stream              Use streaming mode
```

## A2A Protocol

This package implements [A2A Protocol v0.3](https://a2aproject.github.io/A2A/specification/).

### Endpoints

- `GET /.well-known/agent.json` - Agent Card discovery
- `POST /a2a` - JSON-RPC endpoint

### Supported Methods

- `tasks/send` - Create and execute a task (synchronous)
- `tasks/sendSubscribe` - Create and execute a task with SSE streaming
- `tasks/get` - Get task status
- `tasks/cancel` - Cancel a running task
- `ping` - Health check

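All of these methods ride on a plain JSON-RPC 2.0 envelope, so a request body takes only a few lines to build (a sketch; the envelope follows the JSON-RPC spec, but the helper name is ours):

```typescript
// Build a JSON-RPC 2.0 request body for the POST /a2a endpoint.
function rpcRequest(method: string, params?: unknown, id = '1'): string {
  return JSON.stringify({
    jsonrpc: '2.0',
    id,
    method,
    ...(params !== undefined ? { params } : {}),
  })
}

// A health check is just:
rpcRequest('ping') // => '{"jsonrpc":"2.0","id":"1","method":"ping"}'
```

POST the resulting string to `/a2a` with `Content-Type: application/json`, as in the curl example earlier.
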
## Vercel Deployment

Deploy your A2A agent to Vercel for serverless operation.

### Option 1: Deploy the `vercel-app` directory

1. In Vercel dashboard, set **Root Directory** to `vercel-app`
2. Add environment variables:
   - `OPENAI_API_KEY` or `ANTHROPIC_API_KEY`
   - `AGENT_NAME` (optional)
   - `ECHO_MODE=true` for testing without LLM
3. Deploy

### Option 2: Add to Existing Next.js App

Install the package and use the Vercel handlers:

```typescript
// app/api/a2a/route.ts
import { createA2AHandler } from '@runtypelabs/a2a-aisdk-example/vercel'

export const POST = createA2AHandler({
  name: 'My Agent',
  llmConfig: { provider: 'openai', model: 'gpt-4o-mini' },
})

// app/.well-known/agent.json/route.ts
import { createAgentCardHandler } from '@runtypelabs/a2a-aisdk-example/vercel'

export const GET = createAgentCardHandler({
  name: 'My Agent',
  llmConfig: { provider: 'openai', model: 'gpt-4o-mini' },
})
```

### Serverless Limitations

Since Vercel functions are stateless:

- `tasks/get` returns "not available" (no task storage)
- `tasks/cancel` returns "not available" (can't cancel in-flight tasks)
- Use `tasks/sendSubscribe` for streaming responses

## Development

```bash
# Build
pnpm build

# Development mode (watch)
pnpm dev

# Type check
pnpm typecheck

# Clean
pnpm clean
```

## License

MIT