@runtypelabs/a2a-aisdk-example 0.0.1 → 0.2.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 Runtype Labs
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,357 @@
+ # @runtypelabs/a2a-agent-example
+
+ Reference A2A agent implementation with **deterministic time tools** and LLM-powered chat.
+
+ ## Why time tools?
+
+ LLMs can't reliably:
+
+ - Tell you what day "next Tuesday" is
+ - Calculate dates ("30 days from now")
+ - Convert timezones correctly
+ - Know what time it is
+
+ This agent includes deterministic time skills that compute answers from the system clock — no generation, no hallucination. Other agents can call these tools via A2A to get reliable temporal data.
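Not part of the package, just a minimal sketch of the idea: a day-of-week answer can be computed deterministically with nothing more than the built-in `Date` API, so there is nothing for a model to hallucinate.

```typescript
// Illustrative only: compute day-of-week from the Date API instead of
// asking an LLM to generate it.
const DAYS = [
  'Sunday', 'Monday', 'Tuesday', 'Wednesday',
  'Thursday', 'Friday', 'Saturday',
] as const

function dayOfWeek(isoDate: string): string {
  // Parse as UTC midnight so the result is independent of the host timezone.
  const d = new Date(`${isoDate}T00:00:00Z`)
  return DAYS[d.getUTCDay()]
}

console.log(dayOfWeek('2025-02-04')) // "Tuesday"
```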
+
+ ## Quick Start
+
+ ### Run an A2A Server (Echo Mode - for testing)
+
+ ```bash
+ # Using CLI
+ npx a2a-agent-example serve --echo
+ ```
+
+ This starts a server at:
+
+ - Agent Card: `http://localhost:9999/.well-known/agent.json`
+ - A2A Endpoint: `http://localhost:9999/a2a`
+
+ ### Test a time skill
+
+ ```bash
+ curl -X POST http://localhost:9999/a2a \
+   -H "Content-Type: application/json" \
+   -d '{
+     "jsonrpc": "2.0",
+     "id": "1",
+     "method": "tasks/send",
+     "params": {
+       "skill": "time/day_of_week",
+       "message": {
+         "role": "user",
+         "parts": [{"type": "data", "data": {"date": "2025-02-04"}}]
+       }
+     }
+   }'
+ # Returns: { "result": { "day": "Tuesday", ... }, "computed": { "method": "deterministic" } }
+ ```
+
+ ### Run with LLM
+
+ ```bash
+ # OpenAI
+ OPENAI_API_KEY=sk-xxx npx a2a-agent-example serve
+
+ # Anthropic
+ ANTHROPIC_API_KEY=sk-xxx npx a2a-agent-example serve --provider anthropic --model claude-3-haiku-20240307
+ ```
+
+ ### Test an A2A Endpoint
+
+ ```bash
+ # Test local server
+ npx a2a-agent-example test http://localhost:9999
+
+ # Test with streaming
+ npx a2a-agent-example test http://localhost:9999 --stream --skill chat --message "What is AI?"
+ ```
+
+ ### Test Runtype A2A Surface
+
+ ```bash
+ npx a2a-agent-example test-runtype \
+   --product-id prod_xxx \
+   --surface-id surf_xxx \
+   --api-key a2a_xxx \
+   --environment local \
+   --message "Hello!"
+ ```
+
+ ## Available Skills
+
+ ### Time Tools (Deterministic)
+
+ | Skill                | Description                           |
+ | -------------------- | ------------------------------------- |
+ | `time/now`           | Current time with timezone            |
+ | `time/parse`         | Parse "next Tuesday 3pm" to timestamp |
+ | `time/convert`       | Convert between timezones             |
+ | `time/add`           | Add days/weeks/months to a date       |
+ | `time/diff`          | Duration between two dates            |
+ | `time/day_of_week`   | What day is this date?                |
+ | `time/is_past`       | Is this timestamp in the past?        |
+ | `time/business_days` | Add/subtract business days            |
+
+ ### General (LLM-Powered)
+
+ | Skill     | Description          |
+ | --------- | -------------------- |
+ | `chat`    | Conversational AI    |
+ | `analyze` | Content analysis     |
+ | `echo`    | Echo input (testing) |
+
+ `chat` can invoke skills tagged with `tool` (for example the deterministic `time/*` skills) through AI SDK tool calling.
+
+ Time tools return structured responses with a `computed.method: "deterministic"` field and `usage: "Use this value directly. Do not recalculate."` guidance for calling agents.
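A sketch of that response shape in TypeScript (field names are taken from the examples in this README; the package's actual exported types may differ):

```typescript
// Hypothetical type, inferred from the README's curl example and the
// computed.method / usage fields described above.
interface DeterministicResult {
  result: Record<string, unknown>
  computed: { method: 'deterministic' }
  usage: string
}

const example: DeterministicResult = {
  result: { day: 'Tuesday' },
  computed: { method: 'deterministic' },
  usage: 'Use this value directly. Do not recalculate.',
}

console.log(example.computed.method) // "deterministic"
```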
+
+ Example prompt that triggers tool use from `chat`:
+
+ ```bash
+ npx a2a-agent-example test http://localhost:9999 \
+   --skill chat \
+   --message "What day of the week is 2026-02-09 in UTC?"
+ ```
+
+ ## Connecting to Runtype
+
+ ### As an External Agent
+
+ 1. Start the A2A server:
+
+    ```bash
+    npx a2a-agent-example serve --echo --port 9999
+    ```
+
+ 2. In Runtype Dashboard:
+    - Go to your Product
+    - Click "Add Capability" > "Connect External"
+    - Enter:
+      - Agent Card URL: `http://localhost:9999/.well-known/agent.json`
+      - A2A Endpoint URL: `http://localhost:9999/a2a`
+    - Click "Connect & Add"
+
+ ### Testing Runtype's A2A Surface
+
+ 1. Create an A2A Surface in Runtype Dashboard
+ 2. Add capabilities (flows) to the surface
+ 3. Generate an API key for the surface
+ 4. Test with the CLI:
+
+    ```bash
+    npx a2a-agent-example test-runtype \
+      --product-id prod_xxx \
+      --surface-id surf_xxx \
+      --api-key a2a_xxx \
+      --environment local
+    ```
+
+ ## Programmatic Usage
+
+ ### Create a Server
+
+ ```typescript
+ import { createA2AServer } from '@runtypelabs/a2a-agent-example'
+
+ const server = createA2AServer({
+   config: {
+     name: 'My Agent',
+     description: 'A helpful AI assistant',
+     port: 9999,
+   },
+   llmConfig: {
+     provider: 'openai',
+     model: 'gpt-4o-mini',
+     temperature: 0.7,
+   },
+ })
+
+ await server.start()
+
+ // Graceful shutdown
+ process.on('SIGINT', async () => {
+   await server.stop()
+ })
+ ```
+
+ ### Create a Client
+
+ ```typescript
+ import { A2AClient } from '@runtypelabs/a2a-agent-example'
+
+ const client = new A2AClient({
+   baseUrl: 'http://localhost:9999',
+ })
+
+ // Get agent card
+ const agentCard = await client.getAgentCard()
+ console.log(
+   'Skills:',
+   agentCard.skills.map((s) => s.name)
+ )
+
+ // Send a task
+ const task = await client.sendTask({
+   skill: 'chat',
+   message: {
+     role: 'user',
+     parts: [{ type: 'text', text: 'Hello!' }],
+   },
+ })
+
+ console.log('Response:', task.artifacts?.[0]?.parts?.[0]?.text)
+ ```
+
+ ### Test Runtype Surface
+
+ ```typescript
+ import { createRuntypeA2AClient } from '@runtypelabs/a2a-agent-example'
+
+ const client = createRuntypeA2AClient({
+   productId: 'prod_xxx',
+   surfaceId: 'surf_xxx',
+   apiKey: 'a2a_xxx',
+   environment: 'local', // or 'staging', 'production'
+ })
+
+ // Send streaming task
+ await client.sendTaskStreaming(
+   {
+     skill: 'my-capability',
+     message: {
+       role: 'user',
+       parts: [{ type: 'text', text: 'Analyze this data...' }],
+     },
+   },
+   {
+     onChunk: (text) => process.stdout.write(text),
+     onStatus: (status) => console.log('Status:', status),
+   }
+ )
+ ```
+
+ ## CLI Reference
+
+ ### `serve` - Start A2A Server
+
+ ```
+ Usage: a2a-agent-example serve [options]
+
+ Options:
+   -p, --port <port>       Port to listen on (default: "9999")
+   -h, --host <host>       Host to bind to (default: "localhost")
+   -n, --name <name>       Agent name (default: "Example A2A Agent")
+   --echo                  Run in echo mode (no LLM, for testing)
+   --provider <provider>   LLM provider: openai, anthropic (default: "openai")
+   --model <model>         LLM model (default: "gpt-4o-mini")
+   --temperature <temp>    LLM temperature (default: "0.7")
+ ```
+
+ ### `test` - Test A2A Endpoint
+
+ ```
+ Usage: a2a-agent-example test [options] <url>
+
+ Arguments:
+   url                   Base URL of the A2A endpoint
+
+ Options:
+   -s, --skill <skill>   Skill to test (default: "echo")
+   -m, --message <msg>   Message to send (default: "Hello from A2A client!")
+   --stream              Use streaming mode
+   -k, --api-key <key>   API key for authentication
+ ```
+
+ ### `test-runtype` - Test Runtype A2A Surface
+
+ ```
+ Usage: a2a-agent-example test-runtype [options]
+
+ Options:
+   --product-id <id>     Runtype product ID (required)
+   --surface-id <id>     Runtype surface ID (required)
+   --api-key <key>       A2A API key (required)
+   -e, --environment     Environment: production, staging, local (default: "local")
+   -s, --skill <skill>   Skill/capability to test
+   -m, --message <msg>   Message to send (default: "Hello from A2A client!")
+   --stream              Use streaming mode
+ ```
+
+ ## A2A Protocol
+
+ This package implements [A2A Protocol v0.3](https://a2aproject.github.io/A2A/specification/).
+
+ ### Endpoints
+
+ - `GET /.well-known/agent.json` - Agent Card discovery
+ - `POST /a2a` - JSON-RPC endpoint
+
+ ### Supported Methods
+
+ - `tasks/send` - Create and execute a task (synchronous)
+ - `tasks/sendSubscribe` - Create and execute a task with SSE streaming
+ - `tasks/get` - Get task status
+ - `tasks/cancel` - Cancel a running task
+ - `ping` - Health check
+
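For reference, a minimal `tasks/send` envelope can be built like this (a sketch mirroring the curl example earlier in this README; all values are illustrative):

```typescript
// Sketch: the JSON-RPC 2.0 envelope for `tasks/send`, matching the
// curl example in Quick Start. Values are illustrative only.
const request = {
  jsonrpc: '2.0',
  id: '1',
  method: 'tasks/send',
  params: {
    skill: 'time/day_of_week',
    message: {
      role: 'user',
      parts: [{ type: 'data', data: { date: '2025-02-04' } }],
    },
  },
}

// A client would POST this envelope to the A2A endpoint, e.g.:
// await fetch('http://localhost:9999/a2a', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(request),
// })
```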
+ ## Vercel Deployment
+
+ Deploy your A2A agent to Vercel for serverless operation.
+
+ ### Option 1: Deploy the `vercel-app` directory
+
+ 1. In Vercel dashboard, set **Root Directory** to `vercel-app`
+ 2. Add environment variables:
+    - `OPENAI_API_KEY` or `ANTHROPIC_API_KEY`
+    - `AGENT_NAME` (optional)
+    - `ECHO_MODE=true` for testing without LLM
+ 3. Deploy
+
+ ### Option 2: Add to Existing Next.js App
+
+ Install the package and use the Vercel handlers:
+
+ ```typescript
+ // app/api/a2a/route.ts
+ import { createA2AHandler } from '@runtypelabs/a2a-agent-example/vercel'
+
+ export const POST = createA2AHandler({
+   name: 'My Agent',
+   llmConfig: { provider: 'openai', model: 'gpt-4o-mini' },
+ })
+
+ // app/.well-known/agent.json/route.ts
+ import { createAgentCardHandler } from '@runtypelabs/a2a-agent-example/vercel'
+
+ export const GET = createAgentCardHandler({
+   name: 'My Agent',
+   llmConfig: { provider: 'openai', model: 'gpt-4o-mini' },
+ })
+ ```
+
+ ### Serverless Limitations
+
+ Since Vercel functions are stateless:
+
+ - `tasks/get` returns "not available" (no task storage)
+ - `tasks/cancel` returns "not available" (can't cancel in-flight tasks)
+ - Use `tasks/sendSubscribe` for streaming responses
338
+
339
+ ## Development
340
+
341
+ ```bash
342
+ # Build
343
+ pnpm build
344
+
345
+ # Development mode (watch)
346
+ pnpm dev
347
+
348
+ # Type check
349
+ pnpm typecheck
350
+
351
+ # Clean
352
+ pnpm clean
353
+ ```
354
+
355
+ ## License
356
+
357
+ MIT