@runtypelabs/timely-a2a 0.3.3

package/LICENSE ADDED
MIT License

Copyright (c) 2025 Runtype Labs

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md ADDED

# Timely (@runtypelabs/timely-a2a)

Time-focused A2A agent with **deterministic time tools** and LLM-powered time chat.

## What is Timely?

LLMs can't reliably:

- Tell you what day "next Tuesday" is
- Calculate dates ("30 days from now")
- Convert timezones correctly
- Know what time it is

Timely is an A2A agent that solves this with deterministic time skills that compute answers from the system clock — no generation, no hallucination. The `chat` skill is restricted to time, date, timezone, and scheduling questions, using tool-calling to invoke the deterministic time tools. Other agents can call these tools via A2A to get reliable temporal data.
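
"Deterministic" here means the answer comes from date arithmetic, not from model output. A minimal TypeScript sketch of the idea (illustrative only, not the package's internal implementation):

```typescript
// Deterministic weekday lookup: pure date arithmetic via Intl, no LLM involved.
// The same input always yields the same output on every run.
function dayOfWeek(isoDate: string, timeZone = 'UTC'): string {
  return new Intl.DateTimeFormat('en-US', { weekday: 'long', timeZone }).format(
    new Date(`${isoDate}T00:00:00Z`)
  )
}

console.log(dayOfWeek('2025-02-04')) // "Tuesday"
```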

## Quick Start

### Run an A2A Server (Echo Mode - for testing)

```bash
# Using CLI
npx @runtypelabs/timely-a2a serve --echo
```

This starts a server at:

- Agent Card: `http://localhost:9999/.well-known/agent-card.json`
- A2A Endpoint: `http://localhost:9999/a2a`

### Test a time skill

```bash
curl -X POST http://localhost:9999/a2a \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": "1",
    "method": "message/send",
    "params": {
      "message": {
        "role": "user",
        "parts": [{"data": {"date": "2025-02-04"}}]
      },
      "metadata": { "skill": "time/day_of_week" }
    }
  }'
```

Returns:

```json
{ "result": { "day": "Tuesday", ... }, "computed": { "method": "deterministic" } }
```

### Run with LLM

Uses [Vercel AI Gateway](https://sdk.vercel.ai/docs/ai-sdk-core/gateway) to access any model provider through a single API key.

```bash
# Vercel AI Gateway (recommended — supports any provider)
AI_GATEWAY_API_KEY=xxx npx @runtypelabs/timely-a2a serve
```

The default model is `alibaba/qwen3.5-flash`. Use `--model` to switch:

```bash
AI_GATEWAY_API_KEY=xxx npx @runtypelabs/timely-a2a serve --model openai/gpt-4o-mini
```

<details>
<summary>Direct provider fallback (without AI Gateway)</summary>

```bash
# OpenAI
OPENAI_API_KEY=sk-xxx npx @runtypelabs/timely-a2a serve --model gpt-4o-mini --provider openai

# Anthropic
ANTHROPIC_API_KEY=sk-xxx npx @runtypelabs/timely-a2a serve --model claude-sonnet-4-6 --provider anthropic
```

Direct provider mode only supports the `openai` and `anthropic` providers, and the model name must not contain a `/` prefix.

</details>

### Test an A2A Endpoint

The server must be running first. Use two terminals:

**Terminal 1** — start the server:

```bash
# Echo mode (no API key needed)
npx @runtypelabs/timely-a2a serve --echo
```

```bash
# Or with LLM (requires AI_GATEWAY_API_KEY)
AI_GATEWAY_API_KEY=xxx npx @runtypelabs/timely-a2a serve
```

**Terminal 2** — run the test:

```bash
# Test echo (works with echo mode)
npx @runtypelabs/timely-a2a test http://localhost:9999

# Test with streaming (chat requires LLM mode + API key)
npx @runtypelabs/timely-a2a test http://localhost:9999 --stream --skill chat --message "What time is it in Tokyo?"
```

> Run `npx @runtypelabs/timely-a2a --help` to see all commands and options.

### Test Runtype A2A Surface

```bash
npx @runtypelabs/timely-a2a test-runtype \
  --product-id prod_xxx \
  --surface-id surf_xxx \
  --api-key a2a_xxx \
  --environment local \
  --message "Hello!"
```

## Available Skills

### Time Tools (Deterministic)

| Skill | Description |
| -------------------- | ------------------------------------- |
| `time/now` | Current time with timezone |
| `time/parse` | Parse "next Tuesday 3pm" to timestamp |
| `time/convert` | Convert between timezones |
| `time/add` | Add days/weeks/months to a date |
| `time/diff` | Duration between two dates |
| `time/day_of_week` | What day is this date? |
| `time/is_past` | Is this timestamp in the past? |
| `time/business_days` | Add/subtract business days |

### Time Chat (LLM-Powered)

| Skill | Description |
| ------ | ----------------------------------------------------------------------- |
| `chat` | Conversational assistant for time, date, timezone, and scheduling questions |
| `echo` | Echo input (testing) |

`chat` can invoke skills tagged with `tool` (for example the deterministic `time/*` skills) through AI SDK tool calling. It is restricted to time-related queries and will politely decline off-topic questions.

Time tools return structured responses with a `computed.method: "deterministic"` field and `usage: "Use this value directly. Do not recalculate."` guidance for calling agents.

Example prompt that triggers tool use from `chat`:

```bash
npx @runtypelabs/timely-a2a test http://localhost:9999 \
  --skill chat \
  --message "What day of the week is 2026-02-09 in UTC?"
```
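
The structured response contract can be sketched as a TypeScript type. The package does not document an exported interface for this; the field names below are taken from the description above, and the type name is an assumption:

```typescript
// Hypothetical shape of a deterministic time-tool response (sketch only).
interface DeterministicToolResponse {
  result: Record<string, unknown>       // e.g. { day: 'Tuesday' }
  computed: { method: 'deterministic' } // the value was computed, not generated
  usage: string                         // guidance for calling agents
}

const response: DeterministicToolResponse = {
  result: { day: 'Tuesday' },
  computed: { method: 'deterministic' },
  usage: 'Use this value directly. Do not recalculate.',
}
```

A calling agent can check `computed.method` before trusting the value as ground truth rather than re-deriving it.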

## Connecting to Runtype

### As an External Agent

1. Start the A2A server:

   ```bash
   npx @runtypelabs/timely-a2a serve --echo --port 9999
   ```

2. In Runtype Dashboard:
   - Go to your Product
   - Click "Add Capability" > "Connect External"
   - Enter:
     - Agent Card URL: `http://localhost:9999/.well-known/agent-card.json`
     - A2A Endpoint URL: `http://localhost:9999/a2a`
   - Click "Connect & Add"

### Testing Runtype's A2A Surface

1. Create an A2A Surface in Runtype Dashboard
2. Add capabilities (flows) to the surface
3. Generate an API key for the surface
4. Test with the CLI:

   ```bash
   npx @runtypelabs/timely-a2a test-runtype \
     --product-id prod_xxx \
     --surface-id surf_xxx \
     --api-key a2a_xxx \
     --environment local
   ```
189
+ ## Programmatic Usage
190
+
191
+ ### Create a Server
192
+
193
+ ```typescript
194
+ import { createA2AServer } from '@runtypelabs/timely-a2a'
195
+
196
+ // Requires AI_GATEWAY_API_KEY env var for gateway models (provider/model format)
197
+ const server = createA2AServer({
198
+ config: {
199
+ name: 'Timely',
200
+ description: 'Time-focused A2A agent with deterministic time tools and LLM-powered time chat',
201
+ port: 9999,
202
+ },
203
+ llmConfig: {
204
+ provider: 'openai',
205
+ model: 'alibaba/qwen3.5-flash', // gateway model — any provider via Vercel AI Gateway
206
+ temperature: 0.7,
207
+ },
208
+ })
209
+
210
+ await server.start()
211
+
212
+ // Graceful shutdown
213
+ process.on('SIGINT', async () => {
214
+ await server.stop()
215
+ })
216
+ ```
217
+
218
+ ### Create a Client
219
+
220
+ ```typescript
221
+ import { A2AClient } from '@runtypelabs/timely-a2a'
222
+
223
+ const client = new A2AClient({
224
+ baseUrl: 'http://localhost:9999',
225
+ })
226
+
227
+ // Get agent card
228
+ const agentCard = await client.getAgentCard()
229
+ console.log(
230
+ 'Skills:',
231
+ agentCard.skills.map((s) => s.name)
232
+ )
233
+
234
+ // Send a task
235
+ const task = await client.sendTask({
236
+ skill: 'chat',
237
+ message: {
238
+ role: 'user',
239
+ parts: [{ type: 'text', text: 'What time is it in Tokyo?' }],
240
+ },
241
+ })
242
+
243
+ console.log('Response:', task.artifacts?.[0]?.parts?.[0]?.text)
244
+ ```

### Test Runtype Surface

```typescript
import { createRuntypeA2AClient } from '@runtypelabs/timely-a2a'

const client = createRuntypeA2AClient({
  productId: 'prod_xxx',
  surfaceId: 'surf_xxx',
  apiKey: 'a2a_xxx',
  environment: 'local', // or 'staging', 'production'
})

// Send streaming task
await client.sendTaskStreaming(
  {
    skill: 'my-capability',
    message: {
      role: 'user',
      parts: [{ type: 'text', text: 'What time is it in London?' }],
    },
  },
  {
    onChunk: (text) => process.stdout.write(text),
    onStatus: (status) => console.log('Status:', status),
  }
)
```

## CLI Reference

Run `npx @runtypelabs/timely-a2a --help` for all commands and options.

### `serve` - Start A2A Server

```
Usage: timely-a2a serve [options]

Options:
  -p, --port <port>      Port to listen on (default: "9999")
  -h, --host <host>      Host to bind to (default: "localhost")
  -n, --name <name>      Agent name (default: "Timely")
  --echo                 Run in echo mode (no LLM, for testing)
  --provider <provider>  LLM provider: openai, anthropic (default: "openai")
  --model <model>        LLM model (default: "alibaba/qwen3.5-flash")
  --temperature <temp>   LLM temperature (default: "0.7")
```

### `test` - Test A2A Endpoint

```
Usage: timely-a2a test [options] <url>

Arguments:
  url                  Base URL of the A2A endpoint

Options:
  -s, --skill <skill>  Skill to test (default: "echo")
  -m, --message <msg>  Message to send (default: "Hello from A2A client!")
  --stream             Use streaming mode
  -k, --api-key <key>  API key for authentication
```

### `test-runtype` - Test Runtype A2A Surface

```
Usage: timely-a2a test-runtype [options]

Options:
  --product-id <id>     Runtype product ID (required)
  --surface-id <id>     Runtype surface ID (required)
  --api-key <key>       A2A API key (required)
  -e, --environment     Environment: production, staging, local (default: "local")
  -s, --skill <skill>   Skill/capability to test
  -m, --message <msg>   Message to send (default: "Hello from A2A client!")
  --stream              Use streaming mode
```

## A2A Protocol

This package implements [A2A Protocol v0.3](https://a2a-protocol.org/v0.3.0/specification/).

### Endpoints

- `GET /.well-known/agent-card.json` - Agent Card discovery (also serves `/.well-known/agent.json` for backward compat)
- `POST /a2a` - JSON-RPC endpoint

### Supported Methods

| Spec Method (preferred) | Legacy Alias | Description |
| --- | --- | --- |
| `message/send` | `tasks/send`, `SendMessage` | Send a message (synchronous) |
| `message/stream` | `tasks/sendSubscribe`, `SendStreamingMessage` | Send a message with SSE streaming |
| `GetTask` | `tasks/get` | Get task status |
| `CancelTask` | `tasks/cancel` | Cancel a running task |
| `ping` | | Health check |
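
For example, a `message/send` request body that pairs the spec method with `metadata.skill` routing can be built like this (a sketch; the envelope shape mirrors the curl example earlier in this README, and the helper name is ours):

```typescript
// Build a JSON-RPC 2.0 envelope for the A2A `message/send` method.
interface JsonRpcRequest {
  jsonrpc: '2.0'
  id: string
  method: string
  params: unknown
}

function buildSendMessage(skill: string, text: string): JsonRpcRequest {
  return {
    jsonrpc: '2.0',
    id: '1',
    method: 'message/send',
    params: {
      message: { role: 'user', parts: [{ type: 'text', text }] },
      metadata: { skill },
    },
  }
}

// POST the envelope to the /a2a endpoint, e.g.:
// await fetch('http://localhost:9999/a2a', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(buildSendMessage('chat', 'What time is it in Tokyo?')),
// })
```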

## Vercel Deployment

Deploy your A2A agent to Vercel for serverless operation.

### Option 1: Deploy the `vercel-app` directory

1. In the Vercel dashboard, set **Root Directory** to `vercel-app`
2. Add environment variables:
   - `AI_GATEWAY_API_KEY` — Vercel AI Gateway key (recommended)
   - `AGENT_NAME` (optional)
   - `ECHO_MODE=true` for testing without LLM
   - Or use direct provider keys as fallback: `OPENAI_API_KEY` / `ANTHROPIC_API_KEY`
3. Deploy

### Option 2: Add to Existing Next.js App

Install the package and use the Vercel handlers. Set `AI_GATEWAY_API_KEY` in your environment for gateway models:

```typescript
// app/api/a2a/route.ts
import { createA2AHandler } from '@runtypelabs/timely-a2a/vercel'

export const POST = createA2AHandler({
  name: 'Timely',
  llmConfig: { provider: 'openai', model: 'alibaba/qwen3.5-flash' },
})
```

```typescript
// app/.well-known/agent-card.json/route.ts
import { createAgentCardHandler } from '@runtypelabs/timely-a2a/vercel'

export const GET = createAgentCardHandler({
  name: 'Timely',
  llmConfig: { provider: 'openai', model: 'alibaba/qwen3.5-flash' },
})
```

### Serverless Limitations

Since Vercel functions are stateless:

- `GetTask` returns "not available" (no task storage)
- `CancelTask` returns "not available" (can't cancel in-flight tasks)
- Use `message/stream` for streaming responses

## Development

```bash
# Build
pnpm build

# Development mode (watch)
pnpm dev

# Type check
pnpm typecheck

# Clean
pnpm clean
```

## License

MIT