simple_a2a 0.1.0 → 0.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (56)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG.md +67 -0
  3. data/README.md +78 -38
  4. data/compare_agent2agent.md +460 -0
  5. data/docs/api/client/index.md +19 -0
  6. data/docs/api/index.md +4 -3
  7. data/docs/api/models/index.md +13 -11
  8. data/docs/api/server/index.md +42 -10
  9. data/docs/api/storage/index.md +0 -1
  10. data/docs/architecture/index.md +17 -15
  11. data/docs/architecture/protocol.md +16 -1
  12. data/docs/assets/images/simple_a2a.jpg +0 -0
  13. data/docs/examples/agent-chaining.md +107 -0
  14. data/docs/examples/auth-headers.md +105 -0
  15. data/docs/examples/cancellation.md +105 -0
  16. data/docs/examples/index.md +123 -52
  17. data/docs/examples/interrupted-states.md +114 -0
  18. data/docs/examples/multipart.md +103 -0
  19. data/docs/examples/push-notifications.md +117 -0
  20. data/docs/examples/resubscribe.md +129 -0
  21. data/docs/examples/sqlite-storage.md +131 -0
  22. data/docs/examples/streaming.md +1 -4
  23. data/docs/guides/push-notifications.md +4 -1
  24. data/docs/guides/streaming.md +34 -5
  25. data/docs/index.md +55 -27
  26. data/examples/04_resubscribe/client.rb +140 -0
  27. data/examples/04_resubscribe/server.rb +75 -0
  28. data/examples/05_cancellation/client.rb +150 -0
  29. data/examples/05_cancellation/server.rb +77 -0
  30. data/examples/06_push_notifications/client.rb +192 -0
  31. data/examples/06_push_notifications/server.rb +123 -0
  32. data/examples/07_agent_chaining/client.rb +120 -0
  33. data/examples/07_agent_chaining/server.rb +150 -0
  34. data/examples/08_interrupted_states/client.rb +148 -0
  35. data/examples/08_interrupted_states/server.rb +142 -0
  36. data/examples/09_multipart/client.rb +117 -0
  37. data/examples/09_multipart/server.rb +97 -0
  38. data/examples/10_auth_headers/client.rb +92 -0
  39. data/examples/10_auth_headers/server.rb +98 -0
  40. data/examples/11_sqlite_storage/Brewfile +1 -0
  41. data/examples/11_sqlite_storage/Gemfile +9 -0
  42. data/examples/11_sqlite_storage/client.rb +114 -0
  43. data/examples/11_sqlite_storage/run +154 -0
  44. data/examples/11_sqlite_storage/server.rb +131 -0
  45. data/examples/README.md +384 -0
  46. data/lib/simple_a2a/client/sse.rb +15 -0
  47. data/lib/simple_a2a/server/app.rb +131 -45
  48. data/lib/simple_a2a/server/base.rb +19 -17
  49. data/lib/simple_a2a/server/broadcast_registry.rb +24 -0
  50. data/lib/simple_a2a/server/multi_agent.rb +1 -1
  51. data/lib/simple_a2a/server/push_config_store.rb +29 -0
  52. data/lib/simple_a2a/server/push_sender.rb +1 -0
  53. data/lib/simple_a2a/server/task_broadcast.rb +46 -0
  54. data/lib/simple_a2a/version.rb +1 -1
  55. metadata +38 -20
  56. data/lib/simple_a2a/server/event_router.rb +0 -50
data/examples/README.md
@@ -0,0 +1,384 @@
+ # simple_a2a — Example Applications
+
+ Eleven runnable demo applications that exercise the gem end-to-end. Each demo
+ pairs a `server.rb` and a `client.rb` so both sides of the A2A protocol are
+ visible in one place.
+
+ ---
+
+ ## How to run
+
+ ### Automated (recommended)
+
+ From the repository root, use the `run` launcher. It starts the server,
+ waits for it to accept connections, runs the client, then shuts the server
+ down cleanly.
+
+ ```bash
+ bundle exec ruby examples/run <demo-name>
+ ```
+
+ Examples:
+
+ ```bash
+ bundle exec ruby examples/run 01_basic_usage
+ bundle exec ruby examples/run 05_cancellation
+ ```
+
+ The trailing slash shown by tab-completion is accepted: `./run 01_basic_usage/`
+ works just as well.
+
+ ### Manual (two terminals)
+
+ Start the server in one terminal, then run the client in a second:
+
+ ```bash
+ # terminal 1
+ bundle exec ruby examples/01_basic_usage/server.rb
+
+ # terminal 2
+ bundle exec ruby examples/01_basic_usage/client.rb
+ ```
+
+ All demos bind the A2A server to `http://localhost:9292` unless noted
+ otherwise.
+
+ ### Demo 03 — LLM Research (special setup)
+
+ Demo 03 requires API keys and additional gems that are not part of the gem's
+ normal dependency set:
+
+ ```bash
+ bundle add ruby_llm async-http-faraday sinatra
+ export ANTHROPIC_API_KEY=your_key_here
+ export OPENAI_API_KEY=your_key_here
+ bundle exec ruby examples/run 03_llm_research
+ ```
+
+ The demo has its own lifecycle script that starts both the A2A server
+ (port 9292) and a Sinatra web client (port 4567). Open
+ `http://localhost:4567` in a browser after both processes start.
+
+ ---
+
+ ## Prerequisites
+
+ All other demos run with just the gem's standard development setup:
+
+ ```bash
+ bundle install
+ bundle exec ruby examples/run 01_basic_usage
+ ```
+
+ No additional gems or environment variables are required.
+
+ ---
+
+ ## Demo index
+
+ | # | Name | A2A features demonstrated |
+ |---|------|--------------------------|
+ | 01 | Basic Usage | Agent card discovery, `tasks/send`, `tasks/get`, `tasks/list`, error handling |
+ | 02 | Streaming | `tasks/sendSubscribe`, SSE, incremental artifact chunks |
+ | 03 | LLM Research | `multi_server`, parallel SSE agents, evaluator pattern, web client |
+ | 04 | Resubscribe | `tasks/resubscribe`, concurrent SSE subscribers, mid-stream join |
+ | 05 | Cancellation | `tasks/cancel`, concurrent tasks, task lifecycle |
+ | 06 | Push Notifications | `tasks/pushNotification/set/get/delete/list`, webhook delivery |
+ | 07 | Agent Chaining | Agent-to-agent calls via `A2A.client` inside an executor |
+ | 08 | Interrupted States | `input_required`, `auth_required`, multi-turn conversations |
+ | 09 | Multipart | `Part.text`, `Part.json`, `Part.binary`, `Part.from_url` |
+ | 10 | Auth Headers | `A2A.client(headers:)`, Bearer token middleware |
+ | 11 | SQLite Storage | `Storage::Base` injection, SQLite3 WAL persistence, cross-restart task survival |
+
+ ---
+
+ ## Demos
+
+ ### 01 — Basic Usage
+
+ ```bash
+ bundle exec ruby examples/run 01_basic_usage
+ ```
+
+ The foundational request/response pattern: a client sends messages to an
+ agent and receives completed artifacts in the HTTP response body.
+
+ **Protocol specification coverage:**
+
+ | Spec section | What the demo shows |
+ |---|---|
+ | Agent Card discovery | Client calls `GET /agentCard` and reads the agent's name, version, description, and skills |
+ | `tasks/send` | Synchronous task submission — executor runs to completion before the HTTP response returns |
+ | `tasks/get` | Client retrieves a previously completed task by ID |
+ | `tasks/list` | Client fetches all tasks on the server |
+ | Error responses | Client requests a non-existent task ID; server returns `TASK_NOT_FOUND` JSON-RPC error |
+ | `AgentCard` model | `name`, `version`, `description`, `skills`, `interfaces`, `capabilities` |
+ | `Message` model | `Message.user(text)` builds a user-role message with a text part |
+ | `Artifact` model | Completed task carries a named artifact with a text part |
+
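On the wire, `tasks/send` is plain JSON-RPC 2.0 over HTTP POST. A minimal sketch of the request envelope, assuming the params shape that `Message.user(text)` serializes to (the helper name and the exact part field names are illustrative, not the gem's API):

```ruby
require "json"
require "securerandom"

# Hypothetical helper (not part of the gem): builds the JSON-RPC 2.0
# envelope for tasks/send. The "parts" entry sketches a single text part;
# the real gem's serialized field names may differ.
def send_task_body(text)
  JSON.generate({
    "jsonrpc" => "2.0",
    "id"      => SecureRandom.uuid,
    "method"  => "tasks/send",
    "params"  => {
      "message" => {
        "role"  => "user",
        "parts" => [{ "kind" => "text", "text" => text }]
      }
    }
  })
end

request = JSON.parse(send_task_body("Summarize this document"))
```

POSTing that body to `http://localhost:9292/` is all the demo client fundamentally does; everything else is response handling.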
+ ---
+
+ ### 02 — Streaming
+
+ ```bash
+ bundle exec ruby examples/run 02_streaming
+ ```
+
+ The server streams a long article word-by-word at 600 words per minute.
+ The client receives the text incrementally as Server-Sent Events rather
+ than waiting for the full response.
+
+ **Protocol specification coverage:**
+
+ | Spec section | What the demo shows |
+ |---|---|
+ | `tasks/sendSubscribe` | Client opens a persistent SSE connection; server streams events for the duration of the task |
+ | `TaskStatusUpdateEvent` | `working` event emitted when execution starts; `completed` final event signals the stream is done |
+ | `TaskArtifactUpdateEvent` | Each word arrives as a separate artifact chunk with `append: true` and `lastChunk: true` on the final word |
+ | `AgentCapabilities.streaming` | `true` in the agent card, advertised to clients before subscription |
+ | SSE transport | `Content-Type: text/event-stream`, `data:` frames, `\n\n` event boundaries |
+
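The SSE transport row above is simple enough to parse by hand: events are separated by a blank line (`\n\n`) and each payload line starts with `data: `. A self-contained sketch of that client-side parsing (this is an illustration of the frame format, not the gem's actual parser):

```ruby
require "json"

# Minimal SSE frame parser: split the raw stream on the "\n\n" event
# boundary, strip the "data: " field prefix, and JSON-parse each payload.
def each_sse_event(raw)
  raw.split("\n\n").each do |frame|
    data = frame.lines
                .select { |l| l.start_with?("data: ") }
                .map { |l| l.delete_prefix("data: ").chomp }
                .join("\n")
    next if data.empty?
    yield JSON.parse(data)
  end
end

stream = "data: {\"jsonrpc\":\"2.0\",\"result\":{\"state\":\"working\"}}\n\n" \
         "data: {\"jsonrpc\":\"2.0\",\"result\":{\"state\":\"completed\"}}\n\n"

states = []
each_sse_event(stream) { |ev| states << ev["result"]["state"] }
# states == ["working", "completed"]
```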
+ ---
+
+ ### 03 — Multi-Agent LLM Research
+
+ ```bash
+ bundle exec ruby examples/run 03_llm_research
+ ```
+
+ Three agents on one server: an Anthropic researcher (Claude), an OpenAI
+ researcher (GPT), and an evaluator. The CLI client queries both researchers
+ in parallel, then sends their combined output to the evaluator. A Sinatra web
+ client provides a browser UI with side-by-side streaming panels.
+
+ **Protocol specification coverage:**
+
+ | Spec section | What the demo shows |
+ |---|---|
+ | `A2A.multi_server` | Three agents hosted on one port under `/anthropic`, `/openai`, and `/evaluator` path prefixes |
+ | `tasks/sendSubscribe` | Both researcher agents stream token-by-token responses via SSE |
+ | Parallel agent calls | CLI client runs two SSE subscriptions concurrently using Ruby threads |
+ | Evaluator pattern | One agent's output is used as the input message to a downstream agent |
+ | `AgentCard.interfaces` | Each agent card declares its own URL path so clients can target individual agents |
+ | `AgentCapabilities.streaming` | `true` on all three agents |
+
+ **Requires:** `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, plus `ruby_llm`,
+ `async-http-faraday`, and `sinatra`.
+
+ ---
+
+ ### 04 — Resubscribe
+
+ ```bash
+ bundle exec ruby examples/run 04_resubscribe
+ ```
+
+ A multi-step analysis task runs on the server. A first subscriber attaches
+ from the beginning via `tasks/sendSubscribe`. A second subscriber attaches
+ mid-stream via `tasks/resubscribe` and receives the current task snapshot
+ followed by all remaining events. Both streams close cleanly when the task
+ completes.
+
+ **Protocol specification coverage:**
+
+ | Spec section | What the demo shows |
+ |---|---|
+ | `tasks/resubscribe` | Second client joins an in-flight SSE stream using the task ID |
+ | Task snapshot on join | First event delivered to a resubscriber is the current `Task` object, not a status update |
+ | Concurrent subscribers | Two independent SSE connections receive the same broadcast events |
+ | `BroadcastRegistry` | Server maps task IDs to live broadcasts so resubscribers can locate the stream |
+ | `TaskStatusUpdateEvent` | `working` and `completed` (final) events received by both subscribers |
+ | `TaskArtifactUpdateEvent` | Each analysis step delivered as a discrete artifact |
+
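The fan-out behaviour behind resubscribe can be sketched with plain queues: every subscriber gets its own queue, and a late joiner is handed the current snapshot before any later events. Class and method names below are illustrative stand-ins for the gem's `TaskBroadcast`, not its actual API:

```ruby
# Each subscriber owns a Queue; publish fans an event out to all of them,
# and close pushes a DONE sentinel so consumers know the stream ended.
class Broadcast
  DONE = Object.new

  def initialize
    @queues = []
  end

  def subscribe
    q = Queue.new
    @queues << q
    q
  end

  def publish(event)
    @queues.each { |q| q << event }
  end

  def close
    @queues.each { |q| q << DONE }
  end
end

drain = lambda do |q|
  events = []
  loop do
    e = q.pop
    break if e.equal?(Broadcast::DONE)
    events << e
  end
  events
end

b = Broadcast.new
early = b.subscribe
b.publish("step 1")

# A resubscriber joins mid-stream: snapshot first, then remaining events.
late = b.subscribe
snapshot = "task snapshot after step 1"
late << snapshot
b.publish("step 2")
b.close

early_events = drain.call(early) # ["step 1", "step 2"]
late_events  = drain.call(late)  # ["task snapshot after step 1", "step 2"]
```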
+ ---
+
+ ### 05 — Cancellation
+
+ ```bash
+ bundle exec ruby examples/run 05_cancellation
+ ```
+
+ Three tasks run concurrently via `tasks/sendSubscribe`. After three seconds,
+ one task is cancelled via `tasks/cancel` while the other two complete
+ normally. The client verifies the final states of all three tasks.
+
+ **Protocol specification coverage:**
+
+ | Spec section | What the demo shows |
+ |---|---|
+ | `tasks/cancel` | Client sends a cancel request by task ID while the task is mid-execution |
+ | `canceled` terminal state | Task B transitions to `canceled`; its SSE stream receives a final status event and closes |
+ | Concurrent task isolation | Tasks A and C are unaffected by the cancellation of task B |
+ | `AgentExecutor#cancel` | Default implementation calls `task.cancel!` and emits a final status event |
+ | `TaskState` lifecycle | `submitted → working → canceled` vs `submitted → working → completed` |
+ | Executor cooperative cancellation | Executor checks `ctx.task.terminal?` between steps and exits early when cancelled |
+
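The cooperative-cancellation row is the key idea: cancellation is not preemption, the executor simply checks a terminal flag between steps. A self-contained sketch with a stand-in task (not the gem's `Models::Task`):

```ruby
# Stand-in task: terminal? mirrors the ctx.task.terminal? check the
# executor performs between steps.
Task = Struct.new(:state) do
  def terminal?
    %w[completed canceled failed].include?(state)
  end

  def cancel!
    self.state = "canceled"
  end
end

def run_steps(task, steps)
  done = []
  steps.each do |step|
    break if task.terminal?    # exit early once cancelled
    done << step
    yield task if block_given? # caller may cancel mid-run, as tasks/cancel does
  end
  done
end

task = Task.new("working")
turn = 0
done = run_steps(task, %w[a b c d]) do |t|
  turn += 1
  t.cancel! if turn == 2 # simulate tasks/cancel arriving after step "b"
end
# done == ["a", "b"], task.state == "canceled": steps "c" and "d" never ran
```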
+ ---
+
+ ### 06 — Push Notifications
+
+ ```bash
+ bundle exec ruby examples/run 06_push_notifications
+ ```
+
+ The client starts a local webhook receiver on port 9293, submits a task,
+ then registers the webhook URL via the push notification RPC methods. The
+ server delivers an HTTP POST to the webhook after each step — the client
+ holds no open SSE connection and receives all updates out-of-band.
+
+ **Protocol specification coverage:**
+
+ | Spec section | What the demo shows |
+ |---|---|
+ | `tasks/pushNotification/set` | Client registers a `PushNotificationConfig` containing a webhook URL |
+ | `tasks/pushNotification/get` | Client confirms the config is stored by retrieving it by task ID |
+ | `tasks/pushNotification/list` | Client lists all registered push configs on the server |
+ | `tasks/pushNotification/delete` | Client removes the config; list confirms zero configs remain |
+ | `PushNotificationConfig` model | `webhookUrl` and optional `authenticationInfo` fields |
+ | `PushSender` | Server delivers `TaskStatusUpdateEvent` payloads as HTTP POSTs to the webhook URL |
+ | `AgentCapabilities.push_notifications` | `true` in the agent card; server rejects push RPC calls if `false` |
+ | Out-of-band delivery | Client receives progress updates without maintaining any persistent connection |
+
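The registration call is another JSON-RPC envelope; per the server's handler, params need an `"id"` and a `"pushNotificationConfig"` with a `"webhookUrl"`. A sketch of building that request (the helper name and task id are illustrative):

```ruby
require "json"
require "securerandom"

# Hypothetical helper: builds the tasks/pushNotification/set envelope the
# client POSTs after starting its webhook receiver.
def push_set_body(task_id, webhook_url)
  JSON.generate({
    "jsonrpc" => "2.0",
    "id"      => SecureRandom.uuid,
    "method"  => "tasks/pushNotification/set",
    "params"  => {
      "id" => task_id,
      "pushNotificationConfig" => { "webhookUrl" => webhook_url }
    }
  })
end

request = JSON.parse(push_set_body("task-123", "http://localhost:9293/webhook"))
```

From here the client's only remaining job is to keep its webhook endpoint listening; the server pushes everything else.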
242
+ ---
243
+
244
+ ### 07 — Agent Chaining
245
+
246
+ ```bash
247
+ bundle exec ruby examples/run 07_agent_chaining
248
+ ```
249
+
250
+ Three agents share one port. The `PipelineAgent` executor calls the
251
+ `ReverseAgent` and `ShoutAgent` in sequence using `A2A.client` — the same
252
+ client interface an external caller would use. The top-level client speaks
253
+ only to the pipeline; the internal calls are invisible to it.
254
+
255
+ **Protocol specification coverage:**
256
+
257
+ | Spec section | What the demo shows |
258
+ |---|---|
259
+ | Agent-to-agent delegation | An executor uses `A2A.client` to call peer agents during task execution |
260
+ | `A2A.multi_server` | Three agents hosted at `/reverse`, `/shout`, `/pipeline` on one port |
261
+ | Agent Card discovery | Client discovers all three cards; pipeline card describes its composed capability |
262
+ | `tasks/send` | Sub-agents are called synchronously within the pipeline executor's fiber |
263
+ | Protocol transparency | Internal A2A calls use the same JSON-RPC wire format as external calls |
264
+ | Composability | Any agent can act as both a server (to its caller) and a client (to its dependencies) |
265
+
266
+ ---
267
+
268
+ ### 08 — Interrupted States
269
+
270
+ ```bash
271
+ bundle exec ruby examples/run 08_interrupted_states
272
+ ```
273
+
274
+ Two agents demonstrate the two interrupted task states. An `OrderAgent` uses
275
+ `input_required` to ask what the user wants before completing the order. A
276
+ `VaultAgent` uses `auth_required` to demand a token before revealing
277
+ protected data, staying blocked on a wrong token and unlocking on the correct
278
+ one. Each conversational turn is a separate `tasks/send` call threaded by
279
+ `message.context_id`.
280
+
281
+ **Protocol specification coverage:**
282
+
283
+ | Spec section | What the demo shows |
284
+ |---|---|
285
+ | `input_required` state | Task transitions to an interrupted state; `status.message` carries the question |
286
+ | `auth_required` state | Task blocks until the client provides valid credentials |
287
+ | Multi-turn conversations | Each follow-up is a new `tasks/send` carrying the same `context_id` |
288
+ | `Message.context_id` | Executor uses the message's `contextId` field to thread state across separate task calls |
289
+ | `TaskState` interrupted vs terminal | `input_required` and `auth_required` are non-terminal; the task can still complete |
290
+ | Rejection on bad auth | VaultAgent stays in `auth_required` on a wrong token, demonstrating repeated challenge |
291
+
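The `context_id` threading above boils down to: each turn is an independent `tasks/send`, but the executor keys its conversational state by the message's `contextId`. A sketch with a plain-hash store and messages standing in for the gem's models:

```ruby
# Illustrative conversation store keyed by contextId. Messages here are
# plain hashes; the real executor reads the same field off the Message model.
CONVERSATIONS = {}

def handle_turn(message)
  state = CONVERSATIONS[message["contextId"]] ||= { step: :ask_item }
  case state[:step]
  when :ask_item
    # First turn: interrupt with input_required and ask the question.
    state[:step] = :confirm
    { "state" => "input_required", "question" => "What would you like to order?" }
  when :confirm
    # Second turn, same contextId: the follow-up message completes the order.
    { "state" => "completed", "order" => message["text"] }
  end
end

first  = handle_turn("contextId" => "ctx-42", "text" => "I want to order")
second = handle_turn("contextId" => "ctx-42", "text" => "two coffees")
# first["state"] == "input_required"; second["order"] == "two coffees"
```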
292
+ ---
293
+
294
+ ### 09 — Multipart Artifacts
295
+
296
+ ```bash
297
+ bundle exec ruby examples/run 09_multipart
298
+ ```
299
+
300
+ A single artifact carries four parts of different types: a plain text summary,
301
+ a structured JSON hash, a base64-encoded binary CSV, and a URL reference. The
302
+ client inspects each part using the predicate methods and processes each type
303
+ appropriately.
304
+
305
+ **Protocol specification coverage:**
306
+
307
+ | Spec section | What the demo shows |
308
+ |---|---|
309
+ | `Part.text` | Plain prose with `media_type: "text/plain"` |
310
+ | `Part.json` | Structured data as a Ruby Hash; serialized as a JSON object in the artifact |
311
+ | `Part.binary` | Raw bytes base64-encoded for transport; `decoded_bytes` restores the original |
312
+ | `Part.from_url` | URL reference with `media_type` and `filename`; no content is inlined |
313
+ | `Part` predicates | `text?`, `json?`, `raw?`, `url?` allow type-safe dispatch on the receiving end |
314
+ | Multi-part `Artifact` | One artifact contains all four parts; clients can select the representation they need |
315
+ | `Artifact.name` | Named artifact (`"report"`) for client-side identification |
316
+
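The predicate-based dispatch can be sketched with a stand-in struct; the predicate names (`text?`, `json?`, `raw?`, `url?`) and `decoded_bytes` mirror the table above, but this `Part` is an illustration, not the gem's model:

```ruby
# Stand-in Part: kind tags the payload, decoded_bytes round-trips the
# base64 encoding used for binary parts ("m0" = strict base64).
Part = Struct.new(:kind, :payload) do
  def text? ; kind == :text ; end
  def json? ; kind == :json ; end
  def raw?  ; kind == :raw  ; end
  def url?  ; kind == :url  ; end

  def decoded_bytes
    payload.unpack1("m0")
  end
end

parts = [
  Part.new(:text, "Quarterly summary"),
  Part.new(:json, { "total" => 42 }),
  Part.new(:raw,  ["id,amount\n1,9.99\n"].pack("m0")),
  Part.new(:url,  "https://example.com/report.csv")
]

# Type-safe dispatch on the receiving end, as the demo client does.
rendered = parts.map do |p|
  if    p.text? then p.payload
  elsif p.json? then "json with keys #{p.payload.keys.join(",")}"
  elsif p.raw?  then p.decoded_bytes
  elsif p.url?  then "fetch #{p.payload}"
  end
end
# rendered[2] == "id,amount\n1,9.99\n" (the binary CSV, decoded)
```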
+ ---
+
+ ### 10 — Authentication Headers
+
+ ```bash
+ bundle exec ruby examples/run 10_auth_headers
+ ```
+
+ The server wraps its Rack app in a Bearer token middleware. Agent card
+ discovery (`GET /agentCard`) is left public; all RPC calls (`POST /`) require
+ `Authorization: Bearer <token>`. Two clients are created — one without
+ headers and one with — demonstrating that the `headers:` option on
+ `A2A.client` is the injection point for any custom auth scheme.
+
+ **Protocol specification coverage:**
+
+ | Spec section | What the demo shows |
+ |---|---|
+ | `A2A.client(headers:)` | `headers: { "Authorization" => "Bearer token" }` appended to every request |
+ | `AgentCard` public discovery | `GET /agentCard` succeeds without authentication — agents are discoverable |
+ | Bearer token authentication | `Authorization: Bearer <token>` header checked on all POST (RPC) requests |
+ | Rack middleware composition | `BearerAuthMiddleware.new(rack_app, token:)` wraps the standard app without library changes |
+ | JSON-RPC error on rejection | Middleware returns a well-formed JSON-RPC error body so the client can rescue `A2A::Error` |
+ | Header flexibility | The same `headers:` option supports API keys, custom schemes, and any HTTP header |
+
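The middleware pattern is small enough to sketch whole. A Rack app is any object responding to `call(env)`; the inner app below is a stub, and the 401 status and error-body shape are illustrative (the demo's actual middleware may differ in detail):

```ruby
require "json"

# GET (discovery) passes through unauthenticated; POST (RPC) must carry
# the expected Authorization header, otherwise a JSON-RPC-shaped error
# body comes back so a client can surface it as a protocol error.
class BearerAuthMiddleware
  def initialize(app, token:)
    @app = app
    @token = token
  end

  def call(env)
    return @app.call(env) if env["REQUEST_METHOD"] == "GET" # /agentCard stays public
    return @app.call(env) if env["HTTP_AUTHORIZATION"] == "Bearer #{@token}"

    body = JSON.generate(
      "jsonrpc" => "2.0", "id" => nil,
      "error" => { "code" => -32_000, "message" => "unauthorized" }
    )
    [401, { "content-type" => "application/json" }, [body]]
  end
end

inner = ->(env) { [200, { "content-type" => "application/json" }, ["ok"]] }
app = BearerAuthMiddleware.new(inner, token: "s3cret")

public_status,  = app.call("REQUEST_METHOD" => "GET")
denied_status,  = app.call("REQUEST_METHOD" => "POST")
allowed_status, = app.call("REQUEST_METHOD" => "POST",
                           "HTTP_AUTHORIZATION" => "Bearer s3cret")
# 200, 401, 200
```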
+ ---
+
+ ### 11 — SQLite3 Persistent Storage
+
+ ```bash
+ bundle exec ruby examples/run 11_sqlite_storage
+ ```
+
+ The server injects a `SqliteStorage` instance — a custom `Storage::Base` subclass
+ backed by SQLite3 — instead of the default in-memory store. The demo runs in two
+ phases managed by a custom `run` script to prove that tasks survive a full server
+ restart:
+
+ 1. **Populate** — server starts, three tasks are sent and stored in `tasks.db`, task
+ IDs are written to a temp JSON file, server stops.
+ 2. **Verify** — server restarts with the same `tasks.db`, client reads the saved IDs
+ and fetches each task from the freshly booted server, confirming all three tasks
+ are present with state `completed`.
+
+ The `SqliteStorage` implementation uses WAL mode and a mutex for safe concurrent
+ access. Dependencies are declared in conventional tool files kept alongside the
+ demo rather than inlined into application code:
+
+ - **`Brewfile`** — declares the `sqlite3` binary dependency; the `run` script
+ calls `brew bundle install` on macOS if the binary is not already present.
+ Other platforms must provide the binary themselves.
+ - **`Gemfile`** — uses `gemspec path: "../../"` to pull in all of the project's
+ own dependencies, then adds `sqlite3`. The `run` script calls `bundle install`
+ with this Gemfile before spawning either server phase, so `server.rb` can
+ simply `require "sqlite3"` with no inline gem-install logic.
+
+ **Protocol specification coverage:**
+
+ | Spec section | What the demo shows |
+ |---|---|
+ | `Storage::Base` injection | `A2A.server(storage:)` accepts any `Storage::Base` subclass — no library changes needed |
+ | `SqliteStorage#save` | Tasks serialized to JSON and upserted into SQLite via `ON CONFLICT DO UPDATE` |
+ | `SqliteStorage#find!` | Task fetched by ID across process boundaries; raises `TaskNotFoundError` if missing |
+ | `SqliteStorage#list` | All stored tasks returned in insertion order |
+ | `SqliteStorage#size` | Task count reported at server startup to confirm DB contents |
+ | Cross-restart persistence | Tasks created in server process 1 are visible to server process 2 via the shared DB file |
+ | WAL mode concurrency | `PRAGMA journal_mode=WAL` allows concurrent readers during writes |
+ | `Brewfile` / `Gemfile` pattern | Per-demo dependency files keep application code free of setup logic |
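
The storage surface the demo swaps in (`save`, `find!`, `list`, `size`) can be sketched without the `sqlite3` gem by backing it with a JSON file; the restart-survival idea is the same. Everything below except the four method names is illustrative, not the gem's `Storage::Base`:

```ruby
require "json"
require "tmpdir"

# File-backed stand-in for the injected storage: tasks serialize to JSON
# and save is an upsert by id, the file analogue of ON CONFLICT DO UPDATE.
class FileStorage
  def initialize(path)
    @path = path
  end

  def save(task)
    tasks = all
    tasks[task["id"]] = task
    File.write(@path, JSON.generate(tasks))
  end

  def find!(id)
    all.fetch(id) { raise KeyError, "task #{id} not found" }
  end

  def list
    all.values
  end

  def size
    all.size
  end

  private

  def all
    File.exist?(@path) ? JSON.parse(File.read(@path)) : {}
  end
end

path = File.join(Dir.mktmpdir, "tasks.json")
FileStorage.new(path).save({ "id" => "t1", "state" => "completed" })

# A fresh instance stands in for a restarted server process: the task is
# still there because the state lives in the file, not in memory.
restarted = FileStorage.new(path)
restored = restarted.find!("t1")
```

The real demo gets the same cross-restart property from `tasks.db`, plus WAL-mode concurrency that a flat file cannot offer.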
data/lib/simple_a2a/client/sse.rb
@@ -23,6 +23,21 @@ module A2A
  end
  end
 
+ def resubscribe(task_id:, &block)
+ body = JSON.generate({
+ "jsonrpc" => "2.0",
+ "id" => SecureRandom.uuid,
+ "method" => "tasks/resubscribe",
+ "params" => { "id" => task_id }
+ })
+
+ run_async do |internet|
+ headers = rpc_headers.merge("accept" => "text/event-stream")
+ response = internet.post(@url, headers: headers, body: body)
+ parse_sse_stream(response, &block)
+ end
+ end
+
  private
 
  def parse_sse_stream(response, &block)
data/lib/simple_a2a/server/app.rb
@@ -7,26 +7,27 @@ module A2A
  module Server
  class App < Roda
  SUPPORTED_VERSIONS = %w[1.0 0.3].freeze
+ SSE_METHODS = %w[tasks/sendSubscribe tasks/resubscribe].freeze
 
  plugin :json
  plugin :json_parser
  plugin :halt
  plugin :all_verbs
 
- def self.configure(agent_card:, storage:, executor:, event_router:, push_sender: nil)
- @agent_card = agent_card
- @storage = storage
- @executor = executor
- @event_router = event_router
- @push_sender = push_sender
+ def self.configure(agent_card:, storage:, executor:, broadcast_registry:, push_sender: nil, push_config_store: nil)
+ @agent_card = agent_card
+ @storage = storage
+ @executor = executor
+ @broadcast_registry = broadcast_registry
+ @push_sender = push_sender
+ @push_config_store = push_config_store || PushConfigStore.new
  end
 
  class << self
- attr_reader :agent_card, :storage, :executor, :event_router, :push_sender
+ attr_reader :agent_card, :storage, :executor, :broadcast_registry, :push_sender, :push_config_store
  end
 
  route do |r|
- # A2A version negotiation
  a2a_version = request.env["HTTP_A2A_VERSION"]
  if a2a_version && !SUPPORTED_VERSIONS.include?(a2a_version)
  err = JsonRpc::Response.error(
@@ -53,15 +54,20 @@ module A2A
  r.halt([200, { "Content-Type" => "application/json" }, [err]])
  end
 
- # SSE has different headers and body — intercept before normal dispatch.
- # The body is an Enumerator so Falcon streams each event to the client
- # as the executor yields it, rather than buffering the whole response.
- if rpc_req.method == "tasks/sendSubscribe"
- r.halt([200, {
- "Content-Type" => "text/event-stream",
- "Cache-Control" => "no-cache",
- "X-Accel-Buffering" => "no"
- }, handle_send_subscribe(rpc_req)])
+ if SSE_METHODS.include?(rpc_req.method)
+ begin
+ sse_body = rpc_req.method == "tasks/sendSubscribe" ?
+ handle_send_subscribe(rpc_req) :
+ handle_resubscribe(rpc_req)
+ r.halt([200, {
+ "Content-Type" => "text/event-stream",
+ "Cache-Control" => "no-cache",
+ "X-Accel-Buffering" => "no"
+ }, sse_body])
+ rescue A2A::Error, JsonRpc::InvalidParamsError => e
+ err = JsonRpc::Response.from_error(id: rpc_req.id, error: e)
+ r.halt([200, { "Content-Type" => "application/json" }, [err]])
+ end
  end
 
  result = dispatch(rpc_req)
@@ -110,7 +116,7 @@ module A2A
  task: task,
  message: message,
  storage: self.class.storage,
- event_router: self.class.event_router
+ event_router: TaskBroadcast.new
  )
  self.class.executor.call(ctx)
  self.class.storage.save(task)
@@ -140,11 +146,14 @@ module A2A
  task = self.class.storage.find!(task_id)
  raise TaskNotCancelableError, "Task #{task_id} is already terminal" if task.terminal?
 
+ # Use the live broadcast if streaming; otherwise events are discarded (no subscribers).
+ broadcast = self.class.broadcast_registry.find(task_id) || TaskBroadcast.new
+
  ctx = Server::Context.new(
  task: task,
  message: nil,
  storage: self.class.storage,
- event_router: self.class.event_router
+ event_router: broadcast
  )
  self.class.executor.cancel(ctx)
  self.class.storage.save(task)
@@ -156,47 +165,89 @@ module A2A
  params = rpc_req.params || {}
  msg_hash = params["message"]
  raise JsonRpc::InvalidParamsError, "message is required" unless msg_hash.is_a?(Hash)
- message = Models::Message.from_hash(msg_hash)
+ message = Models::Message.from_hash(msg_hash)
 
  task = Models::Task.new(
  status: Models::TaskStatus.new(state: Models::Types::TaskState::SUBMITTED)
  )
  self.class.storage.save(task)
 
+ broadcast = TaskBroadcast.new
+ queue = broadcast.subscribe
+ self.class.broadcast_registry.register(task.id, broadcast)
+
  storage = self.class.storage
  executor = self.class.executor
-
- # Protocol::HTTP::Body::Writable is a Readable subclass.
- # Protocol::Rack detects this and does NOT wrap it in an Enumerable
- # fiber — Falcon reads it directly via queue.dequeue in its async task.
- # The executor runs in a sibling async task and pushes events via
- # output.write (queue.enqueue), which is fully async-safe.
- # This avoids the FiberError that Enumerator::Yielder#<< raises when
- # called from inside an LLM streaming callback (an async task context
- # where the Enumerator consumer fiber is not currently waiting).
- output = Protocol::HTTP::Body::Writable.new
+ registry = self.class.broadcast_registry
+ output = Protocol::HTTP::Body::Writable.new
 
  Async::Task.current.async do
- streaming_router = Object.new.tap do |ro|
- ro.define_singleton_method(:publish) { |_, ev| output.write("data: #{JSON.generate({ 'jsonrpc' => '2.0', 'result' => ev.to_h })}\n\n") }
- ro.define_singleton_method(:open) { |*| }
- ro.define_singleton_method(:close) { |*| }
- ro.define_singleton_method(:channel?) { |*| false }
- ro.define_singleton_method(:subscribe) { |*| }
- end
-
  ctx = Server::Context.new(
  task: task,
  message: message,
  storage: storage,
- event_router: streaming_router
+ event_router: broadcast
  )
-
  executor.call(ctx)
  storage.save(task)
  rescue => e
- output.write("data: #{JSON.generate({ 'jsonrpc' => '2.0', 'error' => { 'code' => -32000, 'message' => e.message } })}\n\n") rescue nil
+ broadcast.error(e.message)
+ ensure
+ broadcast.close
+ registry.unregister(task.id)
+ end
+
+ Async::Task.current.async do
+ loop do
+ event = queue.async_pop
+ break if event.equal?(TaskBroadcast::DONE)
+ if event.is_a?(TaskBroadcast::BroadcastError)
+ output.write(sse_error_frame(event.message)) rescue nil
+ break
+ end
+ output.write(sse_frame(event))
+ end
+ rescue => e
+ output.write(sse_error_frame(e.message)) rescue nil
+ ensure
+ broadcast.unsubscribe(queue)
+ output.close_write rescue nil
+ end
+
+ output
+ end
+
+ def handle_resubscribe(rpc_req)
+ params = rpc_req.params || {}
+ task_id = params["id"] || params["taskId"]
+ raise JsonRpc::InvalidParamsError, "id is required" unless task_id
+
+ task = self.class.storage.find!(task_id)
+ raise UnsupportedOperationError, "Task #{task_id} is in a terminal state" if task.terminal?
+
+ broadcast = self.class.broadcast_registry.find(task_id)
+ raise UnsupportedOperationError, "Task #{task_id} is no longer streaming" unless broadcast
+
+ queue = broadcast.subscribe
+ output = Protocol::HTTP::Body::Writable.new
+
+ # Spec requires the current Task snapshot as the first SSE event.
+ output.write(sse_frame(task))
+
+ Async::Task.current.async do
+ loop do
+ event = queue.async_pop
+ break if event.equal?(TaskBroadcast::DONE)
+ if event.is_a?(TaskBroadcast::BroadcastError)
+ output.write(sse_error_frame(event.message)) rescue nil
+ break
+ end
+ output.write(sse_frame(event))
+ end
+ rescue => e
+ output.write(sse_error_frame(e.message)) rescue nil
  ensure
+ broadcast.unsubscribe(queue)
  output.close_write rescue nil
  end
 
@@ -205,22 +256,57 @@ module A2A
 
  def handle_push_set(rpc_req)
  raise PushNotificationNotSupportedError unless self.class.agent_card&.capabilities&.push_notifications
- JsonRpc::Response.success(id: rpc_req.id, result: true)
+
+ params = rpc_req.params || {}
+ task_id = params["id"] or raise JsonRpc::InvalidParamsError, "id is required"
+ cfg_h = params["pushNotificationConfig"]
+ raise JsonRpc::InvalidParamsError, "pushNotificationConfig is required" unless cfg_h.is_a?(Hash)
+
+ self.class.storage.find!(task_id)
+
+ config = Models::PushNotificationConfig.from_hash(cfg_h.merge("taskId" => task_id))
+ raise JsonRpc::InvalidParamsError, "pushNotificationConfig.webhookUrl is required" unless config.valid?
+
+ self.class.push_config_store.set(task_id, config)
+ result = { "id" => task_id, "pushNotificationConfig" => config.to_h }
+ JsonRpc::Response.success(id: rpc_req.id, result: result)
  end
 
  def handle_push_get(rpc_req)
  raise PushNotificationNotSupportedError unless self.class.agent_card&.capabilities&.push_notifications
- JsonRpc::Response.success(id: rpc_req.id, result: nil)
+
+ params = rpc_req.params || {}
+ task_id = params["id"] or raise JsonRpc::InvalidParamsError, "id is required"
+
+ config = self.class.push_config_store.get(task_id)
+ result = config ? { "id" => task_id, "pushNotificationConfig" => config.to_h } : nil
+ JsonRpc::Response.success(id: rpc_req.id, result: result)
  end
 
  def handle_push_delete(rpc_req)
  raise PushNotificationNotSupportedError unless self.class.agent_card&.capabilities&.push_notifications
- JsonRpc::Response.success(id: rpc_req.id, result: true)
+
+ params = rpc_req.params || {}
+ task_id = params["id"] or raise JsonRpc::InvalidParamsError, "id is required"
+
+ self.class.push_config_store.delete(task_id)
+ JsonRpc::Response.success(id: rpc_req.id, result: nil)
  end
 
  def handle_push_list(rpc_req)
  raise PushNotificationNotSupportedError unless self.class.agent_card&.capabilities&.push_notifications
- JsonRpc::Response.success(id: rpc_req.id, result: [])
+
+ configs = self.class.push_config_store.list
+ result = configs.map { |tid, cfg| { "id" => tid, "pushNotificationConfig" => cfg.to_h } }
+ JsonRpc::Response.success(id: rpc_req.id, result: result)
+ end
+
+ def sse_frame(result)
+ "data: #{JSON.generate({ 'jsonrpc' => '2.0', 'result' => result.to_h })}\n\n"
+ end
+
+ def sse_error_frame(message)
+ "data: #{JSON.generate({ 'jsonrpc' => '2.0', 'error' => { 'code' => -32_000, 'message' => message } })}\n\n"
  end
  end
  end