simple_a2a 0.1.0 → 0.3.0
- checksums.yaml +4 -4
- data/CHANGELOG.md +67 -0
- data/README.md +78 -38
- data/compare_agent2agent.md +460 -0
- data/docs/api/client/index.md +19 -0
- data/docs/api/index.md +4 -3
- data/docs/api/models/index.md +13 -11
- data/docs/api/server/index.md +42 -10
- data/docs/api/storage/index.md +0 -1
- data/docs/architecture/index.md +17 -15
- data/docs/architecture/protocol.md +16 -1
- data/docs/assets/images/simple_a2a.jpg +0 -0
- data/docs/examples/agent-chaining.md +107 -0
- data/docs/examples/auth-headers.md +105 -0
- data/docs/examples/cancellation.md +105 -0
- data/docs/examples/index.md +123 -52
- data/docs/examples/interrupted-states.md +114 -0
- data/docs/examples/multipart.md +103 -0
- data/docs/examples/push-notifications.md +117 -0
- data/docs/examples/resubscribe.md +129 -0
- data/docs/examples/sqlite-storage.md +131 -0
- data/docs/examples/streaming.md +1 -4
- data/docs/guides/push-notifications.md +4 -1
- data/docs/guides/streaming.md +34 -5
- data/docs/index.md +55 -27
- data/examples/04_resubscribe/client.rb +140 -0
- data/examples/04_resubscribe/server.rb +75 -0
- data/examples/05_cancellation/client.rb +150 -0
- data/examples/05_cancellation/server.rb +77 -0
- data/examples/06_push_notifications/client.rb +192 -0
- data/examples/06_push_notifications/server.rb +123 -0
- data/examples/07_agent_chaining/client.rb +120 -0
- data/examples/07_agent_chaining/server.rb +150 -0
- data/examples/08_interrupted_states/client.rb +148 -0
- data/examples/08_interrupted_states/server.rb +142 -0
- data/examples/09_multipart/client.rb +117 -0
- data/examples/09_multipart/server.rb +97 -0
- data/examples/10_auth_headers/client.rb +92 -0
- data/examples/10_auth_headers/server.rb +98 -0
- data/examples/11_sqlite_storage/Brewfile +1 -0
- data/examples/11_sqlite_storage/Gemfile +9 -0
- data/examples/11_sqlite_storage/client.rb +114 -0
- data/examples/11_sqlite_storage/run +154 -0
- data/examples/11_sqlite_storage/server.rb +131 -0
- data/examples/README.md +384 -0
- data/lib/simple_a2a/client/sse.rb +15 -0
- data/lib/simple_a2a/server/app.rb +131 -45
- data/lib/simple_a2a/server/base.rb +19 -17
- data/lib/simple_a2a/server/broadcast_registry.rb +24 -0
- data/lib/simple_a2a/server/multi_agent.rb +1 -1
- data/lib/simple_a2a/server/push_config_store.rb +29 -0
- data/lib/simple_a2a/server/push_sender.rb +1 -0
- data/lib/simple_a2a/server/task_broadcast.rb +46 -0
- data/lib/simple_a2a/version.rb +1 -1
- metadata +38 -20
- data/lib/simple_a2a/server/event_router.rb +0 -50
data/docs/examples/interrupted-states.md
ADDED
@@ -0,0 +1,114 @@

# 08 Interrupted States

**Run it:**

```bash
bundle exec ruby examples/run 08_interrupted_states
```

**What it shows:** `input_required` and `auth_required` — the two interrupted task states — with multi-turn conversations threaded by `message.context_id`.

---

## Files

| File | Purpose |
|---|---|
| `examples/08_interrupted_states/server.rb` | Two agents: `OrderAgent` (`input_required`) and `VaultAgent` (`auth_required`) |
| `examples/08_interrupted_states/client.rb` | Drives two separate multi-turn conversations, each as a sequence of `tasks/send` calls |

---

## The scenario

Two independent conversation threads run in sequence:

**Thread A — `input_required`**

1. Client sends "start order" → server returns `input_required` asking "what would you like?"
2. Client sends "one large pizza" with the same `context_id` → server returns `completed` with the order confirmation.

**Thread B — `auth_required`**

1. Client sends "get secret" → server returns `auth_required`.
2. Client sends wrong token → server stays in `auth_required` (repeated challenge).
3. Client sends correct token → server returns `completed` with the protected data.

---

## Key concept — `context_id`

Each conversational turn is a separate `tasks/send` call. The server uses `message.context_id` to look up the in-progress conversation state:

```ruby
# Client — build a message with an explicit context_id
def msg(text, context_id:)
  A2A::Models::Message.new(
    role: A2A::Models::Types::Role::USER,
    parts: [A2A::Models::Part.text(text)],
    context_id: context_id
  )
end

context = SecureRandom.uuid

# Turn 1
task1 = client.send_task(message: msg("start order", context_id: context))
# task1.status.state => "input_required"

# Turn 2
task2 = client.send_task(message: msg("one large pizza", context_id: context))
# task2.status.state => "completed"
```

---

## Server — executor state

Each executor keeps a mutex-protected hash keyed by `context_id`:

```ruby
class OrderExecutor < A2A::Server::AgentExecutor
  def initialize
    @pending = {}
    @mutex = Mutex.new
  end

  def call(ctx)
    cid = ctx.message.context_id

    if @mutex.synchronize { @pending[cid] }
      # Turn 2 — complete the order
      item = ctx.message.text_content.strip
      @mutex.synchronize { @pending.delete(cid) }
      ctx.task.complete!(artifacts: [
        A2A::Models::Artifact.new(
          parts: [A2A::Models::Part.text("Order placed: #{item}")]
        )
      ])
    else
      # Turn 1 — ask what they want
      @mutex.synchronize { @pending[cid] = true }
      ctx.task.require_input!(message: A2A::Models::Message.new(
        role: A2A::Models::Types::Role::AGENT,
        parts: [A2A::Models::Part.text("What would you like to order?")]
      ))
    end
  end
end
```

The `VaultAgent` follows the same pattern with `require_auth!` and a token check that stays in `auth_required` on a wrong value, demonstrating that the interrupted state is non-terminal and resumable.

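The repeated-challenge loop described above can be sketched without the gem as a plain state machine. The `VaultTurns` class and its `handle` method below are illustrative stand-ins, not the library API; only the turn logic and the state names come from the example.

```ruby
# Minimal stand-in for the VaultAgent turn logic: a context stays in
# "auth_required" until the correct token arrives, then completes.
class VaultTurns
  def initialize(token)
    @token = token
    @challenged = {}   # context_id => true once the challenge was issued
    @mutex = Mutex.new
  end

  # Returns the state the task would end the turn in.
  def handle(context_id, text)
    @mutex.synchronize do
      if !@challenged[context_id]
        @challenged[context_id] = true
        "auth_required"              # first turn: issue the challenge
      elsif text == @token
        @challenged.delete(context_id)
        "completed"                  # correct token: release the data
      else
        "auth_required"              # wrong token: repeat the challenge
      end
    end
  end
end

vault = VaultTurns.new("s3cret")
cid = "ctx-1"
puts vault.handle(cid, "get secret")  # auth_required
puts vault.handle(cid, "wrong")       # auth_required
puts vault.handle(cid, "s3cret")      # completed
```

The mutex mirrors the executor code above: turn handlers may run on concurrent request threads, so the per-context state must be guarded.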
---

## Protocol coverage

| Spec section | What the demo shows |
|---|---|
| `input_required` state | Task transitions to an interrupted state; `status.message` carries the question |
| `auth_required` state | Task blocks until the client provides valid credentials |
| Multi-turn conversations | Each follow-up is a new `tasks/send` carrying the same `context_id` |
| `Message.context_id` | Executor uses the message's `contextId` field to thread state across separate task calls |
| `TaskState` interrupted vs terminal | `input_required` and `auth_required` are non-terminal; the task can still complete |
| Rejection on bad auth | VaultAgent stays in `auth_required` on a wrong token, demonstrating repeated challenge |

data/docs/examples/multipart.md
ADDED
@@ -0,0 +1,103 @@

# 09 Multipart Artifacts

**Run it:**

```bash
bundle exec ruby examples/run 09_multipart
```

**What it shows:** a single artifact carrying all four `Part` types — `text`, `json`, `binary`, and `url` — and how the client uses predicate methods to dispatch on part type.

---

## Files

| File | Purpose |
|---|---|
| `examples/09_multipart/server.rb` | `ReportExecutor` builds one four-part artifact covering every `Part` constructor |
| `examples/09_multipart/client.rb` | Iterates artifact parts; uses `text?`, `json?`, `raw?`, `url?` to process each type |

---

## The four Part types

| Constructor | Transport format | Client accessor |
|---|---|---|
| `Part.text(str, media_type:)` | Plain string | `part.text` |
| `Part.json(hash, filename:)` | JSON object | `part.data` |
| `Part.binary(bytes, media_type:, filename:)` | Base64-encoded string | `part.decoded_bytes` |
| `Part.from_url(url, media_type:, filename:)` | URL string, no inline content | `part.url` |

---

## Server — `ReportExecutor`

```ruby
def call(ctx)
  topic = ctx.message.text_content.strip

  summary = A2A::Models::Part.text(<<~TEXT, media_type: "text/plain")
    Report on: #{topic}
    Generated at #{Time.now.utc.iso8601}
  TEXT

  metadata = A2A::Models::Part.json(
    { "topic" => topic, "word_count" => topic.split.length, "confidence" => 0.92 },
    filename: "metadata.json"
  )

  csv_bytes = build_csv(topic).b
  csv_part = A2A::Models::Part.binary(
    csv_bytes, media_type: "text/csv", filename: "term_scores.csv"
  )

  url_part = A2A::Models::Part.from_url(
    "https://en.wikipedia.org/wiki/Special:Search?search=#{URI.encode_uri_component(topic)}",
    media_type: "text/html",
    filename: "reference.html"
  )

  ctx.task.complete!(artifacts: [
    A2A::Models::Artifact.new(name: "report", parts: [summary, metadata, csv_part, url_part])
  ])
end
```

---

## Client — type-safe dispatch

```ruby
artifact.parts.each do |part|
  if part.text?
    puts part.text

  elsif part.json?
    puts JSON.pretty_generate(part.data)

  elsif part.raw?
    bytes = part.decoded_bytes
    puts "#{bytes.bytesize} bytes (#{part.media_type})"
    puts bytes.force_encoding("UTF-8")

  elsif part.url?
    puts part.url
  end
end
```

The predicates `text?`, `json?`, `raw?`, and `url?` are mutually exclusive. `decoded_bytes` reverses the base64 encoding that the transport layer applies to binary parts.

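The round trip that `decoded_bytes` reverses can be seen with nothing but the standard library. The `wire` hash below is an illustrative stand-in for the serialized part, not the actual wire schema:

```ruby
require "base64"

# What the transport layer does to a binary part (sketch): the raw bytes
# are base64-encoded into the JSON payload; decoding reverses it losslessly.
raw = "term,score\nalpha,0.92\nbeta,0.88\n".b
wire = { "bytes" => Base64.strict_encode64(raw), "mediaType" => "text/csv" }

decoded = Base64.strict_decode64(wire["bytes"])
puts decoded == raw        # true — the round trip is lossless
puts decoded.encoding      # binary (ASCII-8BIT) until an encoding is forced
puts decoded.force_encoding("UTF-8")
```

This is also why the client code above calls `force_encoding("UTF-8")` before printing: decoded bytes come back binary-encoded, and the caller decides how to interpret them.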
---

## Protocol coverage

| Spec section | What the demo shows |
|---|---|
| `Part.text` | Plain prose with `media_type: "text/plain"` |
| `Part.json` | Structured data as a Ruby Hash; serialized as a JSON object in the artifact |
| `Part.binary` | Raw bytes base64-encoded for transport; `decoded_bytes` restores the original |
| `Part.from_url` | URL reference with `media_type` and `filename`; no content is inlined |
| `Part` predicates | `text?`, `json?`, `raw?`, `url?` allow type-safe dispatch on the receiving end |
| Multi-part `Artifact` | One artifact contains all four parts; clients can select the representation they need |
| `Artifact.name` | Named artifact (`"report"`) for client-side identification |

data/docs/examples/push-notifications.md
ADDED
@@ -0,0 +1,117 @@

# 06 Push Notifications

**Run it:**

```bash
bundle exec ruby examples/run 06_push_notifications
```

**What it shows:** the full push notification CRUD cycle — registering a webhook, confirming it, receiving out-of-band task events, then deleting the config.

---

## Files

| File | Purpose |
|---|---|
| `examples/06_push_notifications/server.rb` | `PushExecutor` that delivers `TaskStatusUpdateEvent` payloads to the registered webhook after each step |
| `examples/06_push_notifications/client.rb` | Runs a local WEBrick webhook receiver on port 9293, submits a task, and exercises all four push-notification RPC methods |

---

## The scenario

The client holds no persistent SSE connection. Instead:

1. It starts a local HTTP receiver on port 9293 backed by a `Queue`.
2. It submits a task via `tasks/send` (synchronous — gets back the initial task object).
3. It registers the webhook URL via `tasks/pushNotification/set`.
4. The server `PushExecutor` runs its work steps; after each step it looks up the push config and delivers a `TaskStatusUpdateEvent` to the webhook.
5. The client's `Queue` collects payloads; the main thread waits until it receives the final `completed` event.
6. The client confirms the config exists (`tasks/pushNotification/get`), lists all configs (`tasks/pushNotification/list`), then removes the config (`tasks/pushNotification/delete`).

---

## Server design

`PushExecutor` is initialized with a shared `PushSender` and `PushConfigStore`. After each work step it delivers an event:

```ruby
class PushExecutor < A2A::Server::AgentExecutor
  def initialize(push_sender:, push_config_store:)
    @push_sender = push_sender
    @push_config_store = push_config_store
  end

  def call(ctx)
    ctx.task.start!
    deliver(ctx)

    3.times do |i|
      sleep 1
      ctx.emit_artifact(...)
      deliver(ctx)
    end

    ctx.task.complete!
    deliver(ctx)
  end

  private

  def deliver(ctx)
    config = @push_config_store.get(ctx.task.id)
    return unless config
    event = A2A::Models::TaskStatusUpdateEvent.new(
      task_id: ctx.task.id, status: ctx.task.status, final: ctx.task.terminal?
    )
    @push_sender.deliver(config, event)
  end
end
```

The same `push_config_store` instance is passed to both the executor and `A2A.server(push_config_store:)`, so the server's built-in RPC handlers and the executor share one store.

---

## Client design

WEBrick in a background thread writes received payloads into a `Queue`; the main thread reads from it:

```ruby
queue = Queue.new

server = WEBrick::HTTPServer.new(Port: 9293, Logger: ..., AccessLog: [])
server.mount_proc("/webhook") do |req, res|
  queue.push(JSON.parse(req.body))
  res.status = 200
end
Thread.new { server.start }

# Register the webhook
client.rpc_call("tasks/pushNotification/set", {
  "id" => task.id,
  "config" => { "webhookUrl" => "http://localhost:9293/webhook" }
})

# Wait for the final push
loop do
  payload = queue.pop
  break if payload.dig("status", "state") == "completed"
end
```

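The webhook-plus-`Queue` pattern above can be exercised end to end with only the standard library. This is a gem-free sketch: the raw-socket listener stands in for the WEBrick receiver and the `Net::HTTP.post` loop stands in for `PushSender`; all names are illustrative.

```ruby
require "socket"
require "json"
require "net/http"

queue = Queue.new
listener = TCPServer.new("127.0.0.1", 0)   # port 0: let the OS pick a free port
port = listener.addr[1]

# Stand-in webhook receiver: parse two minimal HTTP POSTs, hand each JSON
# body to the main thread via the queue, and acknowledge with 200 OK.
receiver = Thread.new do
  2.times do
    sock = listener.accept
    sock.gets                                  # request line
    headers = {}
    while (line = sock.gets) && line.strip != ""
      key, value = line.strip.split(": ", 2)
      headers[key.downcase] = value
    end
    body = sock.read(headers["content-length"].to_i)
    queue.push(JSON.parse(body))
    sock.write("HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
    sock.close
  end
end

# Stand-in push sender: one POST per status change, no persistent connection.
["working", "completed"].each do |state|
  Net::HTTP.post(
    URI("http://127.0.0.1:#{port}/webhook"),
    JSON.generate({ "status" => { "state" => state } }),
    "Content-Type" => "application/json"
  )
end

# As in the demo client, the main thread blocks until the final event arrives.
loop { break if queue.pop.dig("status", "state") == "completed" }
receiver.join
```

The `Queue` is the whole synchronization story: the HTTP thread produces, the main thread consumes, and `queue.pop` blocks for free between deliveries.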
---

## Protocol coverage

| Spec section | What the demo shows |
|---|---|
| `tasks/pushNotification/set` | Client registers a `PushNotificationConfig` containing a webhook URL |
| `tasks/pushNotification/get` | Client confirms the config is stored by retrieving it by task ID |
| `tasks/pushNotification/list` | Client lists all registered push configs on the server |
| `tasks/pushNotification/delete` | Client removes the config; list confirms zero configs remain |
| `PushNotificationConfig` model | `webhookUrl` and optional `authenticationInfo` fields |
| `PushSender` | Server delivers `TaskStatusUpdateEvent` payloads as HTTP POSTs to the webhook URL |
| `AgentCapabilities.push_notifications` | `true` in the agent card; server rejects push RPC calls if `false` |
| Out-of-band delivery | Client receives progress updates without maintaining any persistent connection |

data/docs/examples/resubscribe.md
ADDED
@@ -0,0 +1,129 @@

# 04 Resubscribe

**Run it:**

```bash
bundle exec ruby examples/run 04_resubscribe
```

**What it shows:** two independent SSE subscribers watching the same running task — one started with `tasks/sendSubscribe`, the other attached mid-stream with `tasks/resubscribe`.

---

## The scenario

A five-step analysis pipeline runs for ~6 seconds. Subscriber 1 starts the task and begins receiving events immediately. As soon as it has the task ID, Subscriber 2 calls `tasks/resubscribe`. Subscriber 2's **first event is the current Task snapshot** — the live state of the task at the moment it joined — and it then receives all remaining events just like Subscriber 1.

This demonstrates three protocol guarantees from the A2A specification:

1. **Snapshot on join** — `tasks/resubscribe` always delivers the current Task object as its first SSE event, so late-joining clients never miss the task's current state.
2. **Independent fan-out** — each subscriber has its own event queue; one slow or disconnecting subscriber does not affect the other.
3. **Single completion** — the task completes once; both streams close cleanly when the executor emits its final status event.

---

## Expected output

```
=== Agent Card ===
Name: AnalysisAgent
Description: Multi-step analysis pipeline — demonstrates tasks/resubscribe
Streaming: true

[subscriber-1] tasks/sendSubscribe (new task)    [subscriber-2] tasks/resubscribe (join mid-stream)
────────────────────────────────────────────────────────────────────────────────────
[subscriber-1] status: working
[subscriber-2] (resubscribing to task c29189f8…)
[subscriber-2] (snapshot) state=working, steps so far: 0
[subscriber-1] Step 1/5: Collecting data from sources
[subscriber-2] Step 1/5: Collecting data from sources
[subscriber-1] Step 2/5: Filtering and normalising records
[subscriber-2] Step 2/5: Filtering and normalising records
...
[subscriber-1] status (final): completed
[subscriber-2] status (final): completed

=== Summary ===
Subscriber 1 — events received : 7
Subscriber 1 — artifact steps  : 5

Subscriber 2 — events received : 7
Subscriber 2 — task snapshot   : yes
Subscriber 2 — artifact steps  : 5
Subscriber 2 — joined at step  : 1 of 5

Both streams terminated cleanly: true
```

---

## How it works

### Server — `AnalysisExecutor`

A straightforward streaming executor. Emits one `TaskArtifactUpdateEvent` per step with a 1.2-second pause between steps, giving the client time to resubscribe mid-stream.

```ruby
class AnalysisExecutor < A2A::Server::AgentExecutor
  STEPS = [
    "Collecting data from sources",
    "Filtering and normalising records",
    "Running statistical analysis",
    "Detecting anomalies",
    "Generating final report"
  ].freeze

  def call(ctx)
    ctx.task.start!
    ctx.emit_status

    STEPS.each_with_index do |description, i|
      sleep 1.2
      ctx.emit_artifact(A2A::Models::Artifact.new(
        index: i,
        parts: [A2A::Models::Part.text("Step #{i + 1}/#{STEPS.length}: #{description}")],
        last_chunk: true
      ), last_chunk: true)
    end

    ctx.task.complete!
    ctx.emit_status(final: true)
  end
end
```

### Client — concurrent subscribers via `Async`

Both subscribers run inside a single `Async` reactor. Subscriber 1 runs in a background async task so it doesn't block; once it captures the task ID from the first status event, the main fiber calls `resubscribe` synchronously.

```ruby
Async do |reactor|
  captured_task_id = nil

  # Subscriber 1 — starts the task
  sub1_task = reactor.async do
    A2A.sse_client(url: URL).send_subscribe(message: msg) do |event|
      captured_task_id ||= event.task_id if event.respond_to?(:task_id)
      # … print event …
    end
  end

  # Wait for the task ID, then attach Subscriber 2
  loop { break if captured_task_id; reactor.sleep(0.05) }

  A2A.sse_client(url: URL).resubscribe(task_id: captured_task_id) do |event|
    case event
    when Hash # Task snapshot — first event only
      puts "(snapshot) state=#{event.dig('status', 'state')}"
    when A2A::Models::TaskStatusUpdateEvent, A2A::Models::TaskArtifactUpdateEvent
      # … print event …
    end
  end

  sub1_task.wait
end
```

### Under the hood — `TaskBroadcast`

When `handle_send_subscribe` creates the task, it also creates a `TaskBroadcast` and registers it in the `BroadcastRegistry` under the task ID. Each SSE subscriber — whether the original or a resubscriber — gets its own `RactorQueue` from `broadcast.subscribe`. The executor calls `ctx.emit_status` and `ctx.emit_artifact`, which call `broadcast.publish`, which calls `async_push` on every subscriber queue. A pump loop in each SSE handler calls `async_pop` and writes SSE frames to the HTTP body. When the executor finishes, `broadcast.close` pushes the `DONE` sentinel to all queues and the pump loops exit cleanly.

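The fan-out just described — per-subscriber queues, publish-to-all, and a `DONE` sentinel on close — can be sketched gem-free with plain threads and `Queue` in place of the library's fiber-aware `RactorQueue`. The `Broadcast` class below is an illustrative stand-in, not the library's `TaskBroadcast`:

```ruby
# Each subscriber gets its own Queue; publish pushes to all of them; close
# pushes a DONE sentinel so every pump loop exits cleanly.
class Broadcast
  DONE = Object.new

  def initialize
    @queues = []
    @mutex = Mutex.new
  end

  def subscribe
    q = Queue.new
    @mutex.synchronize { @queues << q }
    q
  end

  def publish(event)
    @mutex.synchronize { @queues.each { |q| q.push(event) } }
  end

  def close
    @mutex.synchronize { @queues.each { |q| q.push(DONE) } }
  end
end

broadcast = Broadcast.new

# Two "SSE pump loops", each draining its own queue until the sentinel.
subscribers = 2.times.map do
  q = broadcast.subscribe
  Thread.new do
    events = []
    while (e = q.pop) != Broadcast::DONE
      events << e
    end
    events
  end
end

%w[step-1 step-2 completed].each { |e| broadcast.publish(e) }
broadcast.close

results = subscribers.map(&:value)
puts results.inspect   # both subscribers saw every event, in order
```

Because each queue is independent, a slow consumer only delays its own pump loop — the same property the demo's "Independent fan-out" guarantee relies on.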
data/docs/examples/sqlite-storage.md
ADDED
@@ -0,0 +1,131 @@

# 11 SQLite3 Persistent Storage

**Run it:**

```bash
bundle exec ruby examples/run 11_sqlite_storage
```

**What it shows:** replacing the default in-memory store with a custom `Storage::Base` subclass backed by SQLite3, proving that tasks survive a full server restart.

---

## Files

| File | Purpose |
|---|---|
| `examples/11_sqlite_storage/server.rb` | `SqliteStorage < A2A::Storage::Base` and the `PersistentEchoAgent` server |
| `examples/11_sqlite_storage/client.rb` | `populate` phase (send tasks, save IDs) and `verify` phase (fetch tasks after restart) |
| `examples/11_sqlite_storage/run` | Two-phase lifecycle: populate → stop → restart → verify → cleanup |
| `examples/11_sqlite_storage/Gemfile` | Extends the project gemspec + adds `sqlite3` |
| `examples/11_sqlite_storage/Brewfile` | Declares the `sqlite3` binary dependency for Homebrew |

---

## Dependencies

This demo has its own `Gemfile` and `Brewfile`. The `run` script handles setup before spawning anything:

1. **Binary check** — verifies `sqlite3` is on `$PATH`; on macOS runs `brew bundle install --file=Brewfile` if it is not. Other platforms must provide the binary.
2. **Gem install** — runs `bundle install` with the local `Gemfile`, which uses `gemspec path: "../../"` to pull in all project dependencies plus `sqlite3`.

Once setup completes, `server.rb` can simply `require "sqlite3"` with no inline install logic.

---

## The two-phase demo

**Phase 1 — populate**

The server starts with an empty database. The client sends three tasks (alpha, beta, gamma) and writes their IDs to a temp JSON file, then the server stops.

**Phase 2 — verify**

The server restarts pointing at the same database file. At startup it prints the existing task count (`existing tasks in DB: 3`), confirming the data survived. The client reads the saved IDs, fetches each task from the freshly booted server, and asserts all three are present and `completed`.

---

## `SqliteStorage` implementation

`SqliteStorage` subclasses `A2A::Storage::Base` and implements all five required methods:

```ruby
class SqliteStorage < A2A::Storage::Base
  def initialize(path)
    @db = SQLite3::Database.new(path)
    @mutex = Mutex.new
    @db.execute("PRAGMA journal_mode=WAL")
    @db.execute("PRAGMA busy_timeout=5000")
    setup_schema
  end

  def save(task)
    @mutex.synchronize do
      now = Time.now.iso8601
      @db.execute(
        "INSERT INTO tasks (id, data, created_at, updated_at) VALUES (?, ?, ?, ?) " \
        "ON CONFLICT(id) DO UPDATE SET data=excluded.data, updated_at=excluded.updated_at",
        [task.id, task.to_h.to_json, now, now]
      )
    end
    task
  end

  def find(id)
    row = @mutex.synchronize { @db.get_first_row("SELECT data FROM tasks WHERE id=?", [id]) }
    return nil unless row
    A2A::Models::Task.from_hash(JSON.parse(row[0]))
  end

  def find!(id)
    find(id) or raise A2A::TaskNotFoundError, "Task #{id} not found"
  end

  def list
    rows = @mutex.synchronize { @db.execute("SELECT data FROM tasks ORDER BY rowid") }
    rows.map { |row| A2A::Models::Task.from_hash(JSON.parse(row[0])) }
  end

  def delete(id)
    @mutex.synchronize { @db.execute("DELETE FROM tasks WHERE id=?", [id]) }
  end
end
```

Tasks are stored as JSON blobs. `from_hash` reconstructs the full `Task` object including nested `TaskStatus`, `Artifact`, and `Part` models.

WAL mode allows concurrent readers while a write is in progress — important when multiple fibers are saving and fetching tasks simultaneously in a Falcon server.

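The JSON-blob round trip described above can be demonstrated without SQLite or the gem: what matters for persistence is that the serialized hash survives `generate`/`parse` unchanged, since that is all `from_hash` needs on the other side of a restart. The hash shape below is an illustrative stand-in for a real task:

```ruby
require "json"

# A nested task-like hash, as it might be serialized into the `data` column.
task = {
  "id" => "task-1",
  "status" => { "state" => "completed" },
  "artifacts" => [
    { "parts" => [{ "type" => "text", "text" => "hello" }] }
  ]
}

blob = JSON.generate(task)    # what save() would write to the data column
restored = JSON.parse(blob)   # what find() would hand to Task.from_hash

raise "lossy round trip" unless restored == task
puts restored.dig("artifacts", 0, "parts", 0, "text")  # → hello
```

Binary part payloads fit this scheme too: as shown in the multipart example, they are base64-encoded strings by the time they reach the task hash, so the JSON blob stays valid UTF-8.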
---

## Injecting the storage

The storage instance is passed to `A2A.server` via the `storage:` keyword:

```ruby
storage = SqliteStorage.new(db_path)
A2A.server(agent_card: card, executor: EchoExecutor.new, storage: storage).run
```

No other library configuration is needed. The server's built-in RPC handlers (`tasks/get`, `tasks/list`, `tasks/cancel`, etc.) all use the injected store.

---

## Protocol coverage

| Spec section | What the demo shows |
|---|---|
| `Storage::Base` injection | `A2A.server(storage:)` accepts any `Storage::Base` subclass — no library changes needed |
| `SqliteStorage#save` | Tasks serialized to JSON and upserted via `ON CONFLICT DO UPDATE` |
| `SqliteStorage#find!` | Task fetched by ID across process boundaries; raises `TaskNotFoundError` if missing |
| `SqliteStorage#list` | All stored tasks returned in insertion order |
| `SqliteStorage#size` | Task count reported at server startup to confirm DB contents |
| Cross-restart persistence | Tasks created in server process 1 are visible to server process 2 via the shared DB file |
| WAL mode concurrency | `PRAGMA journal_mode=WAL` allows concurrent readers during writes |
| `Brewfile` / `Gemfile` pattern | Per-demo dependency files keep application code free of setup logic |

---

## Related guide

See [Custom Storage](../guides/custom-storage.md) for a detailed walkthrough of the `Storage::Base` interface and adapter patterns for other databases.
data/docs/examples/streaming.md
CHANGED
@@ -37,10 +37,7 @@ def call(ctx)
   text = i.zero? ? word : " #{word}"
 
   artifact = A2A::Models::Artifact.new(
-
-    parts: [A2A::Models::Part.text(text)],
-    append: i > 0,
-    last_chunk: i == WORDS.length - 1
+    parts: [A2A::Models::Part.text(text)]
   )
 
   ctx.emit_artifact(artifact, append: i > 0, last_chunk: i == WORDS.length - 1)

@@ -80,7 +80,10 @@ A JWT is generated and sent as `Authorization: Bearer <token>`. The token payloa
 ```
 
 ```ruby
-auth = A2A::Models::AuthenticationInfo.new(
+auth = A2A::Models::AuthenticationInfo.new(
+  scheme: "bearer",
+  value: "" # not used for JWT; PushSender generates the token from the private key
+)
 ```
 
 ### Static token
data/docs/guides/streaming.md
CHANGED
@@ -16,10 +16,7 @@ class StreamingExecutor < A2A::Server::AgentExecutor
     ["Thinking… ", "Processing… ", "Done!"].each_with_index do |chunk, i|
       last = i == 2
       artifact = A2A::Models::Artifact.new(
-
-        parts: [A2A::Models::Part.text(chunk)],
-        append: i > 0, # true for chunks after the first
-        last_chunk: last
+        parts: [A2A::Models::Part.text(chunk)]
       )
       ctx.emit_artifact(artifact, append: i > 0, last_chunk: last)
     end

@@ -64,6 +61,38 @@ The block is called for each parsed SSE event. Unrecognized event types yield a
 
 ---
 
+---
+
+## Resubscribing to an existing task
+
+`tasks/resubscribe` lets a client attach a new SSE stream to a task that is already running — useful when a connection drops and the client needs to reconnect without re-sending the original message.
+
+```ruby
+client = A2A.sse_client(url: "http://localhost:9292")
+
+# The first event yielded is the current Task snapshot (a Hash, no `type` field).
+# Subsequent events are the live stream from the executor.
+client.resubscribe(task_id: "existing-task-id") do |event|
+  case event
+  when Hash
+    puts "reconnected — current state: #{event['status']['state']}"
+  when A2A::Models::TaskStatusUpdateEvent
+    puts "[status] #{event.status.state} final=#{event.final}"
+  when A2A::Models::TaskArtifactUpdateEvent
+    print event.artifact.parts.map(&:text).join
+  end
+end
+```
+
+`resubscribe` raises (server returns `UnsupportedOperationError`) if:
+- The task ID does not exist (`TaskNotFoundError`)
+- The task is already in a terminal state
+- The task was not started with `tasks/sendSubscribe` (not in the broadcast registry)
+
+Multiple clients may resubscribe to the same task concurrently — each gets an independent event queue backed by `RactorQueue`.
+
+---
+
 ## AgentCard declaration
 
 Advertise streaming support in your AgentCard:

@@ -72,4 +101,4 @@ Advertise streaming support in your AgentCard:
 capabilities = A2A::Models::AgentCapabilities.new(streaming: true)
 ```
 
-Clients can check `card.capabilities.streaming` before using `send_subscribe`.
+Clients can check `card.capabilities.streaming` before using `send_subscribe` or `resubscribe`.