agent99 0.0.4 → 0.0.5
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/A2A_SPEC-dev.md +1829 -0
- data/CHANGELOG.md +31 -0
- data/COMMITS.md +196 -0
- data/DOCS.md +96 -0
- data/README.md +200 -78
- data/Rakefile +62 -0
- data/docs/AI/htm.md +215 -0
- data/docs/AI/htm.rb +141 -0
- data/docs/AI/htm_demo.db +0 -0
- data/docs/AI/notes_on_htm_implementation.md +1319 -0
- data/docs/AI/some_code.rb +692 -0
- data/docs/advanced-topics/a2a-protocol.md +13 -0
- data/docs/{control_actions.md → advanced-topics/control-actions.md} +2 -0
- data/docs/advanced-topics/model-context-protocol.md +4 -0
- data/docs/advanced-topics/multi-agent-processing.md +674 -0
- data/docs/agent-development/request-response-handling.md +512 -0
- data/docs/api-reference/agent99-base.md +463 -0
- data/docs/api-reference/message-clients.md +495 -0
- data/docs/api-reference/registry-client.md +470 -0
- data/docs/api-reference/schemas.md +518 -0
- data/docs/assets/css/custom.css +27 -0
- data/docs/assets/images/agent-lifecycle.svg +73 -0
- data/docs/assets/images/agent-registry-process.svg +86 -0
- data/docs/assets/images/agent-registry-processes.svg +114 -0
- data/docs/assets/images/agent-types-overview.svg +51 -0
- data/docs/assets/images/agent99-architecture.svg +85 -0
- data/docs/assets/images/agent99_logo.png +0 -0
- data/docs/assets/images/control-actions-state.svg +83 -0
- data/docs/assets/images/knowledge-graph.svg +77 -0
- data/docs/assets/images/message-processing-flow.svg +148 -0
- data/docs/assets/images/multi-agent-system.svg +66 -0
- data/docs/assets/images/proxy-pattern-sequence.svg +48 -0
- data/docs/assets/images/request-flow.svg +97 -0
- data/docs/assets/images/request-processing-lifecycle.svg +50 -0
- data/docs/assets/images/request-response-sequence.svg +39 -0
- data/docs/{agent_lifecycle.md → core-concepts/agent-lifecycle.md} +2 -0
- data/docs/core-concepts/agent-types.md +255 -0
- data/docs/{architecture.md → core-concepts/architecture.md} +5 -5
- data/docs/{what_is_an_agent.md → core-concepts/what-is-an-agent.md} +1 -1
- data/docs/diagrams/message-flow-sequence.svg +198 -0
- data/docs/diagrams/p2p-network-topology.svg +181 -0
- data/docs/diagrams/smart-transport-routing.svg +165 -0
- data/docs/diagrams/three-layer-architecture.svg +77 -0
- data/docs/diagrams/transport-extension-api.svg +309 -0
- data/docs/diagrams/transport-extension-architecture.svg +234 -0
- data/docs/diagrams/transport-selection-flowchart.svg +264 -0
- data/docs/examples/advanced-examples.md +951 -0
- data/docs/examples/basic-examples.md +268 -0
- data/docs/{agent_registry_processes.md → framework-components/agent-registry.md} +1 -1
- data/docs/{message_processing.md → framework-components/message-processing.md} +3 -1
- data/docs/getting-started/basic-example.md +306 -0
- data/docs/getting-started/installation.md +160 -0
- data/docs/getting-started/overview.md +64 -0
- data/docs/getting-started/quick-start.md +179 -0
- data/docs/index.md +97 -0
- data/examples/DEMO.md +148 -0
- data/examples/README.md +50 -0
- data/examples/bad_agent.rb +32 -0
- data/examples/registry.rb +0 -8
- data/examples/run_demo.rb +433 -0
- data/lib/agent99/amqp_message_client.rb +2 -2
- data/lib/agent99/base.rb +1 -1
- data/lib/agent99/message_processing.rb +6 -12
- data/lib/agent99/registry_client.rb +4 -1
- data/lib/agent99/version.rb +1 -1
- data/lib/agent99.rb +1 -1
- data/mkdocs.yml +195 -0
- data/p2p_plan.md +533 -0
- data/p2p_roadmap.md +299 -0
- data/registry_plan.md +1818 -0
- metadata +89 -32
- data/docs/README.md +0 -57
- data/docs/diagrams/agent_registry_processes.dot +0 -42
- data/docs/diagrams/agent_registry_processes.png +0 -0
- data/docs/diagrams/high_level_architecture.dot +0 -26
- data/docs/diagrams/high_level_architecture.png +0 -0
- data/docs/diagrams/request_flow.dot +0 -42
- data/docs/diagrams/request_flow.png +0 -0
- /data/docs/{advanced_features.md → advanced-topics/advanced-features.md} +0 -0
- /data/docs/{extending_the_framework.md → advanced-topics/extending-the-framework.md} +0 -0
- /data/docs/{custom_agent_implementation.md → agent-development/custom-agent-implementation.md} +0 -0
- /data/docs/{error_handling_and_logging.md → agent-development/error-handling-and-logging.md} +0 -0
- /data/docs/{schema_definition.md → agent-development/schema-definition.md} +0 -0
- /data/docs/{api_reference.md → api-reference/overview.md} +0 -0
- /data/docs/{agent_discovery.md → framework-components/agent-discovery.md} +0 -0
- /data/docs/{messaging_system.md → framework-components/messaging-system.md} +0 -0
- /data/docs/{breaking_change_v0.0.4.md → operations/breaking-changes.md} +0 -0
- /data/docs/{configuration.md → operations/configuration.md} +0 -0
- /data/docs/{preformance_considerations.md → operations/performance-considerations.md} +0 -0
- /data/docs/{security.md → operations/security.md} +0 -0
- /data/docs/{troubleshooting.md → operations/troubleshooting.md} +0 -0
@@ -0,0 +1,951 @@

# Advanced Examples

This section contains advanced examples demonstrating sophisticated Agent99 patterns, distributed architectures, and real-world use cases.

## Microservices Architecture Example

A complete e-commerce microservices system using Agent99:

### Order Processing Service

```ruby
require 'agent99'
require 'securerandom'
require 'time'
require 'simple_json_schema_builder'

class OrderRequest < SimpleJsonSchemaBuilder::Base
  object do
    object :header, schema: Agent99::HeaderSchema
    string :customer_id, required: true, format: :uuid
    array :items, required: true, minItems: 1 do
      object do
        string :product_id, required: true, format: :uuid
        integer :quantity, required: true, minimum: 1
        number :unit_price, required: true, minimum: 0
      end
    end
    object :shipping_address, required: true do
      string :street, required: true
      string :city, required: true
      string :state, required: true
      string :zip_code, required: true
      string :country, required: true
    end
  end
end

class OrderProcessingAgent < Agent99::Base
  def initialize
    super
    @orders = {}
  end

  def info
    {
      name: self.class.to_s,
      type: :hybrid,
      capabilities: ['order_processing', 'e_commerce'],
      request_schema: OrderRequest.schema
    }
  end

  def process_request(payload)
    order_id = SecureRandom.uuid

    # Validate inventory
    inventory_check = check_inventory(payload[:items])
    unless inventory_check[:available]
      return send_error("Insufficient inventory", "INVENTORY_ERROR", inventory_check)
    end

    # Process payment
    payment_result = process_payment(payload[:customer_id], inventory_check[:total])
    unless payment_result[:success]
      return send_error("Payment failed", "PAYMENT_ERROR", payment_result)
    end

    # Create order
    order = create_order(order_id, payload, payment_result)

    # Trigger fulfillment
    trigger_fulfillment(order)

    # Send notifications
    notify_customer(order)

    send_response(
      order_id: order_id,
      status: 'confirmed',
      total: inventory_check[:total],
      estimated_delivery: (Time.now + 7 * 24 * 60 * 60).iso8601 # 7 days out
    )
  end

  private

  def check_inventory(items)
    inventory_agents = discover_agents(['inventory'])
    return { available: false, error: 'No inventory service' } if inventory_agents.empty?

    inventory_agent = inventory_agents.first
    total = 0

    items.each do |item|
      request = {
        product_id: item[:product_id],
        quantity: item[:quantity]
      }

      response = send_request(inventory_agent[:name], request)
      unless response && response[:available]
        return {
          available: false,
          product_id: item[:product_id],
          error: 'Insufficient stock'
        }
      end

      total += item[:unit_price] * item[:quantity]
    end

    { available: true, total: total }
  end

  def process_payment(customer_id, amount)
    payment_agents = discover_agents(['payment'])
    return { success: false, error: 'No payment service' } if payment_agents.empty?

    payment_agent = payment_agents.first
    request = {
      customer_id: customer_id,
      amount: amount,
      currency: 'USD'
    }

    response = send_request(payment_agent[:name], request)
    response || { success: false, error: 'Payment service unavailable' }
  end

  def create_order(order_id, payload, payment_result)
    order = {
      id: order_id,
      customer_id: payload[:customer_id],
      items: payload[:items],
      shipping_address: payload[:shipping_address],
      payment_id: payment_result[:payment_id],
      total: payment_result[:amount],
      status: 'confirmed',
      created_at: Time.now.iso8601
    }

    @orders[order_id] = order
    order
  end

  def trigger_fulfillment(order)
    fulfillment_agents = discover_agents(['fulfillment'])
    return unless fulfillment_agents.any?

    fulfillment_agent = fulfillment_agents.first
    request = {
      order_id: order[:id],
      items: order[:items],
      shipping_address: order[:shipping_address]
    }

    # Async fulfillment request
    Thread.new do
      send_request(fulfillment_agent[:name], request)
    end
  end

  def notify_customer(order)
    notification_agents = discover_agents(['notification'])
    return unless notification_agents.any?

    notification_agent = notification_agents.first
    request = {
      type: 'order_confirmation',
      customer_id: order[:customer_id],
      order_id: order[:id],
      template_data: {
        order_total: order[:total],
        item_count: order[:items].size
      }
    }

    # Async notification
    Thread.new do
      send_request(notification_agent[:name], request)
    end
  end
end
```

### Inventory Management Service

```ruby
class InventoryAgent < Agent99::Base
  def initialize
    super
    @inventory = load_inventory_data
    @mutex = Mutex.new
  end

  def info
    {
      name: self.class.to_s,
      type: :server,
      capabilities: ['inventory', 'stock_management']
    }
  end

  def process_request(payload)
    product_id = payload.dig(:product_id)
    quantity = payload.dig(:quantity)

    @mutex.synchronize do
      product = @inventory[product_id]

      unless product
        return send_error("Product not found", "PRODUCT_NOT_FOUND")
      end

      available_quantity = product[:stock]

      if available_quantity >= quantity
        # Reserve stock
        @inventory[product_id][:stock] -= quantity
        @inventory[product_id][:reserved] += quantity

        send_response(
          available: true,
          product_id: product_id,
          reserved_quantity: quantity,
          remaining_stock: @inventory[product_id][:stock]
        )
      else
        send_response(
          available: false,
          product_id: product_id,
          requested_quantity: quantity,
          available_quantity: available_quantity
        )
      end
    end
  end

  private

  def load_inventory_data
    # Simulate inventory database
    {
      SecureRandom.uuid => { name: 'Widget A', stock: 100, reserved: 0, price: 29.99 },
      SecureRandom.uuid => { name: 'Widget B', stock: 50, reserved: 0, price: 39.99 },
      SecureRandom.uuid => { name: 'Widget C', stock: 25, reserved: 0, price: 49.99 }
    }
  end
end
```

### Payment Processing Service

```ruby
class PaymentAgent < Agent99::Base
  def initialize
    super
    @payments = {}
  end

  def info
    {
      name: self.class.to_s,
      type: :server,
      capabilities: ['payment', 'billing']
    }
  end

  def process_request(payload)
    customer_id = payload.dig(:customer_id)
    amount = payload.dig(:amount)
    currency = payload[:currency] || 'USD'

    # Simulate payment processing
    payment_id = SecureRandom.uuid

    # Simulate occasional payment failures
    if rand < 0.05 # 5% failure rate
      return send_error("Payment declined", "PAYMENT_DECLINED", {
        reason: 'insufficient_funds',
        payment_id: payment_id
      })
    end

    # Process payment
    payment_result = {
      payment_id: payment_id,
      customer_id: customer_id,
      amount: amount,
      currency: currency,
      status: 'completed',
      processed_at: Time.now.iso8601,
      transaction_id: "txn_#{SecureRandom.hex(8)}"
    }

    @payments[payment_id] = payment_result

    send_response(
      success: true,
      payment_id: payment_id,
      amount: amount,
      transaction_id: payment_result[:transaction_id]
    )
  end
end
```
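
To exercise the three services together, each agent runs its own message loop. The sketch below is a minimal, illustrative launcher: it assumes each agent registers in `initialize` and enters its processing loop via a `run` method, as in the framework's example agents, and that the registry and message broker are already running. The file names and single-process thread layout are purely for demonstration.

```ruby
# run_order_stack.rb -- hypothetical launcher for the services above.
# Assumes the registry service and an AMQP/NATS broker are up, and that
# each agent blocks inside #run until shut down.
require_relative 'inventory_agent'
require_relative 'payment_agent'
require_relative 'order_processing_agent'

threads = [InventoryAgent, PaymentAgent, OrderProcessingAgent].map do |klass|
  Thread.new do
    agent = klass.new  # registers the agent and its capabilities
    agent.run          # enters the message-processing loop
  end
end

threads.each(&:join)
```

In a real deployment each agent would normally live in its own process or container; threads are used here only to keep the example self-contained.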

## Event-Driven Architecture Example

Implementing an event-driven system with Agent99:

### Event Bus Agent

```ruby
class EventBusAgent < Agent99::Base
  def initialize
    super
    @subscribers = {}
    @events = []
  end

  def info
    {
      name: self.class.to_s,
      type: :hybrid,
      capabilities: ['event_bus', 'pub_sub', 'messaging']
    }
  end

  def process_request(payload)
    action = payload.dig(:action)

    case action
    when 'publish'
      publish_event(payload)
    when 'subscribe'
      subscribe_to_events(payload)
    when 'get_events'
      get_events(payload)
    else
      send_error("Unknown action: #{action}", "INVALID_ACTION")
    end
  end

  private

  def publish_event(payload)
    event = {
      id: SecureRandom.uuid,
      type: payload[:event_type],
      source: payload[:source],
      data: payload[:data],
      timestamp: Time.now.iso8601
    }

    @events << event

    # Notify subscribers
    subscribers = @subscribers[event[:type]] || []
    subscribers.each do |subscriber|
      notify_subscriber(subscriber, event)
    end

    send_response(
      event_id: event[:id],
      published: true,
      subscribers_notified: subscribers.size
    )
  end

  def subscribe_to_events(payload)
    event_type = payload[:event_type]
    subscriber = payload[:subscriber]

    @subscribers[event_type] ||= []
    @subscribers[event_type] << subscriber unless @subscribers[event_type].include?(subscriber)

    send_response(
      subscribed: true,
      event_type: event_type,
      subscriber: subscriber
    )
  end

  def get_events(payload)
    event_type = payload[:event_type]
    since = payload[:since] ? Time.parse(payload[:since]) : (Time.now - 3600)

    filtered_events = @events.select do |event|
      (event_type.nil? || event[:type] == event_type) &&
        Time.parse(event[:timestamp]) >= since
    end

    send_response(
      events: filtered_events,
      count: filtered_events.size
    )
  end

  def notify_subscriber(subscriber, event)
    Thread.new do
      begin
        agents = discover_agents([subscriber])
        if agents.any?
          agent = agents.first
          send_request(agent[:name], {
            action: 'handle_event',
            event: event
          })
        end
      rescue => e
        logger.error "Failed to notify subscriber #{subscriber}: #{e.message}"
      end
    end
  end
end
```

### Event Subscriber Example

```ruby
class AuditAgent < Agent99::Base
  def initialize
    super
    @audit_log = []
    subscribe_to_events
  end

  def info
    {
      name: self.class.to_s,
      type: :hybrid,
      capabilities: ['audit', 'logging', 'compliance']
    }
  end

  def process_request(payload)
    action = payload.dig(:action)

    case action
    when 'handle_event'
      handle_event(payload[:event])
    when 'get_audit_log'
      get_audit_log(payload)
    else
      send_error("Unknown action: #{action}", "INVALID_ACTION")
    end
  end

  private

  def subscribe_to_events
    event_bus_agents = discover_agents(['event_bus'])
    return unless event_bus_agents.any?

    event_bus = event_bus_agents.first

    # Subscribe to various event types
    %w[order_created payment_processed user_login].each do |event_type|
      send_request(event_bus[:name], {
        action: 'subscribe',
        event_type: event_type,
        subscriber: 'audit'
      })
    end
  end

  def handle_event(event)
    audit_entry = {
      id: SecureRandom.uuid,
      event_id: event[:id],
      event_type: event[:type],
      source: event[:source],
      timestamp: event[:timestamp],
      data: event[:data],
      processed_at: Time.now.iso8601
    }

    @audit_log << audit_entry

    # Log to file or database
    File.open('audit.log', 'a') do |f|
      f.puts audit_entry.to_json
    end

    send_response(
      audit_id: audit_entry[:id],
      logged: true
    )
  end

  def get_audit_log(payload)
    event_type = payload[:event_type]
    since = payload[:since] ? Time.parse(payload[:since]) : (Time.now - 86400)

    filtered_entries = @audit_log.select do |entry|
      (event_type.nil? || entry[:event_type] == event_type) &&
        Time.parse(entry[:timestamp]) >= since
    end

    send_response(
      audit_entries: filtered_entries,
      count: filtered_entries.size
    )
  end
end
```
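
Any agent can publish through the bus by sending it a `publish` request shaped the way `publish_event` expects. The snippet below is an illustrative publisher sketch; the `OrderEventsPublisher` class is hypothetical, but the payload keys (`action`, `event_type`, `source`, `data`) match the bus agent above, and `order_created` is one of the types the `AuditAgent` subscribes to.

```ruby
class OrderEventsPublisher < Agent99::Base
  def info
    {
      name: self.class.to_s,
      type: :hybrid,
      capabilities: ['order_events_publisher']
    }
  end

  # Announce a newly created order on the event bus.
  def announce_order(order)
    event_bus = discover_agents(['event_bus']).first
    return unless event_bus

    send_request(event_bus[:name], {
      action: 'publish',
      event_type: 'order_created',
      source: self.class.to_s,
      data: { order_id: order[:id], total: order[:total] }
    })
  end
end
```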

## Distributed Cache Example

Building a distributed cache using multiple Agent99 agents:

### Cache Coordinator

```ruby
require 'digest'

class CacheCoordinator < Agent99::Base
  def initialize
    super
    @ring = ConsistentHashRing.new
    @cache_nodes = {}
    discover_cache_nodes
  end

  def info
    {
      name: self.class.to_s,
      type: :hybrid,
      capabilities: ['cache_coordinator', 'distributed_cache']
    }
  end

  def process_request(payload)
    operation = payload.dig(:operation)
    key = payload.dig(:key)

    case operation
    when 'get'
      get_from_cache(key)
    when 'set'
      set_in_cache(key, payload[:value], payload[:ttl])
    when 'delete'
      delete_from_cache(key)
    when 'stats'
      get_cache_stats
    else
      send_error("Unknown operation: #{operation}", "INVALID_OPERATION")
    end
  end

  private

  def discover_cache_nodes
    cache_nodes = discover_agents(['cache_node'])

    cache_nodes.each do |node|
      @ring.add_node(node[:name])
      @cache_nodes[node[:name]] = node
    end

    logger.info "Discovered #{cache_nodes.size} cache nodes"
  end

  def get_from_cache(key)
    node_name = @ring.get_node(key)
    node = @cache_nodes[node_name]

    return send_error("No cache nodes available", "NO_CACHE_NODES") unless node

    response = send_request(node[:name], {
      operation: 'get',
      key: key
    })

    if response && response[:found]
      send_response(
        found: true,
        value: response[:value],
        node: node_name
      )
    else
      send_response(
        found: false,
        node: node_name
      )
    end
  end

  def set_in_cache(key, value, ttl = nil)
    node_name = @ring.get_node(key)
    node = @cache_nodes[node_name]

    return send_error("No cache nodes available", "NO_CACHE_NODES") unless node

    response = send_request(node[:name], {
      operation: 'set',
      key: key,
      value: value,
      ttl: ttl
    })

    send_response(
      stored: response && response[:stored],
      node: node_name
    )
  end

  def delete_from_cache(key)
    node_name = @ring.get_node(key)
    node = @cache_nodes[node_name]

    return send_error("No cache nodes available", "NO_CACHE_NODES") unless node

    response = send_request(node[:name], {
      operation: 'delete',
      key: key
    })

    send_response(
      deleted: response && response[:deleted],
      node: node_name
    )
  end

  def get_cache_stats
    stats = {}

    @cache_nodes.each do |node_name, node|
      response = send_request(node[:name], { operation: 'stats' })
      stats[node_name] = response if response
    end

    send_response(
      node_stats: stats,
      total_nodes: @cache_nodes.size
    )
  end
end

# Simple consistent hash ring implementation
class ConsistentHashRing
  def initialize
    @ring = {}
    @sorted_keys = []
  end

  def add_node(node_name, virtual_nodes = 150)
    virtual_nodes.times do |i|
      key = Digest::SHA1.hexdigest("#{node_name}:#{i}").to_i(16)
      @ring[key] = node_name
    end
    @sorted_keys = @ring.keys.sort
  end

  def get_node(key)
    return nil if @ring.empty?

    hash = Digest::SHA1.hexdigest(key.to_s).to_i(16)

    # Find first node >= hash
    idx = @sorted_keys.bsearch_index { |k| k >= hash }
    idx ||= 0 # Wrap around to first node

    @ring[@sorted_keys[idx]]
  end
end
```
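
The ring itself is plain Ruby, so it is easy to sanity-check how keys distribute before wiring it into agents. A quick, illustrative session (the node names printed on your machine will differ):

```ruby
require 'digest'

ring = ConsistentHashRing.new
ring.add_node('cache_node_a')
ring.add_node('cache_node_b')
ring.add_node('cache_node_c')

# The same key always maps to the same node...
ring.get_node('user:42')  # => e.g. "cache_node_b"
ring.get_node('user:42')  # => same node every time

# ...while different keys spread across the ring.
%w[user:1 user:2 user:3 user:4].map { |k| ring.get_node(k) }
# => a mix of the three node names
```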

### Cache Node

```ruby
class CacheNodeAgent < Agent99::Base
  def initialize(node_id = nil)
    super()
    @node_id = node_id || "cache_#{SecureRandom.hex(4)}"
    @cache = {}
    @stats = { gets: 0, sets: 0, deletes: 0, hits: 0, misses: 0 }
    @mutex = Mutex.new

    # Start TTL cleanup thread
    start_ttl_cleanup
  end

  def info
    {
      name: "#{self.class}_#{@node_id}",
      type: :server,
      capabilities: ['cache_node', 'storage']
    }
  end

  def process_request(payload)
    operation = payload.dig(:operation)

    case operation
    when 'get'
      get_value(payload[:key])
    when 'set'
      set_value(payload[:key], payload[:value], payload[:ttl])
    when 'delete'
      delete_value(payload[:key])
    when 'stats'
      get_stats
    when 'clear'
      clear_cache
    else
      send_error("Unknown operation: #{operation}", "INVALID_OPERATION")
    end
  end

  private

  def get_value(key)
    @mutex.synchronize do
      @stats[:gets] += 1

      entry = @cache[key]

      if entry && !expired?(entry)
        @stats[:hits] += 1
        send_response(
          found: true,
          value: entry[:value],
          expires_at: entry[:expires_at]
        )
      else
        @stats[:misses] += 1
        @cache.delete(key) if entry # Clean up expired entry
        send_response(found: false)
      end
    end
  end

  def set_value(key, value, ttl = nil)
    @mutex.synchronize do
      @stats[:sets] += 1

      entry = {
        value: value,
        created_at: Time.now,
        expires_at: ttl ? Time.now + ttl : nil
      }

      @cache[key] = entry

      send_response(
        stored: true,
        expires_at: entry[:expires_at]
      )
    end
  end

  def delete_value(key)
    @mutex.synchronize do
      @stats[:deletes] += 1
      deleted = @cache.delete(key)

      send_response(deleted: !deleted.nil?)
    end
  end

  def get_stats
    @mutex.synchronize do
      send_response(
        node_id: @node_id,
        stats: @stats.dup,
        cache_size: @cache.size,
        memory_usage: estimate_memory_usage
      )
    end
  end

  def clear_cache
    @mutex.synchronize do
      cleared_count = @cache.size
      @cache.clear

      send_response(
        cleared: true,
        entries_removed: cleared_count
      )
    end
  end

  def expired?(entry)
    entry[:expires_at] && entry[:expires_at] < Time.now
  end

  def estimate_memory_usage
    # Simple memory estimation
    @cache.to_s.bytesize
  end

  def start_ttl_cleanup
    Thread.new do
      loop do
        sleep(60) # Run every minute

        @mutex.synchronize do
          expired_keys = @cache.select { |k, v| expired?(v) }.keys
          expired_keys.each { |key| @cache.delete(key) }

          logger.debug "Cleaned up #{expired_keys.size} expired cache entries" if expired_keys.any?
        end
      end
    end
  end
end
```
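
From a client's point of view the coordinator is just another agent that answers `get`/`set`/`delete`/`stats` requests. The call sequence below is an illustrative sketch meant to run from inside another agent (since `discover_agents` and `send_request` are instance methods on `Agent99::Base`); the key names, TTL, and the example response are assumptions, not framework output.

```ruby
coordinator = discover_agents(['cache_coordinator']).first

# Store a value with a 5-minute TTL, then read it back.
send_request(coordinator[:name], {
  operation: 'set',
  key: 'session:abc123',
  value: { user_id: 42, role: 'admin' },
  ttl: 300
})

result = send_request(coordinator[:name], {
  operation: 'get',
  key: 'session:abc123'
})
# => e.g. { found: true, value: { user_id: 42, role: 'admin' }, node: "CacheNodeAgent_cache_1a2b" }
```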

## Real-time Analytics Pipeline

Building a real-time analytics system:

### Data Ingestion Agent

```ruby
class DataIngestionAgent < Agent99::Base
  def initialize
    super
    @buffer = []
    @buffer_mutex = Mutex.new
    @batch_size = 100
    @flush_interval = 30 # seconds

    start_batch_processor
  end

  def info
    {
      name: self.class.to_s,
      type: :hybrid,
      capabilities: ['data_ingestion', 'stream_processing']
    }
  end

  def process_request(payload)
    action = payload.dig(:action)

    case action
    when 'ingest'
      ingest_data(payload[:data])
    when 'flush'
      flush_buffer
    when 'stats'
      get_ingestion_stats
    else
      send_error("Unknown action: #{action}", "INVALID_ACTION")
    end
  end

  private

  def ingest_data(data)
    enriched_data = {
      id: SecureRandom.uuid,
      raw_data: data,
      ingested_at: Time.now.iso8601,
      source_ip: header_value('source_ip'),
      user_agent: header_value('user_agent')
    }

    @buffer_mutex.synchronize do
      @buffer << enriched_data

      if @buffer.size >= @batch_size
        flush_buffer_unsafe
      end
    end

    send_response(
      ingested: true,
      data_id: enriched_data[:id],
      buffer_size: @buffer.size
    )
  end

  def flush_buffer
    @buffer_mutex.synchronize do
      flush_buffer_unsafe
    end
  end

  def flush_buffer_unsafe
    return if @buffer.empty?

    batch = @buffer.dup
    @buffer.clear

    # Send to analytics processor
    analytics_agents = discover_agents(['analytics_processor'])

    if analytics_agents.any?
      analytics_agent = analytics_agents.first

      Thread.new do
        send_request(analytics_agent[:name], {
          action: 'process_batch',
          batch: batch,
          batch_size: batch.size
        })
      end
    else
      logger.warn "No analytics processors available, data lost"
    end

    logger.info "Flushed batch of #{batch.size} records"
  end

  def start_batch_processor
    Thread.new do
      loop do
        sleep(@flush_interval)
        flush_buffer
      end
    end
  end

  def get_ingestion_stats
    @buffer_mutex.synchronize do
      send_response(
        buffer_size: @buffer.size,
        batch_size: @batch_size,
        flush_interval: @flush_interval
      )
    end
  end
end
```
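
The ingestion agent hands each batch to whatever agent advertises the `analytics_processor` capability, using an `action: 'process_batch'` request. A minimal receiver for that contract might look like the sketch below; the class name and the per-minute aggregation are illustrative placeholders for real analytics logic.

```ruby
class AnalyticsProcessorAgent < Agent99::Base
  def initialize
    super
    @record_count = 0
  end

  def info
    {
      name: self.class.to_s,
      type: :server,
      capabilities: ['analytics_processor']
    }
  end

  def process_request(payload)
    case payload.dig(:action)
    when 'process_batch'
      batch = payload[:batch] || []
      @record_count += batch.size

      # Placeholder aggregation: count records per ingestion minute
      # (first 16 chars of an ISO8601 timestamp, e.g. "2025-01-01T12:34").
      per_minute = batch.group_by { |record| record[:ingested_at].to_s[0, 16] }
                        .transform_values(&:size)

      send_response(
        processed: batch.size,
        total_seen: @record_count,
        per_minute: per_minute
      )
    else
      send_error("Unknown action", "INVALID_ACTION")
    end
  end
end
```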

These advanced examples demonstrate:

- **Complex microservices architectures** with multiple interacting services
- **Event-driven patterns** with pub/sub messaging
- **Distributed systems concepts** like consistent hashing and caching
- **Real-time data processing** with buffering and batch processing
- **Error handling and resilience** patterns
- **Performance optimization** techniques
- **Production-ready patterns** with monitoring and stats

Each example can be extended further with additional features like:

- Persistence layers (databases, file systems)
- Authentication and authorization
- Rate limiting and throttling
- Circuit breakers and retry logic (a minimal retry sketch follows this list)
- Distributed tracing and monitoring
- Configuration management
- Health checks and service discovery
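
As one concrete example of the resilience items above, a retry-with-backoff wrapper around `send_request` can be layered onto any agent in this section. This is a sketch only: the module, method name, and backoff numbers are illustrative, not part of the framework, and it assumes `send_request` returns a falsy value on failure, as the earlier examples treat it.

```ruby
module RequestRetries
  # Retry a send_request call a few times with exponential backoff.
  # Returns the first truthy response, or nil if every attempt fails.
  def send_request_with_retries(agent_name, payload, attempts: 3, base_delay: 0.5)
    attempts.times do |attempt|
      begin
        response = send_request(agent_name, payload)
        return response if response
      rescue StandardError => e
        logger.warn "Attempt #{attempt + 1} to #{agent_name} raised: #{e.message}"
      end
      sleep(base_delay * (2**attempt)) unless attempt == attempts - 1
    end
    nil
  end
end

# Mixed into an agent, for example:
#   class OrderProcessingAgent < Agent99::Base
#     include RequestRetries
#     # ...then call send_request_with_retries where send_request was used.
#   end
```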

## Next Steps

- **[Multi-Agent Processing](../advanced-topics/multi-agent-processing.md)** - Coordination patterns
- **[Performance Considerations](../operations/performance-considerations.md)** - Optimization techniques
- **[Configuration](../operations/configuration.md)** - Production deployment settings