jarviscore-framework 0.3.0__py3-none-any.whl → 0.3.2__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- examples/cloud_deployment_example.py +3 -3
- examples/{listeneragent_cognitive_discovery_example.py → customagent_cognitive_discovery_example.py} +55 -14
- examples/customagent_distributed_example.py +140 -1
- examples/fastapi_integration_example.py +74 -11
- jarviscore/__init__.py +8 -11
- jarviscore/cli/smoketest.py +1 -1
- jarviscore/core/mesh.py +158 -0
- jarviscore/data/examples/cloud_deployment_example.py +3 -3
- jarviscore/data/examples/custom_profile_decorator.py +134 -0
- jarviscore/data/examples/custom_profile_wrap.py +168 -0
- jarviscore/data/examples/{listeneragent_cognitive_discovery_example.py → customagent_cognitive_discovery_example.py} +55 -14
- jarviscore/data/examples/customagent_distributed_example.py +140 -1
- jarviscore/data/examples/fastapi_integration_example.py +74 -11
- jarviscore/docs/API_REFERENCE.md +576 -47
- jarviscore/docs/CHANGELOG.md +131 -0
- jarviscore/docs/CONFIGURATION.md +1 -1
- jarviscore/docs/CUSTOMAGENT_GUIDE.md +591 -153
- jarviscore/docs/GETTING_STARTED.md +186 -329
- jarviscore/docs/TROUBLESHOOTING.md +1 -1
- jarviscore/docs/USER_GUIDE.md +292 -12
- jarviscore/integrations/fastapi.py +4 -4
- jarviscore/p2p/coordinator.py +36 -7
- jarviscore/p2p/messages.py +13 -0
- jarviscore/p2p/peer_client.py +380 -21
- jarviscore/p2p/peer_tool.py +17 -11
- jarviscore/profiles/__init__.py +2 -4
- jarviscore/profiles/customagent.py +302 -74
- jarviscore/testing/__init__.py +35 -0
- jarviscore/testing/mocks.py +578 -0
- {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.2.dist-info}/METADATA +61 -46
- {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.2.dist-info}/RECORD +42 -34
- tests/test_13_dx_improvements.py +37 -37
- tests/test_15_llm_cognitive_discovery.py +18 -18
- tests/test_16_unified_dx_flow.py +3 -3
- tests/test_17_session_context.py +489 -0
- tests/test_18_mesh_diagnostics.py +465 -0
- tests/test_19_async_requests.py +516 -0
- tests/test_20_load_balancing.py +546 -0
- tests/test_21_mock_testing.py +776 -0
- jarviscore/profiles/listeneragent.py +0 -292
- {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.2.dist-info}/WHEEL +0 -0
- {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.2.dist-info}/licenses/LICENSE +0 -0
- {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.2.dist-info}/top_level.txt +0 -0
@@ -11,16 +11,21 @@ CustomAgent lets you integrate your **existing agent code** with JarvisCore's ne
 
 1. [Prerequisites](#prerequisites)
 2. [Choose Your Mode](#choose-your-mode)
-3. [P2P Mode](#p2p-mode)
-4. [
-5. [
-6. [
-7. [
+3. [P2P Mode](#p2p-mode) - Handler-based peer communication
+4. [Distributed Mode](#distributed-mode) - Workflow tasks + P2P
+5. [Cognitive Discovery (v0.3.0)](#cognitive-discovery-v030) - Dynamic peer awareness for LLMs
+6. [FastAPI Integration (v0.3.0)](#fastapi-integration-v030) - 3-line setup with JarvisLifespan
+7. [Framework Integration Patterns](#framework-integration-patterns) - aiohttp, Flask, Django
 8. [Cloud Deployment (v0.3.0)](#cloud-deployment-v030) - Self-registration for containers
 9. [API Reference](#api-reference)
 10. [Multi-Node Deployment](#multi-node-deployment)
 11. [Error Handling](#error-handling)
 12. [Troubleshooting](#troubleshooting)
+13. [Session Context Propagation (v0.3.2)](#session-context-propagation-v032) - Request tracking and metadata
+14. [Async Request Pattern (v0.3.2)](#async-request-pattern-v032) - Non-blocking parallel requests
+15. [Load Balancing Strategies (v0.3.2)](#load-balancing-strategies-v032) - Round-robin and random selection
+16. [Mesh Diagnostics (v0.3.2)](#mesh-diagnostics-v032) - Health monitoring and debugging
+17. [Testing with MockMesh (v0.3.2)](#testing-with-mockmesh-v032) - Unit testing patterns
 
 ---
 
@@ -102,7 +107,7 @@ class MyLLMClient:
 
 ### Quick Comparison
 
-| Feature | P2P Mode (CustomAgent) | P2P Mode (
+| Feature | P2P Mode (CustomAgent) | P2P Mode (CustomAgent) | Distributed Mode |
 |---------|------------------------|--------------------------|------------------|
 | **Primary method** | `run()` - continuous loop | `on_peer_request()` handlers | `execute_task()` - on-demand |
 | **Communication** | Direct peer messaging | Handler-based (no loop) | Workflow orchestration |
@@ -110,7 +115,7 @@ class MyLLMClient:
 | **Coordination** | Agents self-coordinate | Framework handles loop | Framework coordinates |
 | **Supports workflows** | No | No | Yes |
 
-> **
+> **CustomAgent** includes built-in P2P handlers - just implement `on_peer_request()` and `on_peer_notify()`. No need to write your own `run()` loop.
 
 ---
 
@@ -118,6 +123,46 @@ class MyLLMClient:
 
 P2P mode is for agents that run continuously and communicate directly with each other.
 
+### v0.3.1 Update: Handler-Based Pattern
+
+**We've simplified P2P agents!** No more manual `run()` loops.
+
+```
+┌────────────────────────────────────────────────────────────────┐
+│                      OLD vs NEW Pattern                        │
+├────────────────────────────────────────────────────────────────┤
+│                                                                │
+│  ❌ OLD (v0.2.x) - Manual Loop                                 │
+│  ┌──────────────────────────────────────────────┐              │
+│  │ async def run(self):                         │              │
+│  │     while not self.shutdown_requested:       │              │
+│  │         msg = await self.peers.receive()     │  ← Polling   │
+│  │         if msg and msg.is_request:           │              │
+│  │             result = self.process(msg)       │              │
+│  │             await self.peers.respond(...)    │  ← Manual    │
+│  │         await asyncio.sleep(0.1)             │              │
+│  └──────────────────────────────────────────────┘              │
+│                                                                │
+│  ✅ NEW (v0.3.0+) - Handler-Based                              │
+│  ┌──────────────────────────────────────────────┐              │
+│  │ async def on_peer_request(self, msg):        │              │
+│  │     result = self.process(msg)               │              │
+│  │     return result                            │  ← Simple!   │
+│  └──────────────────────────────────────────────┘              │
+│                        ▲                                       │
+│                        │                                       │
+│                        └─ Framework calls this automatically   │
+│                                                                │
+└────────────────────────────────────────────────────────────────┘
+```
+
+**Benefits:**
+- ✅ **Less Code**: No boilerplate loops
+- ✅ **Simpler**: Just return your result
+- ✅ **Automatic**: Framework handles message dispatch
+- ✅ **Error Handling**: Built-in exception capture
+- ✅ **FastAPI Ready**: Works with `JarvisLifespan` out of the box
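The dispatch the new text describes can be sketched with stdlib `asyncio` alone. This is an illustrative stand-in, not jarviscore itself: `ToyAgent`, `dispatch_loop`, and the `asyncio.Queue` inbox are hypothetical names, but they show the division of labor - the framework owns the polling loop and calls your handler once per message, keeping the returned value as the response.

```python
# Illustrative sketch of handler-based dispatch (NOT the real jarviscore loop).
import asyncio


class ToyAgent:
    async def on_peer_request(self, msg: dict) -> dict:
        # The only code that is yours: compute and return, no receive()/respond().
        return {"response": f"processed {msg['data']}"}


async def dispatch_loop(agent: ToyAgent, inbox: asyncio.Queue) -> list:
    """Framework-side glue: drain the inbox, call the handler per message,
    and collect each returned value as the outgoing response."""
    responses = []
    while not inbox.empty():
        msg = await inbox.get()
        responses.append(await agent.on_peer_request(msg))
    return responses


async def main() -> list:
    inbox = asyncio.Queue()
    for i in range(3):
        await inbox.put({"data": i})
    return await dispatch_loop(ToyAgent(), inbox)


if __name__ == "__main__":
    print(asyncio.run(main()))
```

The point of the handler pattern is that only `on_peer_request()` is user code; everything in `dispatch_loop()` is the boilerplate the framework removes.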
+
 ### Migration Overview
 
 ```
@@ -163,59 +208,89 @@ if __name__ == "__main__":
 
 ### Step 3: Modify Your Agent Code → `agents.py`
 
-
+**🚨 IMPORTANT CHANGE (v0.3.0+)**: We've moved from `run()` loops to **handler-based** agents!
 
+#### ❌ OLD Pattern (Deprecated)
 ```python
-#
-
+# DON'T DO THIS ANYMORE!
+class ResearcherAgent(CustomAgent):
+    async def run(self):  # ❌ Manual loop
+        while not self.shutdown_requested:
+            msg = await self.peers.receive(timeout=0.5)
+            if msg and msg.is_request:
+                result = self.llm.chat(f"Research: {msg.data['question']}")
+                await self.peers.respond(msg, {"response": result})
+            await asyncio.sleep(0.1)
+```
+**Problems**: Manual loops, boilerplate, error-prone
+
+#### ✅ NEW Pattern (Recommended)
+```python
+# agents.py (MODERN VERSION)
 from jarviscore.profiles import CustomAgent
 
 
 class ResearcherAgent(CustomAgent):
-    """Your agent, now framework-integrated."""
+    """Your agent, now framework-integrated with handlers."""
 
-    #
+    # Required class attributes for discovery
     role = "researcher"
     capabilities = ["research", "analysis"]
+    description = "Research specialist that gathers and synthesizes information"
 
    async def setup(self):
-        """
+        """Called once on startup. Initialize your LLM here."""
        await super().setup()
        self.llm = MyLLMClient()  # Your existing initialization
 
-    async def
-        """
-
-
-
-
-
-
-
-
-
-
+    async def on_peer_request(self, msg):
+        """
+        Handle incoming requests from other agents.
+
+        This is called AUTOMATICALLY when another agent asks you a question.
+        No loops, no polling, no boilerplate!
+        """
+        query = msg.data.get("question", "")
+
+        # YOUR EXISTING LOGIC:
+        result = self.llm.chat(f"Research: {query}")
+
+        # Just return the data - framework handles the response
+        return {"response": result}
 
     async def execute_task(self, task: dict) -> dict:
         """
-        Required by base Agent class
-
-        In P2P mode, your
-        This
-        to be implemented, or you get TypeError on instantiation.
+        Required by base Agent class for workflow mode.
+
+        In pure P2P mode, your logic is in on_peer_request().
+        This is used when agent is part of a workflow pipeline.
         """
-        return {"status": "success", "note": "This agent uses
+        return {"status": "success", "note": "This agent uses handlers for P2P mode"}
 ```
 
 **What changed:**
 
-| Before | After |
-|
-| `
-| `
-| `
-|
+| Before (v0.2.x) | After (v0.3.0+) | Why? |
+|-----------------|-----------------|------|
+| `async def run(self):` with `while` loop | `async def on_peer_request(self, msg):` handler | Automatic dispatch, less boilerplate |
+| Manual `await self.peers.receive()` | Framework calls your handler | No polling needed |
+| Manual `await self.peers.respond(msg, data)` | Just `return data` | Simpler error handling |
+| `asyncio.create_task(agent.run())` | Not needed - handlers run automatically | Cleaner lifecycle |
+
+#### Migration Checklist (v0.2.x → v0.3.0+)
 
-
+If you have existing agents using the `run()` loop pattern:
+
+- [ ] Replace `async def run(self):` with `async def on_peer_request(self, msg):`
+- [ ] Remove `while not self.shutdown_requested:` loop
+- [ ] Remove `msg = await self.peers.receive(timeout=0.5)` polling
+- [ ] Change `await self.peers.respond(msg, data)` to `return data`
+- [ ] Remove manual `asyncio.create_task(agent.run())` calls in main.py
+- [ ] Consider using `JarvisLifespan` for FastAPI integration (see Step 4)
+- [ ] Add `description` class attribute for better cognitive discovery
+- [ ] Use `get_cognitive_context()` instead of hardcoded peer lists
+
+> **Note**: The `run()` method is **still supported** for backward compatibility, but handlers are now the recommended approach. For the full pattern with **LLM-driven peer communication** (where your LLM autonomously decides when to call other agents), see the [Complete Example](#complete-example-llm-driven-peer-communication) below.
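The key change in the checklist - "just `return data`" instead of calling `respond()` - comes down to framework-side glue that forwards the handler's return value. A minimal sketch assuming nothing about the real transport (`Message`, `auto_respond`, and the `sent` list are illustrative stand-ins, not jarviscore types):

```python
# Illustrative sketch of the auto-respond glue (NOT the real jarviscore code).
import asyncio
from dataclasses import dataclass, field


@dataclass
class Message:
    data: dict
    sent: list = field(default_factory=list)  # stand-in for the reply channel


async def auto_respond(handler, msg: Message) -> None:
    """Framework-side: call the handler and auto-send whatever it returns."""
    result = await handler(msg)
    if result is not None:
        msg.sent.append(result)  # in the real mesh this would be a network send


async def on_peer_request(msg: Message) -> dict:
    # NEW style: no peers.respond() call - just return the payload.
    return {"response": msg.data["question"].upper()}


async def main() -> list:
    msg = Message(data={"question": "status?"})
    await auto_respond(on_peer_request, msg)
    return msg.sent


if __name__ == "__main__":
    print(asyncio.run(main()))
```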
 
 ### Step 4: Create New Entry Point → `main.py`
 
@@ -316,22 +391,22 @@ This is the **key pattern** for P2P mode. Your LLM gets peer tools added to its
 **The key insight**: You add peer tools to your LLM's toolset. The LLM decides when to use them.
 
 ```python
-# agents.py
-import asyncio
+# agents.py - UPDATED FOR v0.3.0+
 from jarviscore.profiles import CustomAgent
 
 
 class AnalystAgent(CustomAgent):
     """
-    Analyst agent -
+    Analyst agent - specialist in data analysis.
 
-
-
-
-
+    NEW PATTERN (v0.3.0+):
+    - Uses @on_peer_request HANDLER instead of run() loop
+    - Automatically receives and responds to peer requests
+    - No manual message polling needed!
     """
     role = "analyst"
     capabilities = ["analysis", "data_interpretation", "reporting"]
+    description = "Expert data analyst for statistics and insights"
 
     async def setup(self):
         await super().setup()
@@ -406,19 +481,20 @@ Analyze data thoroughly and provide insights."""
 
         return response.get("content", "Analysis complete.")
 
-    async def
-        """
-
-
-
-
-
+    async def on_peer_request(self, msg):
+        """
+        Handle incoming requests from peers.
+
+        ✅ NEW: This is called automatically when another agent sends a request.
+        ❌ OLD: Manual while loop with receive() polling
+        """
+        query = msg.data.get("question", msg.data.get("query", ""))
 
-
-
+        # Process with LLM
+        result = await self.process_with_llm(query)
 
-
-
+        # Just return the data - framework handles the response!
+        return {"response": result}
 
     async def execute_task(self, task: dict) -> dict:
         """Required by base class."""
@@ -429,13 +505,16 @@ class AssistantAgent(CustomAgent):
     """
     Assistant agent - coordinates with other specialists.
 
-
+    NEW PATTERN (v0.3.0+):
     1. Has its own LLM for reasoning
-    2.
-    3.
+    2. Uses get_cognitive_context() to discover available peers
+    3. Peer tools (ask_peer, broadcast) added to LLM toolset
+    4. LLM AUTONOMOUSLY decides when to ask other agents
+    5. Uses on_peer_request handler instead of run() loop
     """
     role = "assistant"
     capabilities = ["chat", "coordination", "search"]
+    description = "General assistant that delegates specialized tasks to experts"
 
     async def setup(self):
         await super().setup()
@@ -535,16 +614,16 @@ Be concise in your responses."""
 
         return response.get("content", "")
 
-    async def
-        """
-
-
-
-
-
-
-
-
+    async def on_peer_request(self, msg):
+        """
+        Handle incoming requests from other agents.
+
+        ✅ NEW: Handler-based - called automatically on request
+        ❌ OLD: Manual while loop with receive() polling
+        """
+        query = msg.data.get("query", "")
+        result = await self.chat(query)
+        return {"response": result}
 
     async def execute_task(self, task: dict) -> dict:
         """Required by base class."""
@@ -552,13 +631,14 @@ Be concise in your responses."""
 ```
 
 ```python
-# main.py
+# main.py - UPDATED FOR v0.3.0+ (Handler-Based Pattern)
 import asyncio
 from jarviscore import Mesh
 from agents import AnalystAgent, AssistantAgent
 
 
 async def main():
+    """Simple P2P mesh without web server."""
     mesh = Mesh(
         mode="p2p",
         config={
@@ -567,17 +647,15 @@ async def main():
         }
     )
 
-    # Add both agents
+    # Add both agents - they'll use handlers automatically
     mesh.add(AnalystAgent)
     assistant = mesh.add(AssistantAgent)
 
     await mesh.start()
 
-    #
-
-
-
-    # Give time for setup
+    # ✅ NO MORE MANUAL run() TASKS! Handlers are automatic.
+
+    # Give time for mesh to stabilize
     await asyncio.sleep(0.5)
 
     # User asks a question - LLM will autonomously decide to use ask_peer
@@ -590,8 +668,6 @@ async def main():
     # Output: [{'tool': 'ask_peer', 'args': {'role': 'analyst', 'question': '...'}}]
 
     # Cleanup
-    analyst.request_shutdown()
-    analyst_task.cancel()
     await mesh.stop()
 
 
@@ -599,6 +675,59 @@ if __name__ == "__main__":
     asyncio.run(main())
 ```
 
+**Or better yet, use FastAPI + JarvisLifespan:**
+
+```python
+# main.py - PRODUCTION PATTERN (FastAPI + JarvisLifespan)
+from fastapi import FastAPI, Request
+from fastapi.responses import JSONResponse
+from jarviscore.integrations import JarvisLifespan
+from agents import AnalystAgent, AssistantAgent
+import uvicorn
+
+
+# ✅ ONE-LINE MESH SETUP with JarvisLifespan!
+app = FastAPI(lifespan=JarvisLifespan([AnalystAgent, AssistantAgent]))
+
+
+@app.post("/chat")
+async def chat(request: Request):
+    """Chat endpoint - assistant may autonomously delegate to analyst."""
+    data = await request.json()
+    message = data.get("message", "")
+
+    # Get assistant from mesh (JarvisLifespan manages it)
+    assistant = app.state.mesh.get_agent("assistant")
+
+    # Chat - LLM autonomously discovers and delegates if needed
+    response = await assistant.chat(message)
+
+    return JSONResponse(response)
+
+
+@app.get("/agents")
+async def list_agents():
+    """Show what each agent sees (cognitive context)."""
+    mesh = app.state.mesh
+    agents_info = {}
+
+    for agent in mesh.agents:
+        if agent.peers:
+            context = agent.peers.get_cognitive_context(format="markdown")
+            agents_info[agent.role] = {
+                "role": agent.role,
+                "capabilities": agent.capabilities,
+                "peers_visible": len(agent.peers.get_all_peers()),
+                "cognitive_context": context[:200] + "..."
+            }
+
+    return JSONResponse(agents_info)
+
+
+if __name__ == "__main__":
+    uvicorn.run(app, host="0.0.0.0", port=8000)
+```
+
 ### Key Concepts for P2P Mode
 
 #### Adding Peer Tools to Your LLM
@@ -666,46 +795,16 @@ async def run(self):
 
 ---
 
-##
-
-**ListenerAgent** is for developers who want P2P communication without writing the `run()` loop themselves.
+## P2P Message Handlers
 
-
-
-Every P2P CustomAgent needs this boilerplate:
-
-```python
-# BEFORE (CustomAgent) - You write the same loop every time
-class MyAgent(CustomAgent):
-    role = "processor"
-    capabilities = ["processing"]
-
-    async def run(self):
-        """You have to write this loop for every P2P agent."""
-        while not self.shutdown_requested:
-            if self.peers:
-                msg = await self.peers.receive(timeout=0.5)
-                if msg and msg.is_request:
-                    # Handle request
-                    result = self.process(msg.data)
-                    await self.peers.respond(msg, {"response": result})
-                elif msg and msg.is_notify:
-                    # Handle notification
-                    self.handle_notify(msg.data)
-            await asyncio.sleep(0.1)
-
-    async def execute_task(self, task):
-        """Still required even though you're using run()."""
-        return {"status": "success"}
-```
+CustomAgent includes built-in handlers for P2P communication - just implement the handlers you need.
 
-###
+### Handler-Based P2P (Recommended)
 
 ```python
-
-from jarviscore.profiles import ListenerAgent
+from jarviscore.profiles import CustomAgent
 
-class MyAgent(
+class MyAgent(CustomAgent):
     role = "processor"
     capabilities = ["processing"]
 
@@ -718,27 +817,25 @@ class MyAgent(ListenerAgent):
         print(f"Notification received: {msg.data}")
 ```
 
-**What you no longer need:**
-- ❌ `run()` loop with `while not self.shutdown_requested`
-- ❌ `self.peers.receive()` and `self.peers.respond()` boilerplate
-- ❌ `execute_task()` stub method
-- ❌ `asyncio.sleep()` timing
-
 **What the framework handles:**
--
--
--
--
--
+- Message receiving loop (`run()` is built-in)
+- Routing requests to `on_peer_request()`
+- Routing notifications to `on_peer_notify()`
+- Automatic response sending (configurable with `auto_respond`)
+- Shutdown handling
+
+**Configuration:**
+- `listen_timeout` (float): Seconds to wait for messages (default: 1.0)
+- `auto_respond` (bool): Auto-send `on_peer_request()` return value (default: True)
 
-### Complete
+### Complete P2P Example
 
 ```python
 # agents.py
-from jarviscore.profiles import
+from jarviscore.profiles import CustomAgent
 
 
-class AnalystAgent(
+class AnalystAgent(CustomAgent):
     """A data analyst that responds to peer requests."""
 
     role = "analyst"
@@ -778,7 +875,7 @@ class AnalystAgent(ListenerAgent):
         print(f"[{self.role}] Received notification: {msg.data}")
 
 
-class AssistantAgent(
+class AssistantAgent(CustomAgent):
     """An assistant that coordinates with specialists."""
 
     role = "assistant"
@@ -828,18 +925,17 @@ if __name__ == "__main__":
     asyncio.run(main())
 ```
 
-### When to Use
+### When to Use Handlers vs Custom run()
 
-| Use
-|
-|
-|
-| You
-| You want less boilerplate | You have complex coordination logic |
+| Use handlers (`on_peer_request`) when... | Override `run()` when... |
+|------------------------------------------|--------------------------|
+| Request/response pattern fits your use case | You need custom message loop timing |
+| You're integrating with FastAPI | You need to initiate messages proactively |
+| You want minimal boilerplate | You have complex coordination logic |
 
-###
+### CustomAgent with FastAPI
 
-
+CustomAgent works seamlessly with FastAPI. See [FastAPI Integration](#fastapi-integration-v030) below.
 
 ---
 
@@ -1446,11 +1542,11 @@ async def process(data: dict):
 ```python
 # AFTER: 3 lines to integrate
 from fastapi import FastAPI
-from jarviscore.profiles import
+from jarviscore.profiles import CustomAgent
 from jarviscore.integrations.fastapi import JarvisLifespan
 
 
-class ProcessorAgent(
+class ProcessorAgent(CustomAgent):
     role = "processor"
     capabilities = ["processing"]
 
@@ -1492,7 +1588,7 @@ app = FastAPI(
 # app.py
 from fastapi import FastAPI, HTTPException
 from pydantic import BaseModel
-from jarviscore.profiles import
+from jarviscore.profiles import CustomAgent
 from jarviscore.integrations.fastapi import JarvisLifespan
 
 
@@ -1500,7 +1596,7 @@ class AnalysisRequest(BaseModel):
     data: str
 
 
-class AnalystAgent(
+class AnalystAgent(CustomAgent):
     """Agent that handles both API requests and P2P messages."""
 
     role = "analyst"
@@ -1634,11 +1730,11 @@ await mesh.start()
 # Each agent can join any mesh independently
 
 # agent_container.py (runs in Docker/K8s)
-from jarviscore.profiles import
+from jarviscore.profiles import CustomAgent
 import os
 
 
-class WorkerAgent(
+class WorkerAgent(CustomAgent):
     role = "worker"
     capabilities = ["processing"]
 
@@ -1685,10 +1781,10 @@ CMD ["python", "agent.py"]
 # agent.py
 import asyncio
 import os
-from jarviscore.profiles import
+from jarviscore.profiles import CustomAgent
 
 
-class WorkerAgent(
+class WorkerAgent(CustomAgent):
     role = "worker"
     capabilities = ["processing"]
 
@@ -1835,6 +1931,346 @@ if agent.peers:
 
 ---
 
+## Session Context Propagation (v0.3.2)
+
+Pass metadata (mission IDs, trace IDs, priorities) through message flows:
+
+### Sending Context
+
+```python
+# All messaging methods accept context parameter
+await self.peers.notify("logger", {"event": "started"},
+                        context={"mission_id": "m-123", "trace_id": "t-abc"})
+
+response = await self.peers.request("analyst", {"query": "..."},
+                                    context={"priority": "high", "user_id": "u-456"})
+
+await self.peers.broadcast({"alert": "ready"},
+                           context={"source": "coordinator"})
+```
+
+### Receiving Context
+
+```python
+async def on_peer_request(self, msg):
+    # Context is available on the message
+    mission_id = msg.context.get("mission_id") if msg.context else None
+    trace_id = msg.context.get("trace_id") if msg.context else None
+
+    self._logger.info(f"Request for mission {mission_id}, trace {trace_id}")
+
+    return {"result": "processed"}
+```
+
+### Auto-Propagation in respond()
+
+Context automatically propagates from request to response:
+
+```python
+async def on_peer_request(self, msg):
+    # msg.context = {"mission_id": "m-123", "trace_id": "t-abc"}
+    result = await self.process(msg.data)
+
+    # Context auto-propagates - original sender receives same context
+    await self.peers.respond(msg, {"result": result})
+
+    # Override if needed
+    await self.peers.respond(msg, {"result": result},
+                             context={"status": "completed", "mission_id": msg.context.get("mission_id")})
+```
+
+---
+
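The propagation rule itself is simple enough to sketch with plain dicts. This is illustrative only (`build_response` is a hypothetical stand-in, not the real `respond()` implementation): the context attached to a request rides along unchanged into the response unless the responder supplies an override.

```python
# Illustrative sketch of the context propagation rule (NOT the real transport).
def build_response(request: dict, data: dict, context: dict = None) -> dict:
    """Mimic respond() semantics: reuse the request's context unless an
    explicit override context is supplied."""
    return {
        "data": data,
        "context": context if context is not None else request.get("context"),
    }


request = {"data": {"query": "q1"},
           "context": {"mission_id": "m-123", "trace_id": "t-abc"}}

# Default: context auto-propagates from request to response.
auto = build_response(request, {"result": "ok"})

# Override: responder replaces the context explicitly.
override = build_response(request, {"result": "ok"},
                          context={"status": "completed"})

if __name__ == "__main__":
    print(auto["context"], override["context"])
```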
+## Async Request Pattern (v0.3.2)
+
+Fire multiple requests without blocking, collect responses later:
+
+### Fire-and-Collect Pattern
+
+```python
+async def parallel_analysis(self, data_chunks):
+    # Fire off requests to all available analysts
+    analysts = self.peers.discover(role="analyst")
+    request_ids = []
+
+    for i, (analyst, chunk) in enumerate(zip(analysts, data_chunks)):
+        req_id = await self.peers.ask_async(
+            analyst.agent_id,
+            {"chunk_id": i, "data": chunk},
+            context={"batch_id": "batch-001"}
+        )
+        request_ids.append((req_id, analyst.agent_id))
+
+    # Do other work while analysts process
+    await self.update_status("processing")
+
+    # Collect results
+    results = []
+    for req_id, analyst_id in request_ids:
+        response = await self.peers.check_inbox(req_id, timeout=30)
+        if response:
+            results.append(response)
+        else:
+            self._logger.warning(f"Timeout waiting for {analyst_id}")
+
+    return results
+```
+
+### API Methods
+
+```python
+# Fire async request - returns immediately with request_id
+req_id = await self.peers.ask_async(target, message, timeout=120, context=None)
+
+# Check for response
+response = await self.peers.check_inbox(req_id, timeout=0)   # Non-blocking
+response = await self.peers.check_inbox(req_id, timeout=10)  # Wait up to 10s
+
+# Manage pending requests
+pending = self.peers.get_pending_async_requests()
+self.peers.clear_inbox(req_id)  # Clear specific
+self.peers.clear_inbox()        # Clear all
+```
+
+---
+
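The fire-and-collect shape can be sketched with stdlib `asyncio` tasks keyed by request id. `ToyPeers` below is a hypothetical stand-in that models the `ask_async()`/`check_inbox()` semantics (fire returns an id immediately; collection waits with a timeout) - it is not the real peer client.

```python
# Illustrative stand-in for ask_async()/check_inbox() (NOT the real peer client).
import asyncio
import itertools


class ToyPeers:
    def __init__(self):
        self._pending = {}
        self._ids = itertools.count(1)

    async def ask_async(self, target: str, message: dict) -> str:
        """Fire the request in the background; return a request id at once."""
        req_id = f"req-{next(self._ids)}"
        self._pending[req_id] = asyncio.create_task(self._work(target, message))
        return req_id

    async def check_inbox(self, req_id: str, timeout: float = 0):
        """Collect one response later; None on timeout."""
        try:
            return await asyncio.wait_for(self._pending.pop(req_id), timeout)
        except asyncio.TimeoutError:
            return None

    async def _work(self, target, message):
        await asyncio.sleep(0.01)  # simulated remote processing
        return {"from": target, "chunk_id": message["chunk_id"]}


async def main():
    peers = ToyPeers()
    # Fire all requests first...
    ids = [await peers.ask_async(f"analyst-{i}", {"chunk_id": i}) for i in range(3)]
    # ...other work could happen here while they run...
    # ...then collect the responses.
    return [await peers.check_inbox(r, timeout=1) for r in ids]


if __name__ == "__main__":
    print(asyncio.run(main()))
```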
## Load Balancing Strategies (v0.3.2)

Distribute work across multiple peers:

### Discovery Strategies

```python
# Default: first in discovery order (deterministic)
workers = self.peers.discover(role="worker", strategy="first")

# Random: shuffle for basic distribution
workers = self.peers.discover(role="worker", strategy="random")

# Round-robin: rotate through workers on each call
workers = self.peers.discover(role="worker", strategy="round_robin")

# Least-recent: prefer workers not used recently
workers = self.peers.discover(role="worker", strategy="least_recent")
```

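The four strategies differ only in how the discovered peer list is ordered before it is returned. The following standalone sketch shows one way that ordering logic could work; `StrategySelector` and its internals are illustrative assumptions, not jarviscore's source.

```python
import random
from collections import defaultdict


class StrategySelector:
    """Illustrative ordering logic for the four discovery strategies."""

    def __init__(self):
        self._rr_index = defaultdict(int)  # round-robin cursor per role
        self._last_used = {}               # peer -> logical usage timestamp
        self._clock = 0

    def order(self, role: str, peers: list, strategy: str) -> list:
        if strategy == "first":
            return list(peers)                     # deterministic discovery order
        if strategy == "random":
            shuffled = list(peers)
            random.shuffle(shuffled)
            return shuffled
        if strategy == "round_robin":
            i = self._rr_index[role] % len(peers)  # rotate the head each call
            self._rr_index[role] += 1
            return peers[i:] + peers[:i]
        if strategy == "least_recent":
            # Peers never used sort first (timestamp -1)
            return sorted(peers, key=lambda p: self._last_used.get(p, -1))
        raise ValueError(f"unknown strategy: {strategy}")

    def record_usage(self, peer: str) -> None:
        self._clock += 1
        self._last_used[peer] = self._clock


sel = StrategySelector()
workers = ["w1", "w2", "w3"]
print(sel.order("worker", workers, "round_robin")[0])   # w1
print(sel.order("worker", workers, "round_robin")[0])   # w2
sel.record_usage("w1")
print(sel.order("worker", workers, "least_recent")[0])  # w2 (w1 was just used)
```

Note that `round_robin` and `least_recent` are stateful: repeated calls (or recorded usage) change the ordering, which is why `least_recent` benefits from explicit usage tracking as shown below.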
### discover_one() Convenience

```python
# Get single peer with strategy applied
worker = self.peers.discover_one(role="worker", strategy="round_robin")
if worker:
    response = await self.peers.request(worker.agent_id, {"task": "..."})
```

### Tracking Usage for least_recent

```python
# Track usage to influence least_recent ordering
worker = self.peers.discover_one(role="worker", strategy="least_recent")
response = await self.peers.request(worker.agent_id, {"task": "..."})
self.peers.record_peer_usage(worker.agent_id)  # Update timestamp
```

### Example: Load-Balanced Task Distribution

```python
class Coordinator(CustomAgent):
    role = "coordinator"
    capabilities = ["coordination"]

    async def distribute_work(self, tasks):
        results = []
        for task in tasks:
            # Round-robin automatically rotates through workers
            worker = self.peers.discover_one(
                capability="processing",
                strategy="round_robin"
            )
            if worker:
                response = await self.peers.request(
                    worker.agent_id,
                    {"task": task}
                )
                results.append(response)
        return results
```

---

## Mesh Diagnostics (v0.3.2)

Monitor mesh health for debugging and operations:

### Getting Diagnostics

```python
# From mesh
diag = mesh.get_diagnostics()

# Structure:
# {
#     "local_node": {
#         "mode": "p2p",
#         "started": True,
#         "agent_count": 3,
#         "bind_address": "127.0.0.1:7950"
#     },
#     "known_peers": [
#         {"role": "analyst", "node_id": "10.0.0.2:7950", "status": "alive"}
#     ],
#     "local_agents": [
#         {"role": "coordinator", "agent_id": "...", "capabilities": [...]}
#     ],
#     "connectivity_status": "healthy"
# }
```

### Connectivity Status

| Status | Meaning |
|--------|---------|
| `healthy` | P2P active, peers connected |
| `isolated` | P2P active, no peers found |
| `degraded` | Some connectivity issues |
| `not_started` | Mesh not started yet |
| `local_only` | Autonomous mode (no P2P) |

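The table above can be read as a classification of node state. A hedged sketch of that mapping follows; the function name, signature, and exact rules (especially the `degraded` condition) are assumptions for illustration, not jarviscore's actual logic.

```python
def classify_connectivity(mode: str, started: bool, peer_count: int,
                          unreachable_count: int = 0) -> str:
    """Illustrative mapping from node state to a connectivity status string."""
    if mode == "autonomous":
        return "local_only"    # no P2P layer at all
    if not started:
        return "not_started"   # mesh.start() has not run yet
    if peer_count == 0:
        return "isolated"      # P2P is up but discovery found no peers
    if unreachable_count > 0:
        return "degraded"      # some known peers are not responding
    return "healthy"


print(classify_connectivity("p2p", True, 3))          # healthy
print(classify_connectivity("p2p", True, 0))          # isolated
print(classify_connectivity("autonomous", False, 0))  # local_only
```

A health endpoint or watchdog can branch on these strings, e.g. alerting when the status leaves `healthy`.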
### FastAPI Health Endpoint

```python
from fastapi import FastAPI, Request
from jarviscore.integrations.fastapi import JarvisLifespan

app = FastAPI(lifespan=JarvisLifespan(agent, mode="p2p"))

@app.get("/health")
async def health(request: Request):
    mesh = request.app.state.jarvis_mesh
    diag = mesh.get_diagnostics()
    return {
        "status": diag["connectivity_status"],
        "agents": diag["local_node"]["agent_count"],
        "peers": len(diag.get("known_peers", []))
    }
```

---

## Testing with MockMesh (v0.3.2)

Unit test agents without real P2P infrastructure:

### Basic Test Setup

```python
import pytest
from jarviscore.testing import MockMesh, MockPeerClient
from jarviscore.profiles import CustomAgent
from jarviscore.p2p.messages import MessageType

class AnalystAgent(CustomAgent):
    role = "analyst"
    capabilities = ["analysis"]

    async def on_peer_request(self, msg):
        return {"analysis": f"Analyzed: {msg.data.get('query')}"}

@pytest.mark.asyncio
async def test_analyst_responds():
    mesh = MockMesh()
    mesh.add(AnalystAgent)
    await mesh.start()

    analyst = mesh.get_agent("analyst")

    # Inject a test message
    analyst.peers.inject_message(
        sender="tester",
        message_type=MessageType.REQUEST,
        data={"query": "test data"},
        correlation_id="test-123"
    )

    # Receive and verify
    msg = await analyst.peers.receive(timeout=1)
    assert msg is not None
    assert msg.data["query"] == "test data"

    await mesh.stop()
```

### Mocking Peer Responses

```python
@pytest.mark.asyncio
async def test_coordinator_delegates():
    class CoordinatorAgent(CustomAgent):
        role = "coordinator"
        capabilities = ["coordination"]

        async def on_peer_request(self, msg):
            # This agent delegates to analyst
            analysis = await self.peers.request("analyst", {"data": msg.data})
            return {"coordinated": True, "analysis": analysis}

    mesh = MockMesh()
    mesh.add(CoordinatorAgent)
    await mesh.start()

    coordinator = mesh.get_agent("coordinator")

    # Mock the analyst response
    coordinator.peers.add_mock_peer("analyst", capabilities=["analysis"])
    coordinator.peers.set_mock_response("analyst", {"result": "mocked analysis"})

    # Test the flow
    response = await coordinator.peers.request("analyst", {"test": "data"})

    assert response["result"] == "mocked analysis"
    coordinator.peers.assert_requested("analyst")

    await mesh.stop()
```

### Assertion Helpers

```python
# Verify notifications were sent
agent.peers.assert_notified("target_role")
agent.peers.assert_notified("target", message_contains={"event": "completed"})

# Verify requests were sent
agent.peers.assert_requested("analyst")
agent.peers.assert_requested("analyst", message_contains={"query": "test"})

# Verify broadcasts
agent.peers.assert_broadcasted()
agent.peers.assert_broadcasted(message_contains={"alert": "important"})

# Access sent messages for custom assertions
notifications = agent.peers.get_sent_notifications()
requests = agent.peers.get_sent_requests()
broadcasts = agent.peers.get_sent_broadcasts()

# Reset between tests
agent.peers.reset()
```

### Custom Response Handler

```python
async def dynamic_handler(target, message, context):
    """Return different responses based on message content."""
    if "urgent" in message.get("query", ""):
        return {"priority": "high", "result": "fast response"}
    return {"priority": "normal", "result": "standard response"}

agent.peers.set_request_handler(dynamic_handler)
```

---

## API Reference

### CustomAgent Class Attributes

@@ -1855,20 +2291,22 @@ if agent.peers:

| `leave_mesh()` | Both | **(v0.3.0)** Gracefully leave the mesh |
| `serve_forever()` | Both | **(v0.3.0)** Block until shutdown signal |

### P2P Message Handlers (v0.3.1)

CustomAgent includes built-in P2P message handlers for handler-based communication.

| Attribute/Method | Type | Description |
|------------------|------|-------------|
| `listen_timeout` | `float` | Seconds to wait for messages in `run()` loop. Default: 1.0 |
| `auto_respond` | `bool` | Auto-send `on_peer_request` return value. Default: True |
| `on_peer_request(msg)` | async method | Handle incoming requests. Return value sent as response |
| `on_peer_notify(msg)` | async method | Handle broadcast notifications. No return needed |
| `on_error(error, msg)` | async method | Handle errors during message processing |
| `run()` | async method | Built-in listener loop that dispatches to handlers |

**Note:** Override `on_peer_request()` and `on_peer_notify()` for your business logic. The `run()` method handles the message dispatch automatically.

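Conceptually, the built-in `run()` loop pulls messages with `listen_timeout` and routes them to the handlers in the table above, auto-sending request results when `auto_respond` is on. The following self-contained sketch mirrors that dispatch; `Msg`, `HandlerLoop`, and the in-memory queue are illustrative stand-ins, not the actual jarviscore source.

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class Msg:
    type: str                          # "request" or "notify"
    data: dict = field(default_factory=dict)


class HandlerLoop:
    """Illustrative dispatch loop mirroring CustomAgent's run() behavior."""
    listen_timeout = 1.0
    auto_respond = True

    def __init__(self, queue: asyncio.Queue):
        self.queue = queue
        self.responses = []            # stands in for sending a reply to the peer

    async def on_peer_request(self, msg):
        return {"echo": msg.data}

    async def on_peer_notify(self, msg):
        pass                           # broadcasts need no reply

    async def on_error(self, error, msg):
        self.responses.append({"error": str(error)})

    async def run_once(self):
        try:
            msg = await asyncio.wait_for(self.queue.get(), self.listen_timeout)
        except asyncio.TimeoutError:
            return                     # idle tick; a real loop would iterate again
        try:
            if msg.type == "request":
                result = await self.on_peer_request(msg)
                if self.auto_respond and result is not None:
                    self.responses.append(result)   # auto-send the return value
            elif msg.type == "notify":
                await self.on_peer_notify(msg)
        except Exception as exc:
            await self.on_error(exc, msg)


async def demo():
    q = asyncio.Queue()
    loop = HandlerLoop(q)
    await q.put(Msg("request", {"query": "ping"}))
    await loop.run_once()
    return loop.responses


print(asyncio.run(demo()))  # [{'echo': {'query': 'ping'}}]
```

This is why overriding the handlers is usually all you need: the receive/dispatch/respond plumbing is already in place.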
### Why `execute_task()` Exists in CustomAgent

You may notice that P2P agents must implement `execute_task()` even though they primarily use `run()`. Here's why:

@@ -2172,10 +2610,10 @@ For complete, runnable examples, see:

- `examples/customagent_p2p_example.py` - P2P mode with LLM-driven peer communication
- `examples/customagent_distributed_example.py` - Distributed mode with workflows
- `examples/customagent_cognitive_discovery_example.py` - CustomAgent + cognitive discovery (v0.3.0)
- `examples/fastapi_integration_example.py` - FastAPI + JarvisLifespan (v0.3.0)
- `examples/cloud_deployment_example.py` - Self-registration with join_mesh (v0.3.0)

---

*CustomAgent Guide - JarvisCore Framework v0.3.2*