jarviscore-framework 0.3.0__py3-none-any.whl → 0.3.1__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- examples/cloud_deployment_example.py +3 -3
- examples/{listeneragent_cognitive_discovery_example.py → customagent_cognitive_discovery_example.py} +6 -6
- examples/fastapi_integration_example.py +4 -4
- jarviscore/__init__.py +8 -11
- jarviscore/cli/smoketest.py +1 -1
- jarviscore/core/mesh.py +9 -0
- jarviscore/data/examples/cloud_deployment_example.py +3 -3
- jarviscore/data/examples/custom_profile_decorator.py +134 -0
- jarviscore/data/examples/custom_profile_wrap.py +168 -0
- jarviscore/data/examples/{listeneragent_cognitive_discovery_example.py → customagent_cognitive_discovery_example.py} +6 -6
- jarviscore/data/examples/fastapi_integration_example.py +4 -4
- jarviscore/docs/API_REFERENCE.md +32 -45
- jarviscore/docs/CHANGELOG.md +42 -0
- jarviscore/docs/CONFIGURATION.md +1 -1
- jarviscore/docs/CUSTOMAGENT_GUIDE.md +246 -153
- jarviscore/docs/GETTING_STARTED.md +186 -329
- jarviscore/docs/TROUBLESHOOTING.md +1 -1
- jarviscore/docs/USER_GUIDE.md +8 -9
- jarviscore/integrations/fastapi.py +4 -4
- jarviscore/p2p/peer_client.py +29 -2
- jarviscore/profiles/__init__.py +2 -4
- jarviscore/profiles/customagent.py +295 -74
- {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.1.dist-info}/METADATA +61 -46
- {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.1.dist-info}/RECORD +30 -29
- tests/test_13_dx_improvements.py +37 -37
- tests/test_15_llm_cognitive_discovery.py +18 -18
- tests/test_16_unified_dx_flow.py +3 -3
- jarviscore/profiles/listeneragent.py +0 -292
- {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.1.dist-info}/WHEEL +0 -0
- {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.1.dist-info}/licenses/LICENSE +0 -0
- {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.1.dist-info}/top_level.txt +0 -0
The hunks below are from `jarviscore/docs/CUSTOMAGENT_GUIDE.md` (+246 -153). Old-side lines truncated by the diff viewer are shown as-is.

````diff
@@ -11,11 +11,11 @@ CustomAgent lets you integrate your **existing agent code** with JarvisCore's ne
 
 1. [Prerequisites](#prerequisites)
 2. [Choose Your Mode](#choose-your-mode)
-3. [P2P Mode](#p2p-mode)
-4. [
-5. [
-6. [
-7. [
+3. [P2P Mode](#p2p-mode) - Handler-based peer communication
+4. [Distributed Mode](#distributed-mode) - Workflow tasks + P2P
+5. [Cognitive Discovery (v0.3.0)](#cognitive-discovery-v030) - Dynamic peer awareness for LLMs
+6. [FastAPI Integration (v0.3.0)](#fastapi-integration-v030) - 3-line setup with JarvisLifespan
+7. [Framework Integration Patterns](#framework-integration-patterns) - aiohttp, Flask, Django
 8. [Cloud Deployment (v0.3.0)](#cloud-deployment-v030) - Self-registration for containers
 9. [API Reference](#api-reference)
 10. [Multi-Node Deployment](#multi-node-deployment)
````
````diff
@@ -102,7 +102,7 @@ class MyLLMClient:
 
 ### Quick Comparison
 
-| Feature | P2P Mode (CustomAgent) | P2P Mode (
+| Feature | P2P Mode (CustomAgent) | P2P Mode (CustomAgent) | Distributed Mode |
 |---------|------------------------|--------------------------|------------------|
 | **Primary method** | `run()` - continuous loop | `on_peer_request()` handlers | `execute_task()` - on-demand |
 | **Communication** | Direct peer messaging | Handler-based (no loop) | Workflow orchestration |
````
````diff
@@ -110,7 +110,7 @@ class MyLLMClient:
 | **Coordination** | Agents self-coordinate | Framework handles loop | Framework coordinates |
 | **Supports workflows** | No | No | Yes |
 
-> **
+> **CustomAgent** includes built-in P2P handlers - just implement `on_peer_request()` and `on_peer_notify()`. No need to write your own `run()` loop.
 
 ---
 
````
````diff
@@ -118,6 +118,46 @@ class MyLLMClient:
 
 P2P mode is for agents that run continuously and communicate directly with each other.
 
+### v0.3.1 Update: Handler-Based Pattern
+
+**We've simplified P2P agents!** No more manual `run()` loops.
+
+```
+┌────────────────────────────────────────────────────────────────┐
+│                       OLD vs NEW Pattern                       │
+├────────────────────────────────────────────────────────────────┤
+│                                                                │
+│  ❌ OLD (v0.2.x) - Manual Loop                                 │
+│  ┌──────────────────────────────────────────────┐              │
+│  │ async def run(self):                         │              │
+│  │     while not self.shutdown_requested:       │              │
+│  │         msg = await self.peers.receive()     │ ← Polling    │
+│  │         if msg and msg.is_request:           │              │
+│  │             result = self.process(msg)       │              │
+│  │             await self.peers.respond(...)    │ ← Manual     │
+│  │         await asyncio.sleep(0.1)             │              │
+│  └──────────────────────────────────────────────┘              │
+│                                                                │
+│  ✅ NEW (v0.3.0+) - Handler-Based                              │
+│  ┌──────────────────────────────────────────────┐              │
+│  │ async def on_peer_request(self, msg):        │              │
+│  │     result = self.process(msg)               │              │
+│  │     return result                            │ ← Simple!    │
+│  └──────────────────────────────────────────────┘              │
+│                      ▲                                         │
+│                      │                                         │
+│                      └─ Framework calls this automatically     │
+│                                                                │
+└────────────────────────────────────────────────────────────────┘
+```
+
+**Benefits:**
+- ✅ **Less Code**: No boilerplate loops
+- ✅ **Simpler**: Just return your result
+- ✅ **Automatic**: Framework handles message dispatch
+- ✅ **Error Handling**: Built-in exception capture
+- ✅ **FastAPI Ready**: Works with `JarvisLifespan` out of the box
+
 ### Migration Overview
 
 ```
````
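As an aside, the OLD-vs-NEW contrast in the hunk above can be sketched with plain asyncio. Everything below (`Msg`, `framework_run`, the queue-as-transport) is illustrative stand-in code, not jarviscore's actual API: it only shows the dispatch idea — the framework owns the receive loop and the user supplies a handler.

```python
import asyncio
from dataclasses import dataclass


@dataclass
class Msg:
    # Illustrative stand-in for a peer message (not the real jarviscore type).
    data: dict


class HandlerAgent:
    # Agent code in the new style: just a handler, no loop.
    async def on_peer_request(self, msg: Msg) -> dict:
        return {"response": f"processed {msg.data['question']}"}


async def framework_run(agent, inbox: asyncio.Queue) -> list:
    # Sketch of a framework-owned run() loop: it waits on the transport and
    # dispatches each request to the agent's handler, so user code never
    # calls receive()/respond() itself.
    sent = []
    while True:
        msg = await inbox.get()
        if msg is None:  # shutdown sentinel
            return sent
        sent.append(await agent.on_peer_request(msg))


async def main() -> list:
    inbox: asyncio.Queue = asyncio.Queue()
    await inbox.put(Msg({"question": "market trends"}))
    await inbox.put(None)
    return await framework_run(HandlerAgent(), inbox)


responses = asyncio.run(main())
print(responses)  # [{'response': 'processed market trends'}]
```

The point of the contrast: the polling loop, the `is_request` branch, and the explicit respond call all move into `framework_run`, leaving the agent with a single pure-ish coroutine.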
````diff
@@ -163,59 +203,89 @@ if __name__ == "__main__":
 
 ### Step 3: Modify Your Agent Code → `agents.py`
 
-
+**🚨 IMPORTANT CHANGE (v0.3.0+)**: We've moved from `run()` loops to **handler-based** agents!
 
+#### ❌ OLD Pattern (Deprecated)
 ```python
-#
-
+# DON'T DO THIS ANYMORE!
+class ResearcherAgent(CustomAgent):
+    async def run(self):  # ❌ Manual loop
+        while not self.shutdown_requested:
+            msg = await self.peers.receive(timeout=0.5)
+            if msg and msg.is_request:
+                result = self.llm.chat(f"Research: {msg.data['question']}")
+                await self.peers.respond(msg, {"response": result})
+            await asyncio.sleep(0.1)
+```
+**Problems**: Manual loops, boilerplate, error-prone
+
+#### ✅ NEW Pattern (Recommended)
+```python
+# agents.py (MODERN VERSION)
 from jarviscore.profiles import CustomAgent
 
 
 class ResearcherAgent(CustomAgent):
-    """Your agent, now framework-integrated."""
+    """Your agent, now framework-integrated with handlers."""
 
-    #
+    # Required class attributes for discovery
     role = "researcher"
     capabilities = ["research", "analysis"]
+    description = "Research specialist that gathers and synthesizes information"
 
     async def setup(self):
-        """
+        """Called once on startup. Initialize your LLM here."""
         await super().setup()
         self.llm = MyLLMClient()  # Your existing initialization
 
-    async def
-        """
-
-
-
-
-
-
-
-
-
-
+    async def on_peer_request(self, msg):
+        """
+        Handle incoming requests from other agents.
+
+        This is called AUTOMATICALLY when another agent asks you a question.
+        No loops, no polling, no boilerplate!
+        """
+        query = msg.data.get("question", "")
+
+        # YOUR EXISTING LOGIC:
+        result = self.llm.chat(f"Research: {query}")
+
+        # Just return the data - framework handles the response
+        return {"response": result}
 
     async def execute_task(self, task: dict) -> dict:
         """
-        Required by base Agent class
-
-        In P2P mode, your
-        This
-        to be implemented, or you get TypeError on instantiation.
+        Required by base Agent class for workflow mode.
+
+        In pure P2P mode, your logic is in on_peer_request().
+        This is used when agent is part of a workflow pipeline.
         """
-        return {"status": "success", "note": "This agent uses
+        return {"status": "success", "note": "This agent uses handlers for P2P mode"}
 ```
 
 **What changed:**
 
-| Before | After |
-|
-| `
-| `
-| `
-|
+| Before (v0.2.x) | After (v0.3.0+) | Why? |
+|-----------------|-----------------|------|
+| `async def run(self):` with `while` loop | `async def on_peer_request(self, msg):` handler | Automatic dispatch, less boilerplate |
+| Manual `await self.peers.receive()` | Framework calls your handler | No polling needed |
+| Manual `await self.peers.respond(msg, data)` | Just `return data` | Simpler error handling |
+| `asyncio.create_task(agent.run())` | Not needed - handlers run automatically | Cleaner lifecycle |
+
+#### Migration Checklist (v0.2.x → v0.3.0+)
 
-
+If you have existing agents using the `run()` loop pattern:
+
+- [ ] Replace `async def run(self):` with `async def on_peer_request(self, msg):`
+- [ ] Remove `while not self.shutdown_requested:` loop
+- [ ] Remove `msg = await self.peers.receive(timeout=0.5)` polling
+- [ ] Change `await self.peers.respond(msg, data)` to `return data`
+- [ ] Remove manual `asyncio.create_task(agent.run())` calls in main.py
+- [ ] Consider using `JarvisLifespan` for FastAPI integration (see Step 4)
+- [ ] Add `description` class attribute for better cognitive discovery
+- [ ] Use `get_cognitive_context()` instead of hardcoded peer lists
+
+> **Note**: The `run()` method is **still supported** for backward compatibility, but handlers are now the recommended approach. For the full pattern with **LLM-driven peer communication** (where your LLM autonomously decides when to call other agents), see the [Complete Example](#complete-example-llm-driven-peer-communication) below.
 
 ### Step 4: Create New Entry Point → `main.py`
 
````
````diff
@@ -316,22 +386,22 @@ This is the **key pattern** for P2P mode. Your LLM gets peer tools added to its
 **The key insight**: You add peer tools to your LLM's toolset. The LLM decides when to use them.
 
 ```python
-# agents.py
-import asyncio
+# agents.py - UPDATED FOR v0.3.0+
 from jarviscore.profiles import CustomAgent
 
 
 class AnalystAgent(CustomAgent):
     """
-    Analyst agent -
+    Analyst agent - specialist in data analysis.
 
-
-
-
-
+    NEW PATTERN (v0.3.0+):
+    - Uses @on_peer_request HANDLER instead of run() loop
+    - Automatically receives and responds to peer requests
+    - No manual message polling needed!
     """
     role = "analyst"
     capabilities = ["analysis", "data_interpretation", "reporting"]
+    description = "Expert data analyst for statistics and insights"
 
     async def setup(self):
         await super().setup()
````
````diff
@@ -406,19 +476,20 @@ Analyze data thoroughly and provide insights."""
 
         return response.get("content", "Analysis complete.")
 
-    async def
-        """
-
-
-
-
-
+    async def on_peer_request(self, msg):
+        """
+        Handle incoming requests from peers.
+
+        ✅ NEW: This is called automatically when another agent sends a request.
+        ❌ OLD: Manual while loop with receive() polling
+        """
+        query = msg.data.get("question", msg.data.get("query", ""))
 
-
-
+        # Process with LLM
+        result = await self.process_with_llm(query)
 
-
-
+        # Just return the data - framework handles the response!
+        return {"response": result}
 
     async def execute_task(self, task: dict) -> dict:
         """Required by base class."""
````
````diff
@@ -429,13 +500,16 @@ class AssistantAgent(CustomAgent):
     """
     Assistant agent - coordinates with other specialists.
 
-
+    NEW PATTERN (v0.3.0+):
     1. Has its own LLM for reasoning
-    2.
-    3.
+    2. Uses get_cognitive_context() to discover available peers
+    3. Peer tools (ask_peer, broadcast) added to LLM toolset
+    4. LLM AUTONOMOUSLY decides when to ask other agents
+    5. Uses on_peer_request handler instead of run() loop
     """
     role = "assistant"
     capabilities = ["chat", "coordination", "search"]
+    description = "General assistant that delegates specialized tasks to experts"
 
     async def setup(self):
         await super().setup()
````
````diff
@@ -535,16 +609,16 @@ Be concise in your responses."""
 
         return response.get("content", "")
 
-    async def
-        """
-
-
-
-
-
-
-
-
+    async def on_peer_request(self, msg):
+        """
+        Handle incoming requests from other agents.
+
+        ✅ NEW: Handler-based - called automatically on request
+        ❌ OLD: Manual while loop with receive() polling
+        """
+        query = msg.data.get("query", "")
+        result = await self.chat(query)
+        return {"response": result}
 
     async def execute_task(self, task: dict) -> dict:
         """Required by base class."""
````
````diff
@@ -552,13 +626,14 @@ Be concise in your responses."""
 ```
 
 ```python
-# main.py
+# main.py - UPDATED FOR v0.3.0+ (Handler-Based Pattern)
 import asyncio
 from jarviscore import Mesh
 from agents import AnalystAgent, AssistantAgent
 
 
 async def main():
+    """Simple P2P mesh without web server."""
     mesh = Mesh(
         mode="p2p",
         config={
````
````diff
@@ -567,17 +642,15 @@ async def main():
         }
     )
 
-    # Add both agents
+    # Add both agents - they'll use handlers automatically
     mesh.add(AnalystAgent)
     assistant = mesh.add(AssistantAgent)
 
     await mesh.start()
 
-    #
-
-
-
-    # Give time for setup
+    # ✅ NO MORE MANUAL run() TASKS! Handlers are automatic.
+
+    # Give time for mesh to stabilize
     await asyncio.sleep(0.5)
 
     # User asks a question - LLM will autonomously decide to use ask_peer
````
````diff
@@ -590,8 +663,6 @@ async def main():
     # Output: [{'tool': 'ask_peer', 'args': {'role': 'analyst', 'question': '...'}}]
 
     # Cleanup
-    analyst.request_shutdown()
-    analyst_task.cancel()
     await mesh.stop()
 
 
````
````diff
@@ -599,6 +670,59 @@ if __name__ == "__main__":
     asyncio.run(main())
 ```
 
+**Or better yet, use FastAPI + JarvisLifespan:**
+
+```python
+# main.py - PRODUCTION PATTERN (FastAPI + JarvisLifespan)
+from fastapi import FastAPI, Request
+from fastapi.responses import JSONResponse
+from jarviscore.integrations import JarvisLifespan
+from agents import AnalystAgent, AssistantAgent
+import uvicorn
+
+
+# ✅ ONE-LINE MESH SETUP with JarvisLifespan!
+app = FastAPI(lifespan=JarvisLifespan([AnalystAgent, AssistantAgent]))
+
+
+@app.post("/chat")
+async def chat(request: Request):
+    """Chat endpoint - assistant may autonomously delegate to analyst."""
+    data = await request.json()
+    message = data.get("message", "")
+
+    # Get assistant from mesh (JarvisLifespan manages it)
+    assistant = app.state.mesh.get_agent("assistant")
+
+    # Chat - LLM autonomously discovers and delegates if needed
+    response = await assistant.chat(message)
+
+    return JSONResponse(response)
+
+
+@app.get("/agents")
+async def list_agents():
+    """Show what each agent sees (cognitive context)."""
+    mesh = app.state.mesh
+    agents_info = {}
+
+    for agent in mesh.agents:
+        if agent.peers:
+            context = agent.peers.get_cognitive_context(format="markdown")
+            agents_info[agent.role] = {
+                "role": agent.role,
+                "capabilities": agent.capabilities,
+                "peers_visible": len(agent.peers.get_all_peers()),
+                "cognitive_context": context[:200] + "..."
+            }
+
+    return JSONResponse(agents_info)
+
+
+if __name__ == "__main__":
+    uvicorn.run(app, host="0.0.0.0", port=8000)
+```
+
 ### Key Concepts for P2P Mode
 
 #### Adding Peer Tools to Your LLM
````
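The `JarvisLifespan` usage in the hunk above follows the standard ASGI lifespan shape: start the mesh before the app serves traffic, expose it on `app.state`, and stop it on shutdown. A framework-agnostic sketch of that contract — `FakeMesh`, `make_lifespan`, and the `SimpleNamespace` app are stand-ins, not jarviscore or FastAPI code:

```python
import asyncio
from contextlib import asynccontextmanager
from types import SimpleNamespace


class FakeMesh:
    # Stand-in for jarviscore's Mesh: just records lifecycle state.
    def __init__(self, agents):
        self.agents = agents
        self.running = False

    async def start(self):
        self.running = True

    async def stop(self):
        self.running = False


def make_lifespan(agent_classes):
    # Sketch of what a JarvisLifespan-style helper plausibly does:
    # build the mesh, start it on app startup, park it on app.state,
    # and stop it cleanly on shutdown.
    @asynccontextmanager
    async def lifespan(app):
        mesh = FakeMesh([cls() for cls in agent_classes])
        await mesh.start()
        app.state.mesh = mesh
        try:
            yield
        finally:
            await mesh.stop()
    return lifespan


class Worker:
    role = "worker"


async def demo():
    app = SimpleNamespace(state=SimpleNamespace())
    lifespan = make_lifespan([Worker])
    async with lifespan(app):
        during = app.state.mesh.running  # mesh is up while the app serves
    return during, app.state.mesh.running  # stopped after shutdown


during, after = asyncio.run(demo())
print(during, after)  # True False
```

Handing FastAPI an async context manager via `lifespan=` is exactly how the real integration plugs in; the sketch only fakes the mesh side.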
````diff
@@ -666,46 +790,16 @@ async def run(self):
 
 ---
 
-##
-
-**ListenerAgent** is for developers who want P2P communication without writing the `run()` loop themselves.
-
-### The Problem with CustomAgent for P2P
-
-Every P2P CustomAgent needs this boilerplate:
-
-```python
-# BEFORE (CustomAgent) - You write the same loop every time
-class MyAgent(CustomAgent):
-    role = "processor"
-    capabilities = ["processing"]
-
-    async def run(self):
-        """You have to write this loop for every P2P agent."""
-        while not self.shutdown_requested:
-            if self.peers:
-                msg = await self.peers.receive(timeout=0.5)
-                if msg and msg.is_request:
-                    # Handle request
-                    result = self.process(msg.data)
-                    await self.peers.respond(msg, {"response": result})
-                elif msg and msg.is_notify:
-                    # Handle notification
-                    self.handle_notify(msg.data)
-            await asyncio.sleep(0.1)
+## P2P Message Handlers
 
-
-        """Still required even though you're using run()."""
-        return {"status": "success"}
-```
+CustomAgent includes built-in handlers for P2P communication - just implement the handlers you need.
 
-###
+### Handler-Based P2P (Recommended)
 
 ```python
-
-from jarviscore.profiles import ListenerAgent
+from jarviscore.profiles import CustomAgent
 
-class MyAgent(
+class MyAgent(CustomAgent):
     role = "processor"
     capabilities = ["processing"]
 
````
````diff
@@ -718,27 +812,25 @@ class MyAgent(ListenerAgent):
         print(f"Notification received: {msg.data}")
 ```
 
-**What you no longer need:**
-- ❌ `run()` loop with `while not self.shutdown_requested`
-- ❌ `self.peers.receive()` and `self.peers.respond()` boilerplate
-- ❌ `execute_task()` stub method
-- ❌ `asyncio.sleep()` timing
-
 **What the framework handles:**
--
--
--
--
--
+- Message receiving loop (`run()` is built-in)
+- Routing requests to `on_peer_request()`
+- Routing notifications to `on_peer_notify()`
+- Automatic response sending (configurable with `auto_respond`)
+- Shutdown handling
 
-
+**Configuration:**
+- `listen_timeout` (float): Seconds to wait for messages (default: 1.0)
+- `auto_respond` (bool): Auto-send `on_peer_request()` return value (default: True)
+
+### Complete P2P Example
 
 ```python
 # agents.py
-from jarviscore.profiles import
+from jarviscore.profiles import CustomAgent
 
 
-class AnalystAgent(
+class AnalystAgent(CustomAgent):
     """A data analyst that responds to peer requests."""
 
     role = "analyst"
````
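The `listen_timeout` and `auto_respond` knobs documented above suggest a listener loop roughly like the following. This is a guess at the semantics for illustration only (`MiniListener` and its queue transport are not jarviscore's actual implementation): wait up to `listen_timeout` for a message, route it to the matching handler, and send the request handler's return value back only when `auto_respond` is enabled.

```python
import asyncio


class MiniListener:
    # Conceptual sketch of a built-in listener loop with listen_timeout
    # and auto_respond, as described in the docs above.
    listen_timeout = 1.0
    auto_respond = True

    def __init__(self):
        self.inbox: asyncio.Queue = asyncio.Queue()
        self.responses = []          # stand-in for responses sent to peers
        self.shutdown = False

    async def on_peer_request(self, msg) -> dict:
        return {"echo": msg["data"]}

    async def on_peer_notify(self, msg) -> None:
        pass  # notifications need no response

    async def run(self):
        while not self.shutdown:
            try:
                msg = await asyncio.wait_for(self.inbox.get(), self.listen_timeout)
            except asyncio.TimeoutError:
                continue  # no message this interval; check shutdown and loop
            if msg["kind"] == "request":
                result = await self.on_peer_request(msg)
                if self.auto_respond and result is not None:
                    self.responses.append(result)  # stand-in for peers.respond()
            else:
                await self.on_peer_notify(msg)
            self.shutdown = self.inbox.empty()  # stop once drained (demo only)


agent = MiniListener()
agent.inbox.put_nowait({"kind": "request", "data": "ping"})
asyncio.run(agent.run())
print(agent.responses)  # [{'echo': 'ping'}]
```

Setting `auto_respond = False` in such a design would let a handler take over response delivery itself, which matches the "configurable" wording in the list above.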
````diff
@@ -778,7 +870,7 @@ class AnalystAgent(ListenerAgent):
         print(f"[{self.role}] Received notification: {msg.data}")
 
 
-class AssistantAgent(
+class AssistantAgent(CustomAgent):
     """An assistant that coordinates with specialists."""
 
     role = "assistant"
````
````diff
@@ -828,18 +920,17 @@ if __name__ == "__main__":
     asyncio.run(main())
 ```
 
-### When to Use
+### When to Use Handlers vs Custom run()
 
-| Use
-|
-|
-|
-| You
-| You want less boilerplate | You have complex coordination logic |
+| Use handlers (`on_peer_request`) when... | Override `run()` when... |
+|------------------------------------------|--------------------------|
+| Request/response pattern fits your use case | You need custom message loop timing |
+| You're integrating with FastAPI | You need to initiate messages proactively |
+| You want minimal boilerplate | You have complex coordination logic |
 
-###
+### CustomAgent with FastAPI
 
-
+CustomAgent works seamlessly with FastAPI. See [FastAPI Integration](#fastapi-integration-v030) below.
 
 ---
 
````
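The right-hand column of the table above ("initiate messages proactively") is the main remaining reason to write a custom `run()`. A minimal sketch of that case — `HeartbeatAgent` and its `broadcast` stub are hypothetical, standing in for an agent that pushes messages on its own schedule rather than only reacting to requests:

```python
import asyncio


class HeartbeatAgent:
    # Sketch of the "override run()" case: the agent drives its own loop
    # because it originates messages instead of answering them.
    def __init__(self, interval: float = 0.01, beats: int = 3):
        self.interval = interval
        self.beats = beats
        self.sent = []

    async def broadcast(self, payload: dict):
        # Stand-in for something like self.peers.broadcast(...)
        self.sent.append(payload)

    async def run(self):
        for n in range(self.beats):
            await self.broadcast({"heartbeat": n})
            await asyncio.sleep(self.interval)  # custom loop timing


agent = HeartbeatAgent()
asyncio.run(agent.run())
print(len(agent.sent))  # 3
```

A request/response handler has no natural place for this kind of timer-driven behavior, which is why the table keeps `run()` for it.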
````diff
@@ -1446,11 +1537,11 @@ async def process(data: dict):
 ```python
 # AFTER: 3 lines to integrate
 from fastapi import FastAPI
-from jarviscore.profiles import
+from jarviscore.profiles import CustomAgent
 from jarviscore.integrations.fastapi import JarvisLifespan
 
 
-class ProcessorAgent(
+class ProcessorAgent(CustomAgent):
     role = "processor"
     capabilities = ["processing"]
 
````
````diff
@@ -1492,7 +1583,7 @@ app = FastAPI(
 # app.py
 from fastapi import FastAPI, HTTPException
 from pydantic import BaseModel
-from jarviscore.profiles import
+from jarviscore.profiles import CustomAgent
 from jarviscore.integrations.fastapi import JarvisLifespan
 
 
````
````diff
@@ -1500,7 +1591,7 @@ class AnalysisRequest(BaseModel):
     data: str
 
 
-class AnalystAgent(
+class AnalystAgent(CustomAgent):
     """Agent that handles both API requests and P2P messages."""
 
     role = "analyst"
````
````diff
@@ -1634,11 +1725,11 @@ await mesh.start()
 # Each agent can join any mesh independently
 
 # agent_container.py (runs in Docker/K8s)
-from jarviscore.profiles import
+from jarviscore.profiles import CustomAgent
 import os
 
 
-class WorkerAgent(
+class WorkerAgent(CustomAgent):
     role = "worker"
     capabilities = ["processing"]
 
````
````diff
@@ -1685,10 +1776,10 @@ CMD ["python", "agent.py"]
 # agent.py
 import asyncio
 import os
-from jarviscore.profiles import
+from jarviscore.profiles import CustomAgent
 
 
-class WorkerAgent(
+class WorkerAgent(CustomAgent):
     role = "worker"
     capabilities = ["processing"]
 
````
````diff
@@ -1855,20 +1946,22 @@ if agent.peers:
 | `leave_mesh()` | Both | **(v0.3.0)** Gracefully leave the mesh |
 | `serve_forever()` | Both | **(v0.3.0)** Block until shutdown signal |
 
-###
+### P2P Message Handlers (v0.3.1)
 
-
+CustomAgent includes built-in P2P message handlers for handler-based communication.
 
 | Attribute/Method | Type | Description |
 |------------------|------|-------------|
-| `
-| `
-| `on_peer_request(msg)` | async method | Handle incoming requests. Return
+| `listen_timeout` | `float` | Seconds to wait for messages in `run()` loop. Default: 1.0 |
+| `auto_respond` | `bool` | Auto-send `on_peer_request` return value. Default: True |
+| `on_peer_request(msg)` | async method | Handle incoming requests. Return value sent as response |
 | `on_peer_notify(msg)` | async method | Handle broadcast notifications. No return needed |
+| `on_error(error, msg)` | async method | Handle errors during message processing |
+| `run()` | async method | Built-in listener loop that dispatches to handlers |
 
-**Note:**
+**Note:** Override `on_peer_request()` and `on_peer_notify()` for your business logic. The `run()` method handles the message dispatch automatically.
 
-### Why `execute_task()`
+### Why `execute_task()` Exists in CustomAgent
 
 You may notice that P2P agents must implement `execute_task()` even though they primarily use `run()`. Here's why:
 
````
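The `on_error(error, msg)` row in the table above implies the built-in dispatch wraps each handler call in a try/except so one bad message cannot kill the listener loop. A sketch of that contract — `SafeDispatchAgent` is illustrative, not jarviscore's code:

```python
import asyncio


class SafeDispatchAgent:
    # Sketch of built-in exception capture: failures inside the request
    # handler are routed to on_error instead of propagating out of the
    # framework's dispatch loop.
    def __init__(self):
        self.errors = []

    async def on_peer_request(self, msg: dict) -> dict:
        return {"value": 10 / msg["divisor"]}  # raises on divisor == 0

    async def on_error(self, error: Exception, msg: dict) -> None:
        self.errors.append((type(error).__name__, msg))

    async def dispatch(self, msg: dict):
        # What the framework's loop plausibly does around each handler call.
        try:
            return await self.on_peer_request(msg)
        except Exception as exc:
            await self.on_error(exc, msg)
            return None


agent = SafeDispatchAgent()
ok = asyncio.run(agent.dispatch({"divisor": 2}))
bad = asyncio.run(agent.dispatch({"divisor": 0}))
print(ok, bad, agent.errors)
```

Under this reading, `on_error` is where you would log, emit metrics, or send a structured error response back to the requesting peer.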
````diff
@@ -2172,10 +2265,10 @@ For complete, runnable examples, see:
 
 - `examples/customagent_p2p_example.py` - P2P mode with LLM-driven peer communication
 - `examples/customagent_distributed_example.py` - Distributed mode with workflows
-- `examples/
+- `examples/customagent_cognitive_discovery_example.py` - CustomAgent + cognitive discovery (v0.4.0)
 - `examples/fastapi_integration_example.py` - FastAPI + JarvisLifespan (v0.3.0)
 - `examples/cloud_deployment_example.py` - Self-registration with join_mesh (v0.3.0)
 
 ---
 
-*CustomAgent Guide - JarvisCore Framework v0.3.
+*CustomAgent Guide - JarvisCore Framework v0.3.1*
````