jarviscore-framework 0.3.0__py3-none-any.whl → 0.3.2__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (43)
  1. examples/cloud_deployment_example.py +3 -3
  2. examples/{listeneragent_cognitive_discovery_example.py → customagent_cognitive_discovery_example.py} +55 -14
  3. examples/customagent_distributed_example.py +140 -1
  4. examples/fastapi_integration_example.py +74 -11
  5. jarviscore/__init__.py +8 -11
  6. jarviscore/cli/smoketest.py +1 -1
  7. jarviscore/core/mesh.py +158 -0
  8. jarviscore/data/examples/cloud_deployment_example.py +3 -3
  9. jarviscore/data/examples/custom_profile_decorator.py +134 -0
  10. jarviscore/data/examples/custom_profile_wrap.py +168 -0
  11. jarviscore/data/examples/{listeneragent_cognitive_discovery_example.py → customagent_cognitive_discovery_example.py} +55 -14
  12. jarviscore/data/examples/customagent_distributed_example.py +140 -1
  13. jarviscore/data/examples/fastapi_integration_example.py +74 -11
  14. jarviscore/docs/API_REFERENCE.md +576 -47
  15. jarviscore/docs/CHANGELOG.md +131 -0
  16. jarviscore/docs/CONFIGURATION.md +1 -1
  17. jarviscore/docs/CUSTOMAGENT_GUIDE.md +591 -153
  18. jarviscore/docs/GETTING_STARTED.md +186 -329
  19. jarviscore/docs/TROUBLESHOOTING.md +1 -1
  20. jarviscore/docs/USER_GUIDE.md +292 -12
  21. jarviscore/integrations/fastapi.py +4 -4
  22. jarviscore/p2p/coordinator.py +36 -7
  23. jarviscore/p2p/messages.py +13 -0
  24. jarviscore/p2p/peer_client.py +380 -21
  25. jarviscore/p2p/peer_tool.py +17 -11
  26. jarviscore/profiles/__init__.py +2 -4
  27. jarviscore/profiles/customagent.py +302 -74
  28. jarviscore/testing/__init__.py +35 -0
  29. jarviscore/testing/mocks.py +578 -0
  30. {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.2.dist-info}/METADATA +61 -46
  31. {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.2.dist-info}/RECORD +42 -34
  32. tests/test_13_dx_improvements.py +37 -37
  33. tests/test_15_llm_cognitive_discovery.py +18 -18
  34. tests/test_16_unified_dx_flow.py +3 -3
  35. tests/test_17_session_context.py +489 -0
  36. tests/test_18_mesh_diagnostics.py +465 -0
  37. tests/test_19_async_requests.py +516 -0
  38. tests/test_20_load_balancing.py +546 -0
  39. tests/test_21_mock_testing.py +776 -0
  40. jarviscore/profiles/listeneragent.py +0 -292
  41. {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.2.dist-info}/WHEEL +0 -0
  42. {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.2.dist-info}/licenses/LICENSE +0 -0
  43. {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.2.dist-info}/top_level.txt +0 -0
@@ -11,16 +11,21 @@ CustomAgent lets you integrate your **existing agent code** with JarvisCore's ne
 
  1. [Prerequisites](#prerequisites)
  2. [Choose Your Mode](#choose-your-mode)
- 3. [P2P Mode](#p2p-mode)
- 4. [ListenerAgent (v0.3.0)](#listeneragent-v030) - API-first agents without run() loops
- 5. [Distributed Mode](#distributed-mode)
- 6. [Cognitive Discovery (v0.3.0)](#cognitive-discovery-v030) - Dynamic peer awareness for LLMs
- 7. [FastAPI Integration (v0.3.0)](#fastapi-integration-v030) - 3-line setup with JarvisLifespan
+ 3. [P2P Mode](#p2p-mode) - Handler-based peer communication
+ 4. [Distributed Mode](#distributed-mode) - Workflow tasks + P2P
+ 5. [Cognitive Discovery (v0.3.0)](#cognitive-discovery-v030) - Dynamic peer awareness for LLMs
+ 6. [FastAPI Integration (v0.3.0)](#fastapi-integration-v030) - 3-line setup with JarvisLifespan
+ 7. [Framework Integration Patterns](#framework-integration-patterns) - aiohttp, Flask, Django
  8. [Cloud Deployment (v0.3.0)](#cloud-deployment-v030) - Self-registration for containers
  9. [API Reference](#api-reference)
  10. [Multi-Node Deployment](#multi-node-deployment)
  11. [Error Handling](#error-handling)
  12. [Troubleshooting](#troubleshooting)
+ 13. [Session Context Propagation (v0.3.2)](#session-context-propagation-v032) - Request tracking and metadata
+ 14. [Async Request Pattern (v0.3.2)](#async-request-pattern-v032) - Non-blocking parallel requests
+ 15. [Load Balancing Strategies (v0.3.2)](#load-balancing-strategies-v032) - Round-robin and random selection
+ 16. [Mesh Diagnostics (v0.3.2)](#mesh-diagnostics-v032) - Health monitoring and debugging
+ 17. [Testing with MockMesh (v0.3.2)](#testing-with-mockmesh-v032) - Unit testing patterns
 
  ---
 
@@ -102,7 +107,7 @@ class MyLLMClient:
 
  ### Quick Comparison
 
- | Feature | P2P Mode (CustomAgent) | P2P Mode (ListenerAgent) | Distributed Mode |
+ | Feature | P2P Mode (CustomAgent, run() loop) | P2P Mode (CustomAgent, handlers) | Distributed Mode |
  |---------|------------------------|--------------------------|------------------|
  | **Primary method** | `run()` - continuous loop | `on_peer_request()` handlers | `execute_task()` - on-demand |
  | **Communication** | Direct peer messaging | Handler-based (no loop) | Workflow orchestration |
@@ -110,7 +115,7 @@ class MyLLMClient:
  | **Coordination** | Agents self-coordinate | Framework handles loop | Framework coordinates |
  | **Supports workflows** | No | No | Yes |
 
- > **New in v0.3.0**: `ListenerAgent` lets you write P2P agents without managing the `run()` loop yourself. Just implement `on_peer_request()` and `on_peer_notify()` handlers.
+ > **CustomAgent** includes built-in P2P handlers - just implement `on_peer_request()` and `on_peer_notify()`. No need to write your own `run()` loop.
 
  ---
 
@@ -118,6 +123,46 @@ class MyLLMClient:
 
  P2P mode is for agents that run continuously and communicate directly with each other.
 
+ ### v0.3.0+ Update: Handler-Based Pattern
+
+ **We've simplified P2P agents!** No more manual `run()` loops.
+
+ ```
+ ┌────────────────────────────────────────────────────────────────┐
+ │                       OLD vs NEW Pattern                       │
+ ├────────────────────────────────────────────────────────────────┤
+ │                                                                │
+ │  ❌ OLD (v0.2.x) - Manual Loop                                 │
+ │  ┌──────────────────────────────────────────────┐              │
+ │  │ async def run(self):                         │              │
+ │  │     while not self.shutdown_requested:       │              │
+ │  │         msg = await self.peers.receive()     │  ← Polling   │
+ │  │         if msg and msg.is_request:           │              │
+ │  │             result = self.process(msg)       │              │
+ │  │             await self.peers.respond(...)    │  ← Manual    │
+ │  │         await asyncio.sleep(0.1)             │              │
+ │  └──────────────────────────────────────────────┘              │
+ │                                                                │
+ │  ✅ NEW (v0.3.0+) - Handler-Based                              │
+ │  ┌──────────────────────────────────────────────┐              │
+ │  │ async def on_peer_request(self, msg):        │              │
+ │  │     result = self.process(msg)               │              │
+ │  │     return result                            │  ← Simple!   │
+ │  └──────────────────────────────────────────────┘              │
+ │          ▲                                                     │
+ │          │                                                     │
+ │          └─ Framework calls this automatically                 │
+ │                                                                │
+ └────────────────────────────────────────────────────────────────┘
+ ```
+
+ **Benefits:**
+ - ✅ **Less Code**: No boilerplate loops
+ - ✅ **Simpler**: Just return your result
+ - ✅ **Automatic**: Framework handles message dispatch
+ - ✅ **Error Handling**: Built-in exception capture
+ - ✅ **FastAPI Ready**: Works with `JarvisLifespan` out of the box
+
  ### Migration Overview
 
  ```
@@ -163,59 +208,89 @@ if __name__ == "__main__":
 
  ### Step 3: Modify Your Agent Code → `agents.py`
 
- Convert your existing class to inherit from `CustomAgent`:
+ **🚨 IMPORTANT CHANGE (v0.3.0+)**: We've moved from `run()` loops to **handler-based** agents!
 
+ #### ❌ OLD Pattern (Deprecated)
  ```python
- # agents.py (MODIFIED VERSION OF YOUR CODE)
- import asyncio
+ # DON'T DO THIS ANYMORE!
+ class ResearcherAgent(CustomAgent):
+     async def run(self):  # ❌ Manual loop
+         while not self.shutdown_requested:
+             msg = await self.peers.receive(timeout=0.5)
+             if msg and msg.is_request:
+                 result = self.llm.chat(f"Research: {msg.data['question']}")
+                 await self.peers.respond(msg, {"response": result})
+             await asyncio.sleep(0.1)
+ ```
+ **Problems**: Manual loops, boilerplate, error-prone
+
+ #### ✅ NEW Pattern (Recommended)
+ ```python
+ # agents.py (MODERN VERSION)
  from jarviscore.profiles import CustomAgent
 
 
  class ResearcherAgent(CustomAgent):
-     """Your agent, now framework-integrated."""
+     """Your agent, now framework-integrated with handlers."""
 
-     # NEW: Required class attributes for discovery
+     # Required class attributes for discovery
      role = "researcher"
      capabilities = ["research", "analysis"]
+     description = "Research specialist that gathers and synthesizes information"
 
      async def setup(self):
-         """NEW: Called once on startup. Move your __init__ logic here."""
+         """Called once on startup. Initialize your LLM here."""
          await super().setup()
          self.llm = MyLLMClient()  # Your existing initialization
 
-     async def run(self):
-         """NEW: Main loop - replaces your if __name__ == '__main__' block."""
-         while not self.shutdown_requested:
-             if self.peers:
-                 msg = await self.peers.receive(timeout=0.5)
-                 if msg and msg.is_request:
-                     query = msg.data.get("question", "")
-                     # YOUR EXISTING LOGIC:
-                     result = self.llm.chat(f"Research: {query}")
-                     await self.peers.respond(msg, {"response": result})
-             await asyncio.sleep(0.1)
+     async def on_peer_request(self, msg):
+         """
+         Handle incoming requests from other agents.
+
+         This is called AUTOMATICALLY when another agent asks you a question.
+         No loops, no polling, no boilerplate!
+         """
+         query = msg.data.get("question", "")
+
+         # YOUR EXISTING LOGIC:
+         result = self.llm.chat(f"Research: {query}")
+
+         # Just return the data - framework handles the response
+         return {"response": result}
 
      async def execute_task(self, task: dict) -> dict:
          """
-         Required by base Agent class (@abstractmethod).
-
-         In P2P mode, your main logic lives in run(), not here.
-         This must exist because Python requires all abstract methods
-         to be implemented, or you get TypeError on instantiation.
+         Required by base Agent class for workflow mode.
+
+         In pure P2P mode, your logic is in on_peer_request().
+         This is used when the agent is part of a workflow pipeline.
          """
-         return {"status": "success", "note": "This agent uses run() for P2P mode"}
+         return {"status": "success", "note": "This agent uses handlers for P2P mode"}
  ```
 
  **What changed:**
 
- | Before | After |
- |--------|-------|
- | `class MyResearcher:` | `class ResearcherAgent(CustomAgent):` |
- | `def __init__(self):` | `async def setup(self):` + `await super().setup()` |
- | `if __name__ == "__main__":` | `async def run(self):` loop |
- | Direct method calls | Peer message handling |
+ | Before (v0.2.x) | After (v0.3.0+) | Why? |
+ |-----------------|-----------------|------|
+ | `async def run(self):` with `while` loop | `async def on_peer_request(self, msg):` handler | Automatic dispatch, less boilerplate |
+ | Manual `await self.peers.receive()` | Framework calls your handler | No polling needed |
+ | Manual `await self.peers.respond(msg, data)` | Just `return data` | Simpler error handling |
+ | `asyncio.create_task(agent.run())` | Not needed - handlers run automatically | Cleaner lifecycle |
+
+ #### Migration Checklist (v0.2.x → v0.3.0+)
 
- > **Note**: This is a minimal example. For the full pattern with **LLM-driven peer communication** (where your LLM autonomously decides when to call other agents), see the [Complete Example](#complete-example-llm-driven-peer-communication) below.
+ If you have existing agents using the `run()` loop pattern:
+
+ - [ ] Replace `async def run(self):` with `async def on_peer_request(self, msg):`
+ - [ ] Remove the `while not self.shutdown_requested:` loop
+ - [ ] Remove `msg = await self.peers.receive(timeout=0.5)` polling
+ - [ ] Change `await self.peers.respond(msg, data)` to `return data`
+ - [ ] Remove manual `asyncio.create_task(agent.run())` calls in main.py
+ - [ ] Consider using `JarvisLifespan` for FastAPI integration (see Step 4)
+ - [ ] Add a `description` class attribute for better cognitive discovery
+ - [ ] Use `get_cognitive_context()` instead of hardcoded peer lists
+
+ > **Note**: The `run()` method is **still supported** for backward compatibility, but handlers are now the recommended approach. For the full pattern with **LLM-driven peer communication** (where your LLM autonomously decides when to call other agents), see the [Complete Example](#complete-example-llm-driven-peer-communication) below.
 
  ### Step 4: Create New Entry Point → `main.py`
 
@@ -316,22 +391,22 @@ This is the **key pattern** for P2P mode. Your LLM gets peer tools added to its
  **The key insight**: You add peer tools to your LLM's toolset. The LLM decides when to use them.
 
  ```python
- # agents.py
- import asyncio
+ # agents.py - UPDATED FOR v0.3.0+
  from jarviscore.profiles import CustomAgent
 
 
  class AnalystAgent(CustomAgent):
      """
-     Analyst agent - specialists in data analysis.
+     Analyst agent - specialist in data analysis.
 
-     This agent:
-     1. Listens for incoming requests from peers
-     2. Processes requests using its own LLM
-     3. Responds with analysis results
+     NEW PATTERN (v0.3.0+):
+     - Uses the on_peer_request() HANDLER instead of a run() loop
+     - Automatically receives and responds to peer requests
+     - No manual message polling needed!
      """
      role = "analyst"
      capabilities = ["analysis", "data_interpretation", "reporting"]
+     description = "Expert data analyst for statistics and insights"
 
      async def setup(self):
          await super().setup()
@@ -406,19 +481,20 @@ Analyze data thoroughly and provide insights."""
 
          return response.get("content", "Analysis complete.")
 
-     async def run(self):
-         """Listen for incoming requests from peers."""
-         while not self.shutdown_requested:
-             if self.peers:
-                 msg = await self.peers.receive(timeout=0.5)
-                 if msg and msg.is_request:
-                     query = msg.data.get("question", msg.data.get("query", ""))
+     async def on_peer_request(self, msg):
+         """
+         Handle incoming requests from peers.
+
+         NEW: This is called automatically when another agent sends a request.
+         OLD: Manual while loop with receive() polling
+         """
+         query = msg.data.get("question", msg.data.get("query", ""))
 
-                     # Process with LLM
-                     result = await self.process_with_llm(query)
+         # Process with LLM
+         result = await self.process_with_llm(query)
 
-                     await self.peers.respond(msg, {"response": result})
-             await asyncio.sleep(0.1)
+         # Just return the data - framework handles the response!
+         return {"response": result}
 
      async def execute_task(self, task: dict) -> dict:
          """Required by base class."""
@@ -429,13 +505,16 @@ class AssistantAgent(CustomAgent):
      """
      Assistant agent - coordinates with other specialists.
 
-     This agent:
+     NEW PATTERN (v0.3.0+):
      1. Has its own LLM for reasoning
-     2. Has peer tools (ask_peer, broadcast) in its toolset
-     3. LLM AUTONOMOUSLY decides when to ask other agents
+     2. Uses get_cognitive_context() to discover available peers
+     3. Peer tools (ask_peer, broadcast) added to LLM toolset
+     4. LLM AUTONOMOUSLY decides when to ask other agents
+     5. Uses on_peer_request handler instead of run() loop
      """
      role = "assistant"
      capabilities = ["chat", "coordination", "search"]
+     description = "General assistant that delegates specialized tasks to experts"
 
      async def setup(self):
          await super().setup()
@@ -535,16 +614,16 @@ Be concise in your responses."""
 
          return response.get("content", "")
 
-     async def run(self):
-         """Main loop - listen for incoming requests."""
-         while not self.shutdown_requested:
-             if self.peers:
-                 msg = await self.peers.receive(timeout=0.5)
-                 if msg and msg.is_request:
-                     query = msg.data.get("query", "")
-                     result = await self.chat(query)
-                     await self.peers.respond(msg, {"response": result})
-             await asyncio.sleep(0.1)
+     async def on_peer_request(self, msg):
+         """
+         Handle incoming requests from other agents.
+
+         NEW: Handler-based - called automatically on request
+         OLD: Manual while loop with receive() polling
+         """
+         query = msg.data.get("query", "")
+         result = await self.chat(query)
+         return {"response": result}
 
      async def execute_task(self, task: dict) -> dict:
          """Required by base class."""
@@ -552,13 +631,14 @@ Be concise in your responses."""
  ```
 
  ```python
- # main.py
+ # main.py - UPDATED FOR v0.3.0+ (Handler-Based Pattern)
  import asyncio
  from jarviscore import Mesh
  from agents import AnalystAgent, AssistantAgent
 
 
  async def main():
+     """Simple P2P mesh without web server."""
      mesh = Mesh(
          mode="p2p",
          config={
@@ -567,17 +647,15 @@ async def main():
          }
      )
 
-     # Add both agents
+     # Add both agents - they'll use handlers automatically
      mesh.add(AnalystAgent)
      assistant = mesh.add(AssistantAgent)
 
      await mesh.start()
 
-     # Start analyst listening in background
-     analyst = mesh.get_agent("analyst")
-     analyst_task = asyncio.create_task(analyst.run())
-
-     # Give time for setup
+     # NO MORE MANUAL run() TASKS! Handlers are automatic.
+
+     # Give time for mesh to stabilize
      await asyncio.sleep(0.5)
 
      # User asks a question - LLM will autonomously decide to use ask_peer
@@ -590,8 +668,6 @@ async def main():
      # Output: [{'tool': 'ask_peer', 'args': {'role': 'analyst', 'question': '...'}}]
 
      # Cleanup
-     analyst.request_shutdown()
-     analyst_task.cancel()
      await mesh.stop()
 
 
@@ -599,6 +675,59 @@ if __name__ == "__main__":
      asyncio.run(main())
  ```
 
+ **Or better yet, use FastAPI + JarvisLifespan:**
+
+ ```python
+ # main.py - PRODUCTION PATTERN (FastAPI + JarvisLifespan)
+ from fastapi import FastAPI, Request
+ from fastapi.responses import JSONResponse
+ from jarviscore.integrations import JarvisLifespan
+ from agents import AnalystAgent, AssistantAgent
+ import uvicorn
+
+
+ # ✅ ONE-LINE MESH SETUP with JarvisLifespan!
+ app = FastAPI(lifespan=JarvisLifespan([AnalystAgent, AssistantAgent]))
+
+
+ @app.post("/chat")
+ async def chat(request: Request):
+     """Chat endpoint - assistant may autonomously delegate to analyst."""
+     data = await request.json()
+     message = data.get("message", "")
+
+     # Get assistant from mesh (JarvisLifespan manages it)
+     assistant = app.state.mesh.get_agent("assistant")
+
+     # Chat - LLM autonomously discovers and delegates if needed
+     response = await assistant.chat(message)
+
+     return JSONResponse(response)
+
+
+ @app.get("/agents")
+ async def list_agents():
+     """Show what each agent sees (cognitive context)."""
+     mesh = app.state.mesh
+     agents_info = {}
+
+     for agent in mesh.agents:
+         if agent.peers:
+             context = agent.peers.get_cognitive_context(format="markdown")
+             agents_info[agent.role] = {
+                 "role": agent.role,
+                 "capabilities": agent.capabilities,
+                 "peers_visible": len(agent.peers.get_all_peers()),
+                 "cognitive_context": context[:200] + "..."
+             }
+
+     return JSONResponse(agents_info)
+
+
+ if __name__ == "__main__":
+     uvicorn.run(app, host="0.0.0.0", port=8000)
+ ```
+
  ### Key Concepts for P2P Mode
 
  #### Adding Peer Tools to Your LLM
@@ -666,46 +795,16 @@ async def run(self):
 
  ---
 
- ## ListenerAgent (v0.3.0)
-
- **ListenerAgent** is for developers who want P2P communication without writing the `run()` loop themselves.
+ ## P2P Message Handlers
 
- ### The Problem with CustomAgent for P2P
-
- Every P2P CustomAgent needs this boilerplate:
-
- ```python
- # BEFORE (CustomAgent) - You write the same loop every time
- class MyAgent(CustomAgent):
-     role = "processor"
-     capabilities = ["processing"]
-
-     async def run(self):
-         """You have to write this loop for every P2P agent."""
-         while not self.shutdown_requested:
-             if self.peers:
-                 msg = await self.peers.receive(timeout=0.5)
-                 if msg and msg.is_request:
-                     # Handle request
-                     result = self.process(msg.data)
-                     await self.peers.respond(msg, {"response": result})
-                 elif msg and msg.is_notify:
-                     # Handle notification
-                     self.handle_notify(msg.data)
-             await asyncio.sleep(0.1)
-
-     async def execute_task(self, task):
-         """Still required even though you're using run()."""
-         return {"status": "success"}
- ```
+ CustomAgent includes built-in handlers for P2P communication - just implement the handlers you need.
 
- ### The Solution: ListenerAgent
+ ### Handler-Based P2P (Recommended)
 
  ```python
- # AFTER (ListenerAgent) - Just implement the handlers
- from jarviscore.profiles import ListenerAgent
+ from jarviscore.profiles import CustomAgent
 
- class MyAgent(ListenerAgent):
+ class MyAgent(CustomAgent):
      role = "processor"
      capabilities = ["processing"]
 
@@ -718,27 +817,25 @@ class MyAgent(ListenerAgent):
          print(f"Notification received: {msg.data}")
  ```
 
- **What you no longer need:**
- - ❌ `run()` loop with `while not self.shutdown_requested`
- - ❌ `self.peers.receive()` and `self.peers.respond()` boilerplate
- - ❌ `execute_task()` stub method
- - ❌ `asyncio.sleep()` timing
-
  **What the framework handles:**
- - Message receiving loop
- - Routing requests to `on_peer_request()`
- - Routing notifications to `on_peer_notify()`
- - Automatic response sending
- - Shutdown handling
+ - Message receiving loop (`run()` is built-in)
+ - Routing requests to `on_peer_request()`
+ - Routing notifications to `on_peer_notify()`
+ - Automatic response sending (configurable with `auto_respond`)
+ - Shutdown handling
+
+ **Configuration:**
+ - `listen_timeout` (float): Seconds to wait for messages (default: 1.0)
+ - `auto_respond` (bool): Auto-send `on_peer_request()` return value (default: True)
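These two settings can be pictured with a small, self-contained sketch of what a built-in receive loop does with them (illustrative Python only, not jarviscore's implementation; the queue and list stand in for the peer transport and `peers.respond()`): `listen_timeout` bounds each wait for a message, and `auto_respond` decides whether the handler's return value is sent back automatically.

```python
import asyncio

class MiniListener:
    """Toy model of a built-in P2P receive loop (not jarviscore internals)."""
    listen_timeout = 1.0   # seconds to wait per receive attempt
    auto_respond = True    # auto-send on_peer_request() return value

    def __init__(self):
        self.inbox = asyncio.Queue()   # stand-in for the peer transport
        self.sent = []                 # stand-in for peers.respond(...)
        self.shutdown_requested = False

    async def on_peer_request(self, msg):
        return {"echo": msg["data"]}

    async def run(self):
        # The framework-provided loop: bounded wait, dispatch, auto-respond
        while not self.shutdown_requested:
            try:
                msg = await asyncio.wait_for(self.inbox.get(), self.listen_timeout)
            except asyncio.TimeoutError:
                continue  # nothing arrived within listen_timeout; re-check shutdown
            result = await self.on_peer_request(msg)
            if self.auto_respond and result is not None:
                self.sent.append(result)

async def demo():
    agent = MiniListener()
    agent.listen_timeout = 0.05
    loop_task = asyncio.create_task(agent.run())
    await agent.inbox.put({"data": 42})
    await asyncio.sleep(0.1)
    agent.shutdown_requested = True
    await loop_task
    return agent.sent

print(asyncio.run(demo()))  # [{'echo': 42}]
```

With `auto_respond` turned off in this sketch, the handler's return value is simply dropped, which is the case where an agent would call respond() itself.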
 
- ### Complete ListenerAgent Example
+ ### Complete P2P Example
 
  ```python
  # agents.py
- from jarviscore.profiles import ListenerAgent
+ from jarviscore.profiles import CustomAgent
 
 
- class AnalystAgent(ListenerAgent):
+ class AnalystAgent(CustomAgent):
      """A data analyst that responds to peer requests."""
 
      role = "analyst"
@@ -778,7 +875,7 @@ class AnalystAgent(ListenerAgent):
          print(f"[{self.role}] Received notification: {msg.data}")
 
 
- class AssistantAgent(ListenerAgent):
+ class AssistantAgent(CustomAgent):
      """An assistant that coordinates with specialists."""
 
      role = "assistant"
@@ -828,18 +925,17 @@ if __name__ == "__main__":
      asyncio.run(main())
  ```
 
- ### When to Use ListenerAgent vs CustomAgent
+ ### When to Use Handlers vs Custom run()
 
- | Use ListenerAgent when... | Use CustomAgent when... |
- |---------------------------|-------------------------|
- | You want the simplest P2P agent | You need custom message loop timing |
- | Request/response pattern fits your use case | You need to initiate messages proactively |
- | You're integrating with FastAPI | You need fine-grained control over the loop |
- | You want less boilerplate | You have complex coordination logic |
+ | Use handlers (`on_peer_request`) when... | Override `run()` when... |
+ |------------------------------------------|--------------------------|
+ | Request/response pattern fits your use case | You need custom message loop timing |
+ | You're integrating with FastAPI | You need to initiate messages proactively |
+ | You want minimal boilerplate | You have complex coordination logic |
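The right-hand column's "initiate messages proactively" case is the main reason to still override `run()`. A standalone sketch of that shape (pure Python; the `broadcasts` list is a stand-in for `self.peers.broadcast(...)`, which is assumed here, not shown):

```python
import asyncio

class ProactiveAgent:
    """Agent that initiates messages on a timer instead of only reacting."""
    def __init__(self, interval: float):
        self.interval = interval
        self.shutdown_requested = False
        self.broadcasts = []  # stand-in for self.peers.broadcast(...)

    async def run(self):
        # Custom loop: the agent speaks on its own schedule; no inbound message needed
        beat = 0
        while not self.shutdown_requested:
            beat += 1
            self.broadcasts.append({"heartbeat": beat})
            await asyncio.sleep(self.interval)

async def demo():
    agent = ProactiveAgent(interval=0.01)
    task = asyncio.create_task(agent.run())
    await asyncio.sleep(0.05)
    agent.shutdown_requested = True
    await task
    return agent.broadcasts

print(asyncio.run(demo())[0])  # {'heartbeat': 1}
```

A hybrid is also possible: keep the built-in handlers for inbound requests and run a proactive loop like this as a separate background task.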
 
- ### ListenerAgent with FastAPI
+ ### CustomAgent with FastAPI
 
- ListenerAgent shines with FastAPI integration. See [FastAPI Integration](#fastapi-integration-v030) below.
+ CustomAgent works seamlessly with FastAPI. See [FastAPI Integration](#fastapi-integration-v030) below.
 
  ---
 
@@ -1446,11 +1542,11 @@ async def process(data: dict):
  ```python
  # AFTER: 3 lines to integrate
  from fastapi import FastAPI
- from jarviscore.profiles import ListenerAgent
+ from jarviscore.profiles import CustomAgent
  from jarviscore.integrations.fastapi import JarvisLifespan
 
 
- class ProcessorAgent(ListenerAgent):
+ class ProcessorAgent(CustomAgent):
      role = "processor"
      capabilities = ["processing"]
 
@@ -1492,7 +1588,7 @@ app = FastAPI(
  # app.py
  from fastapi import FastAPI, HTTPException
  from pydantic import BaseModel
- from jarviscore.profiles import ListenerAgent
+ from jarviscore.profiles import CustomAgent
  from jarviscore.integrations.fastapi import JarvisLifespan
 
@@ -1500,7 +1596,7 @@ class AnalysisRequest(BaseModel):
      data: str
 
 
- class AnalystAgent(ListenerAgent):
+ class AnalystAgent(CustomAgent):
      """Agent that handles both API requests and P2P messages."""
 
      role = "analyst"
@@ -1634,11 +1730,11 @@ await mesh.start()
  # Each agent can join any mesh independently
 
  # agent_container.py (runs in Docker/K8s)
- from jarviscore.profiles import ListenerAgent
+ from jarviscore.profiles import CustomAgent
  import os
 
 
- class WorkerAgent(ListenerAgent):
+ class WorkerAgent(CustomAgent):
      role = "worker"
      capabilities = ["processing"]
 
@@ -1685,10 +1781,10 @@ CMD ["python", "agent.py"]
  # agent.py
  import asyncio
  import os
- from jarviscore.profiles import ListenerAgent
+ from jarviscore.profiles import CustomAgent
 
 
- class WorkerAgent(ListenerAgent):
+ class WorkerAgent(CustomAgent):
      role = "worker"
      capabilities = ["processing"]
 
@@ -1835,6 +1931,346 @@ if agent.peers:
 
  ---
 
+ ## Session Context Propagation (v0.3.2)
+
+ Pass metadata (mission IDs, trace IDs, priorities) through message flows:
+
+ ### Sending Context
+
+ ```python
+ # All messaging methods accept context parameter
+ await self.peers.notify("logger", {"event": "started"},
+                         context={"mission_id": "m-123", "trace_id": "t-abc"})
+
+ response = await self.peers.request("analyst", {"query": "..."},
+                                     context={"priority": "high", "user_id": "u-456"})
+
+ await self.peers.broadcast({"alert": "ready"},
+                            context={"source": "coordinator"})
+ ```
+
+ ### Receiving Context
+
+ ```python
+ async def on_peer_request(self, msg):
+     # Context is available on the message
+     mission_id = msg.context.get("mission_id") if msg.context else None
+     trace_id = msg.context.get("trace_id") if msg.context else None
+
+     self._logger.info(f"Request for mission {mission_id}, trace {trace_id}")
+
+     return {"result": "processed"}
+ ```
+
+ ### Auto-Propagation in respond()
+
+ Context automatically propagates from request to response:
+
+ ```python
+ async def on_peer_request(self, msg):
+     # msg.context = {"mission_id": "m-123", "trace_id": "t-abc"}
+     result = await self.process(msg.data)
+
+     # Context auto-propagates - original sender receives same context
+     await self.peers.respond(msg, {"result": result})
+
+     # Override if needed
+     await self.peers.respond(msg, {"result": result},
+                              context={"status": "completed", "mission_id": msg.context.get("mission_id")})
+ ```
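The propagation rule above boils down to "use the request's context unless an explicit override is given", which can be modeled in a few standalone lines (the `Message` class and `build_response` helper here are invented for the sketch, not jarviscore types):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    data: dict
    context: Optional[dict] = None

def build_response(request: Message, data: dict,
                   context: Optional[dict] = None) -> Message:
    """Model of respond()'s auto-propagation: fall back to the request's context."""
    return Message(data=data,
                   context=context if context is not None else request.context)

req = Message({"query": "sales"},
              context={"mission_id": "m-123", "trace_id": "t-abc"})

auto = build_response(req, {"result": "ok"})
print(auto.context)        # {'mission_id': 'm-123', 'trace_id': 't-abc'}

overridden = build_response(req, {"result": "ok"}, context={"status": "completed"})
print(overridden.context)  # {'status': 'completed'}
```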
+
+ ---
+
+ ## Async Request Pattern (v0.3.2)
+
+ Fire multiple requests without blocking, collect responses later:
+
+ ### Fire-and-Collect Pattern
+
+ ```python
+ async def parallel_analysis(self, data_chunks):
+     # Fire off requests to all available analysts
+     analysts = self.peers.discover(role="analyst")
+     request_ids = []
+
+     for i, (analyst, chunk) in enumerate(zip(analysts, data_chunks)):
+         req_id = await self.peers.ask_async(
+             analyst.agent_id,
+             {"chunk_id": i, "data": chunk},
+             context={"batch_id": "batch-001"}
+         )
+         request_ids.append((req_id, analyst.agent_id))
+
+     # Do other work while analysts process
+     await self.update_status("processing")
+
+     # Collect results
+     results = []
+     for req_id, analyst_id in request_ids:
+         response = await self.peers.check_inbox(req_id, timeout=30)
+         if response:
+             results.append(response)
+         else:
+             self._logger.warning(f"Timeout waiting for {analyst_id}")
+
+     return results
+ ```
+
+ ### API Methods
+
+ ```python
+ # Fire async request - returns immediately with request_id
+ req_id = await self.peers.ask_async(target, message, timeout=120, context=None)
+
+ # Check for response
+ response = await self.peers.check_inbox(req_id, timeout=0)   # Non-blocking
+ response = await self.peers.check_inbox(req_id, timeout=10)  # Wait up to 10s
+
+ # Manage pending requests
+ pending = self.peers.get_pending_async_requests()
+ self.peers.clear_inbox(req_id)  # Clear specific
+ self.peers.clear_inbox()        # Clear all
+ ```
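A fire-and-collect API of this shape can be modeled with futures keyed by request id (a standalone sketch under assumed semantics, not jarviscore's code): firing registers a future and returns its id immediately; collecting waits on that future with a timeout.

```python
import asyncio
import itertools

class ToyInbox:
    """Toy model of ask_async / check_inbox semantics (not jarviscore internals)."""
    def __init__(self):
        self._pending = {}             # request_id -> Future holding the response
        self._ids = itertools.count(1)

    def ask_async(self, coro) -> int:
        req_id = next(self._ids)
        self._pending[req_id] = asyncio.ensure_future(coro)
        return req_id                  # returns immediately; work runs in background

    async def check_inbox(self, req_id, timeout=0):
        fut = self._pending[req_id]
        if timeout == 0:               # non-blocking poll
            return fut.result() if fut.done() else None
        try:
            return await asyncio.wait_for(asyncio.shield(fut), timeout)
        except asyncio.TimeoutError:
            return None

async def slow_peer(x):
    await asyncio.sleep(0.01)          # pretend remote work
    return {"answer": x * 2}

async def main():
    inbox = ToyInbox()
    ids = [inbox.ask_async(slow_peer(x)) for x in (1, 2, 3)]     # fire, no blocking
    return [await inbox.check_inbox(i, timeout=1) for i in ids]  # collect later

print(asyncio.run(main()))  # [{'answer': 2}, {'answer': 4}, {'answer': 6}]
```

`asyncio.shield` keeps a timed-out `check_inbox` from cancelling the underlying work, so the same request id can be polled again later, mirroring the retry-friendly behavior the guide describes.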
2034
+
2035
+ ---
2036
+
2037
+ ## Load Balancing Strategies (v0.3.2)
+
+ Distribute work across multiple peers:
+
+ ### Discovery Strategies
+
+ ```python
+ # Default: first in discovery order (deterministic)
+ workers = self.peers.discover(role="worker", strategy="first")
+
+ # Random: shuffle for basic distribution
+ workers = self.peers.discover(role="worker", strategy="random")
+
+ # Round-robin: rotate through workers on each call
+ workers = self.peers.discover(role="worker", strategy="round_robin")
+
+ # Least-recent: prefer workers not used recently
+ workers = self.peers.discover(role="worker", strategy="least_recent")
+ ```
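
As a mental model for the two stateful strategies, here is an illustrative pure-Python sketch (the `RoundRobin` and `LeastRecent` classes below are hypothetical names for this guide, not framework classes; the framework tracks this state internally):

```python
import time

class RoundRobin:
    """Rotate through peers on each call (illustrative sketch)."""
    def __init__(self, peers):
        self.peers = list(peers)
        self.index = 0

    def next(self):
        peer = self.peers[self.index % len(self.peers)]
        self.index += 1
        return peer

class LeastRecent:
    """Prefer the peer used longest ago, or never used (illustrative sketch)."""
    def __init__(self, peers):
        # 0.0 means "never used", so fresh peers are picked first
        self.last_used = {peer: 0.0 for peer in peers}

    def next(self):
        peer = min(self.last_used, key=self.last_used.get)
        self.last_used[peer] = time.monotonic()
        return peer
```

With `round_robin` the framework advances this rotation for you on each `discover()` call; with `least_recent`, calling `record_peer_usage()` plays the role of the timestamp update shown above.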
+
+ ### discover_one() Convenience
+
+ ```python
+ # Get a single peer with the strategy applied
+ worker = self.peers.discover_one(role="worker", strategy="round_robin")
+ if worker:
+     response = await self.peers.request(worker.agent_id, {"task": "..."})
+ ```
+
+ ### Tracking Usage for least_recent
+
+ ```python
+ # Track usage to influence least_recent ordering
+ worker = self.peers.discover_one(role="worker", strategy="least_recent")
+ response = await self.peers.request(worker.agent_id, {"task": "..."})
+ self.peers.record_peer_usage(worker.agent_id)  # Update the last-used timestamp
+ ```
+
+ ### Example: Load-Balanced Task Distribution
+
+ ```python
+ class Coordinator(CustomAgent):
+     role = "coordinator"
+     capabilities = ["coordination"]
+
+     async def distribute_work(self, tasks):
+         results = []
+         for task in tasks:
+             # Round-robin automatically rotates through workers
+             worker = self.peers.discover_one(
+                 capability="processing",
+                 strategy="round_robin"
+             )
+             if worker:
+                 response = await self.peers.request(
+                     worker.agent_id,
+                     {"task": task}
+                 )
+                 results.append(response)
+         return results
+ ```
+
+ ---
+
+ ## Mesh Diagnostics (v0.3.2)
+
+ Monitor mesh health for debugging and operations:
+
+ ### Getting Diagnostics
+
+ ```python
+ # From the mesh
+ diag = mesh.get_diagnostics()
+
+ # Structure:
+ # {
+ #     "local_node": {
+ #         "mode": "p2p",
+ #         "started": True,
+ #         "agent_count": 3,
+ #         "bind_address": "127.0.0.1:7950"
+ #     },
+ #     "known_peers": [
+ #         {"role": "analyst", "node_id": "10.0.0.2:7950", "status": "alive"}
+ #     ],
+ #     "local_agents": [
+ #         {"role": "coordinator", "agent_id": "...", "capabilities": [...]}
+ #     ],
+ #     "connectivity_status": "healthy"
+ # }
+ ```
+
+ ### Connectivity Status
+
+ | Status | Meaning |
+ |--------|---------|
+ | `healthy` | P2P active, peers connected |
+ | `isolated` | P2P active, no peers found |
+ | `degraded` | Some connectivity issues |
+ | `not_started` | Mesh not started yet |
+ | `local_only` | Autonomous mode (no P2P) |
+
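When a monitoring job polls `get_diagnostics()`, it can map these statuses onto an alerting decision. A minimal sketch (the `should_alert` helper and its policy are illustrations built on the table above, not a framework API):

```python
# Statuses from the table treated here as normal or expected transient states.
OK_STATUSES = {"healthy", "local_only", "not_started"}

def should_alert(connectivity_status: str) -> bool:
    """Alert on degraded, isolated, or unrecognized statuses."""
    return connectivity_status not in OK_STATUSES
```

Whether `isolated` or `not_started` should page anyone depends on your deployment; adjust the sets accordingly.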
+ ### FastAPI Health Endpoint
+
+ ```python
+ from fastapi import FastAPI, Request
+ from jarviscore.integrations.fastapi import JarvisLifespan
+
+ app = FastAPI(lifespan=JarvisLifespan(agent, mode="p2p"))
+
+ @app.get("/health")
+ async def health(request: Request):
+     mesh = request.app.state.jarvis_mesh
+     diag = mesh.get_diagnostics()
+     return {
+         "status": diag["connectivity_status"],
+         "agents": diag["local_node"]["agent_count"],
+         "peers": len(diag.get("known_peers", []))
+     }
+ ```
+
+ ---
+
+ ## Testing with MockMesh (v0.3.2)
+
+ Unit test agents without real P2P infrastructure:
+
+ ### Basic Test Setup
+
+ ```python
+ import pytest
+ from jarviscore.testing import MockMesh, MockPeerClient
+ from jarviscore.profiles import CustomAgent
+ from jarviscore.p2p.messages import MessageType
+
+ class AnalystAgent(CustomAgent):
+     role = "analyst"
+     capabilities = ["analysis"]
+
+     async def on_peer_request(self, msg):
+         return {"analysis": f"Analyzed: {msg.data.get('query')}"}
+
+ @pytest.mark.asyncio
+ async def test_analyst_responds():
+     mesh = MockMesh()
+     mesh.add(AnalystAgent)
+     await mesh.start()
+
+     analyst = mesh.get_agent("analyst")
+
+     # Inject a test message
+     analyst.peers.inject_message(
+         sender="tester",
+         message_type=MessageType.REQUEST,
+         data={"query": "test data"},
+         correlation_id="test-123"
+     )
+
+     # Receive and verify
+     msg = await analyst.peers.receive(timeout=1)
+     assert msg is not None
+     assert msg.data["query"] == "test data"
+
+     await mesh.stop()
+ ```
+
+ ### Mocking Peer Responses
+
+ ```python
+ @pytest.mark.asyncio
+ async def test_coordinator_delegates():
+     class CoordinatorAgent(CustomAgent):
+         role = "coordinator"
+         capabilities = ["coordination"]
+
+         async def on_peer_request(self, msg):
+             # This agent delegates to the analyst
+             analysis = await self.peers.request("analyst", {"data": msg.data})
+             return {"coordinated": True, "analysis": analysis}
+
+     mesh = MockMesh()
+     mesh.add(CoordinatorAgent)
+     await mesh.start()
+
+     coordinator = mesh.get_agent("coordinator")
+
+     # Mock the analyst response
+     coordinator.peers.add_mock_peer("analyst", capabilities=["analysis"])
+     coordinator.peers.set_mock_response("analyst", {"result": "mocked analysis"})
+
+     # Test the flow
+     response = await coordinator.peers.request("analyst", {"test": "data"})
+
+     assert response["result"] == "mocked analysis"
+     coordinator.peers.assert_requested("analyst")
+
+     await mesh.stop()
+ ```
+
+ ### Assertion Helpers
+
+ ```python
+ # Verify notifications were sent
+ agent.peers.assert_notified("target_role")
+ agent.peers.assert_notified("target", message_contains={"event": "completed"})
+
+ # Verify requests were sent
+ agent.peers.assert_requested("analyst")
+ agent.peers.assert_requested("analyst", message_contains={"query": "test"})
+
+ # Verify broadcasts
+ agent.peers.assert_broadcasted()
+ agent.peers.assert_broadcasted(message_contains={"alert": "important"})
+
+ # Access sent messages for custom assertions
+ notifications = agent.peers.get_sent_notifications()
+ requests = agent.peers.get_sent_requests()
+ broadcasts = agent.peers.get_sent_broadcasts()
+
+ # Reset between tests
+ agent.peers.reset()
+ ```
+
+
+ ### Custom Response Handler
+
+ ```python
+ async def dynamic_handler(target, message, context):
+     """Return different responses based on message content."""
+     if "urgent" in message.get("query", ""):
+         return {"priority": "high", "result": "fast response"}
+     return {"priority": "normal", "result": "standard response"}
+
+ agent.peers.set_request_handler(dynamic_handler)
+ ```
+
+ ---
+
  ## API Reference
 
  ### CustomAgent Class Attributes
@@ -1855,20 +2291,22 @@ if agent.peers:
 | `leave_mesh()` | Both | **(v0.3.0)** Gracefully leave the mesh |
 | `serve_forever()` | Both | **(v0.3.0)** Block until shutdown signal |
 
- ### ListenerAgent Class (v0.3.0)
+ ### P2P Message Handlers (v0.3.1)
 
- ListenerAgent extends CustomAgent with handler-based P2P communication.
+ CustomAgent includes built-in P2P message handlers for handler-based communication.
 
 | Attribute/Method | Type | Description |
 |------------------|------|-------------|
- | `role` | `str` | Required. Unique identifier for this agent type |
- | `capabilities` | `list[str]` | Required. List of capabilities for discovery |
- | `on_peer_request(msg)` | async method | Handle incoming requests. Return dict to respond |
+ | `listen_timeout` | `float` | Seconds to wait for messages in the `run()` loop. Default: 1.0 |
+ | `auto_respond` | `bool` | Auto-send the `on_peer_request` return value. Default: True |
+ | `on_peer_request(msg)` | async method | Handle incoming requests. Return value is sent as the response |
 | `on_peer_notify(msg)` | async method | Handle broadcast notifications. No return needed |
+ | `on_error(error, msg)` | async method | Handle errors raised during message processing |
+ | `run()` | async method | Built-in listener loop that dispatches to handlers |
 
- **Note:** ListenerAgent does not require `run()` or `execute_task()` implementations.
+ **Note:** Override `on_peer_request()` and `on_peer_notify()` for your business logic. The `run()` method handles message dispatch automatically.
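
The dispatch behavior summarized above can be pictured as a plain asyncio loop. The following is a simplified model for intuition only (`FakeMessage`, `MiniListener`, and `demo` are hypothetical names for this sketch; the real loop lives inside `CustomAgent.run()`):

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class FakeMessage:
    """Stand-in for a peer message; not the framework's message class."""
    type: str                      # "request" or "notify"
    data: dict
    replies: asyncio.Queue = field(default_factory=asyncio.Queue)

class MiniListener:
    """Simplified model of the built-in run() dispatch loop."""
    listen_timeout = 0.05   # seconds to wait for a message per loop tick
    auto_respond = True     # send on_peer_request's return value back

    def __init__(self):
        self.inbox = asyncio.Queue()
        self.running = True

    async def on_peer_request(self, msg):
        return {"echo": msg.data}

    async def on_peer_notify(self, msg):
        pass

    async def on_error(self, error, msg):
        pass

    async def run(self):
        while self.running:
            try:
                msg = await asyncio.wait_for(self.inbox.get(), self.listen_timeout)
            except asyncio.TimeoutError:
                continue  # no message this tick; re-check self.running
            try:
                if msg.type == "request":
                    result = await self.on_peer_request(msg)
                    if self.auto_respond and result is not None:
                        await msg.replies.put(result)
                else:
                    await self.on_peer_notify(msg)
            except Exception as exc:
                await self.on_error(exc, msg)

async def demo():
    agent = MiniListener()
    loop_task = asyncio.create_task(agent.run())
    msg = FakeMessage("request", {"q": 1})
    await agent.inbox.put(msg)
    response = await asyncio.wait_for(msg.replies.get(), timeout=1)
    agent.running = False
    await loop_task
    return response
```

The key points mirrored here: `listen_timeout` bounds each wait so the loop can notice shutdown, `auto_respond` controls whether the handler's return value is sent back, and handler exceptions route to `on_error()`.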
 
- ### Why `execute_task()` is Required in P2P Mode
+ ### Why `execute_task()` Exists in CustomAgent
 
 You may notice that P2P agents must implement `execute_task()` even though they primarily use `run()`. Here's why:
 
@@ -2172,10 +2610,10 @@ For complete, runnable examples, see:
 
 - `examples/customagent_p2p_example.py` - P2P mode with LLM-driven peer communication
 - `examples/customagent_distributed_example.py` - Distributed mode with workflows
- - `examples/listeneragent_cognitive_discovery_example.py` - ListenerAgent + cognitive discovery (v0.3.0)
+ - `examples/customagent_cognitive_discovery_example.py` - CustomAgent + cognitive discovery (v0.3.0)
 - `examples/fastapi_integration_example.py` - FastAPI + JarvisLifespan (v0.3.0)
 - `examples/cloud_deployment_example.py` - Self-registration with join_mesh (v0.3.0)
 
 ---
 
- *CustomAgent Guide - JarvisCore Framework v0.3.0*
+ *CustomAgent Guide - JarvisCore Framework v0.3.2*