jarviscore-framework 0.2.1__py3-none-any.whl → 0.3.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (37)
  1. examples/cloud_deployment_example.py +162 -0
  2. examples/customagent_cognitive_discovery_example.py +343 -0
  3. examples/fastapi_integration_example.py +570 -0
  4. jarviscore/__init__.py +19 -5
  5. jarviscore/cli/smoketest.py +8 -4
  6. jarviscore/core/agent.py +227 -0
  7. jarviscore/core/mesh.py +9 -0
  8. jarviscore/data/examples/cloud_deployment_example.py +162 -0
  9. jarviscore/data/examples/custom_profile_decorator.py +134 -0
  10. jarviscore/data/examples/custom_profile_wrap.py +168 -0
  11. jarviscore/data/examples/customagent_cognitive_discovery_example.py +343 -0
  12. jarviscore/data/examples/fastapi_integration_example.py +570 -0
  13. jarviscore/docs/API_REFERENCE.md +283 -3
  14. jarviscore/docs/CHANGELOG.md +139 -0
  15. jarviscore/docs/CONFIGURATION.md +1 -1
  16. jarviscore/docs/CUSTOMAGENT_GUIDE.md +997 -85
  17. jarviscore/docs/GETTING_STARTED.md +228 -267
  18. jarviscore/docs/TROUBLESHOOTING.md +1 -1
  19. jarviscore/docs/USER_GUIDE.md +153 -8
  20. jarviscore/integrations/__init__.py +16 -0
  21. jarviscore/integrations/fastapi.py +247 -0
  22. jarviscore/p2p/broadcaster.py +10 -3
  23. jarviscore/p2p/coordinator.py +310 -14
  24. jarviscore/p2p/keepalive.py +45 -23
  25. jarviscore/p2p/peer_client.py +311 -12
  26. jarviscore/p2p/swim_manager.py +9 -4
  27. jarviscore/profiles/__init__.py +7 -1
  28. jarviscore/profiles/customagent.py +295 -74
  29. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.1.dist-info}/METADATA +66 -18
  30. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.1.dist-info}/RECORD +37 -22
  31. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.1.dist-info}/WHEEL +1 -1
  32. tests/test_13_dx_improvements.py +554 -0
  33. tests/test_14_cloud_deployment.py +403 -0
  34. tests/test_15_llm_cognitive_discovery.py +684 -0
  35. tests/test_16_unified_dx_flow.py +947 -0
  36. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.1.dist-info}/licenses/LICENSE +0 -0
  37. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.1.dist-info}/top_level.txt +0 -0
@@ -11,12 +11,16 @@ CustomAgent lets you integrate your **existing agent code** with JarvisCore's ne
 
  1. [Prerequisites](#prerequisites)
  2. [Choose Your Mode](#choose-your-mode)
- 3. [P2P Mode](#p2p-mode)
- 4. [Distributed Mode](#distributed-mode)
- 5. [API Reference](#api-reference)
- 6. [Multi-Node Deployment](#multi-node-deployment)
- 7. [Error Handling](#error-handling)
- 8. [Troubleshooting](#troubleshooting)
+ 3. [P2P Mode](#p2p-mode) - Handler-based peer communication
+ 4. [Distributed Mode](#distributed-mode) - Workflow tasks + P2P
+ 5. [Cognitive Discovery (v0.3.0)](#cognitive-discovery-v030) - Dynamic peer awareness for LLMs
+ 6. [FastAPI Integration (v0.3.0)](#fastapi-integration-v030) - 3-line setup with JarvisLifespan
+ 7. [Framework Integration Patterns](#framework-integration-patterns) - aiohttp, Flask, Django
+ 8. [Cloud Deployment (v0.3.0)](#cloud-deployment-v030) - Self-registration for containers
+ 9. [API Reference](#api-reference)
+ 10. [Multi-Node Deployment](#multi-node-deployment)
+ 11. [Error Handling](#error-handling)
+ 12. [Troubleshooting](#troubleshooting)
 
  ---
 
@@ -98,13 +102,15 @@ class MyLLMClient:
 
  ### Quick Comparison
 
- | Feature | P2P Mode | Distributed Mode |
- |---------|----------|------------------|
- | **Primary method** | `run()` - continuous loop | `execute_task()` - on-demand |
- | **Communication** | Direct peer messaging | Workflow orchestration |
- | **Best for** | Chatbots, real-time agents | Pipelines, batch processing |
- | **Coordination** | Agents self-coordinate | Framework coordinates |
- | **Supports workflows** | No | Yes |
+ | Feature | P2P Mode (custom `run()`) | P2P Mode (handlers) | Distributed Mode |
+ |---------|---------------------------|---------------------|------------------|
+ | **Primary method** | `run()` - continuous loop | `on_peer_request()` handlers | `execute_task()` - on-demand |
+ | **Communication** | Direct peer messaging | Handler-based (no loop) | Workflow orchestration |
+ | **Best for** | Custom message loops | API-first agents, FastAPI | Pipelines, batch processing |
+ | **Coordination** | Agents self-coordinate | Framework handles loop | Framework coordinates |
+ | **Supports workflows** | No | No | Yes |
+
+ > **CustomAgent** includes built-in P2P handlers - just implement `on_peer_request()` and `on_peer_notify()`. No need to write your own `run()` loop.
 
  ---
 
@@ -112,6 +118,46 @@ class MyLLMClient:
 
  P2P mode is for agents that run continuously and communicate directly with each other.
 
+ ### v0.3.1 Update: Handler-Based Pattern
+
+ **We've simplified P2P agents!** No more manual `run()` loops.
+
+ ```
+ ┌──────────────────────────────────────────────────────────────┐
+ │                      OLD vs NEW Pattern                      │
+ ├──────────────────────────────────────────────────────────────┤
+ │                                                              │
+ │ ❌ OLD (v0.2.x) - Manual Loop                                │
+ │ ┌────────────────────────────────────────────┐               │
+ │ │ async def run(self):                       │               │
+ │ │     while not self.shutdown_requested:     │               │
+ │ │         msg = await self.peers.receive()   │ ← Polling     │
+ │ │         if msg and msg.is_request:         │               │
+ │ │             result = self.process(msg)     │               │
+ │ │             await self.peers.respond(...)  │ ← Manual      │
+ │ │         await asyncio.sleep(0.1)           │               │
+ │ └────────────────────────────────────────────┘               │
+ │                                                              │
+ │ ✅ NEW (v0.3.0+) - Handler-Based                             │
+ │ ┌────────────────────────────────────────────┐               │
+ │ │ async def on_peer_request(self, msg):      │               │
+ │ │     result = self.process(msg)             │               │
+ │ │     return result                          │ ← Simple!     │
+ │ └────────────────────────────────────────────┘               │
+ │                        ▲                                     │
+ │                        │                                     │
+ │                        └─ Framework calls this automatically │
+ │                                                              │
+ └──────────────────────────────────────────────────────────────┘
+ ```
+
+ **Benefits:**
+ - ✅ **Less Code**: No boilerplate loops
+ - ✅ **Simpler**: Just return your result
+ - ✅ **Automatic**: Framework handles message dispatch
+ - ✅ **Error Handling**: Built-in exception capture
+ - ✅ **FastAPI Ready**: Works with `JarvisLifespan` out of the box
+
  ### Migration Overview
 
  ```
@@ -157,59 +203,89 @@ if __name__ == "__main__":
 
  ### Step 3: Modify Your Agent Code → `agents.py`
 
- Convert your existing class to inherit from `CustomAgent`:
+ **🚨 IMPORTANT CHANGE (v0.3.0+)**: We've moved from `run()` loops to **handler-based** agents!
 
+ #### ❌ OLD Pattern (Deprecated)
  ```python
- # agents.py (MODIFIED VERSION OF YOUR CODE)
- import asyncio
+ # DON'T DO THIS ANYMORE!
+ class ResearcherAgent(CustomAgent):
+     async def run(self):  # ❌ Manual loop
+         while not self.shutdown_requested:
+             msg = await self.peers.receive(timeout=0.5)
+             if msg and msg.is_request:
+                 result = self.llm.chat(f"Research: {msg.data['question']}")
+                 await self.peers.respond(msg, {"response": result})
+             await asyncio.sleep(0.1)
+ ```
+ **Problems**: Manual loops, boilerplate, error-prone
+
+ #### ✅ NEW Pattern (Recommended)
+ ```python
+ # agents.py (MODERN VERSION)
  from jarviscore.profiles import CustomAgent
 
 
  class ResearcherAgent(CustomAgent):
-     """Your agent, now framework-integrated."""
+     """Your agent, now framework-integrated with handlers."""
 
-     # NEW: Required class attributes for discovery
+     # Required class attributes for discovery
      role = "researcher"
      capabilities = ["research", "analysis"]
+     description = "Research specialist that gathers and synthesizes information"
 
      async def setup(self):
-         """NEW: Called once on startup. Move your __init__ logic here."""
+         """Called once on startup. Initialize your LLM here."""
          await super().setup()
         self.llm = MyLLMClient()  # Your existing initialization
 
-     async def run(self):
-         """NEW: Main loop - replaces your if __name__ == '__main__' block."""
-         while not self.shutdown_requested:
-             if self.peers:
-                 msg = await self.peers.receive(timeout=0.5)
-                 if msg and msg.is_request:
-                     query = msg.data.get("question", "")
-                     # YOUR EXISTING LOGIC:
-                     result = self.llm.chat(f"Research: {query}")
-                     await self.peers.respond(msg, {"response": result})
-             await asyncio.sleep(0.1)
+     async def on_peer_request(self, msg):
+         """
+         Handle incoming requests from other agents.
+
+         This is called AUTOMATICALLY when another agent asks you a question.
+         No loops, no polling, no boilerplate!
+         """
+         query = msg.data.get("question", "")
+
+         # YOUR EXISTING LOGIC:
+         result = self.llm.chat(f"Research: {query}")
+
+         # Just return the data - framework handles the response
+         return {"response": result}
 
      async def execute_task(self, task: dict) -> dict:
          """
-         Required by base Agent class (@abstractmethod).
-
-         In P2P mode, your main logic lives in run(), not here.
-         This must exist because Python requires all abstract methods
-         to be implemented, or you get TypeError on instantiation.
+         Required by base Agent class for workflow mode.
+
+         In pure P2P mode, your logic is in on_peer_request().
+         This is used when the agent is part of a workflow pipeline.
          """
-         return {"status": "success", "note": "This agent uses run() for P2P mode"}
+         return {"status": "success", "note": "This agent uses handlers for P2P mode"}
  ```
 
  **What changed:**
 
- | Before | After |
- |--------|-------|
- | `class MyResearcher:` | `class ResearcherAgent(CustomAgent):` |
- | `def __init__(self):` | `async def setup(self):` + `await super().setup()` |
- | `if __name__ == "__main__":` | `async def run(self):` loop |
- | Direct method calls | Peer message handling |
+ | Before (v0.2.x) | After (v0.3.0+) | Why? |
+ |-----------------|-----------------|------|
+ | `async def run(self):` with `while` loop | `async def on_peer_request(self, msg):` handler | Automatic dispatch, less boilerplate |
+ | Manual `await self.peers.receive()` | Framework calls your handler | No polling needed |
+ | Manual `await self.peers.respond(msg, data)` | Just `return data` | Simpler error handling |
+ | `asyncio.create_task(agent.run())` | Not needed - handlers run automatically | Cleaner lifecycle |
+
+ #### Migration Checklist (v0.2.x → v0.3.0+)
 
- > **Note**: This is a minimal example. For the full pattern with **LLM-driven peer communication** (where your LLM autonomously decides when to call other agents), see the [Complete Example](#complete-example-llm-driven-peer-communication) below.
+ If you have existing agents using the `run()` loop pattern:
+
+ - [ ] Replace `async def run(self):` with `async def on_peer_request(self, msg):`
+ - [ ] Remove the `while not self.shutdown_requested:` loop
+ - [ ] Remove `msg = await self.peers.receive(timeout=0.5)` polling
+ - [ ] Change `await self.peers.respond(msg, data)` to `return data`
+ - [ ] Remove manual `asyncio.create_task(agent.run())` calls in main.py
+ - [ ] Consider using `JarvisLifespan` for FastAPI integration (see Step 4)
+ - [ ] Add a `description` class attribute for better cognitive discovery
+ - [ ] Use `get_cognitive_context()` instead of hardcoded peer lists
+
+ > **Note**: The `run()` method is **still supported** for backward compatibility, but handlers are now the recommended approach. For the full pattern with **LLM-driven peer communication** (where your LLM autonomously decides when to call other agents), see the [Complete Example](#complete-example-llm-driven-peer-communication) below.
 
  ### Step 4: Create New Entry Point → `main.py`
 
@@ -310,22 +386,22 @@ This is the **key pattern** for P2P mode. Your LLM gets peer tools added to its
  **The key insight**: You add peer tools to your LLM's toolset. The LLM decides when to use them.
 
  ```python
- # agents.py
- import asyncio
+ # agents.py - UPDATED FOR v0.3.0+
  from jarviscore.profiles import CustomAgent
 
 
  class AnalystAgent(CustomAgent):
      """
-     Analyst agent - specialists in data analysis.
+     Analyst agent - specialist in data analysis.
 
-     This agent:
-     1. Listens for incoming requests from peers
-     2. Processes requests using its own LLM
-     3. Responds with analysis results
+     NEW PATTERN (v0.3.0+):
+     - Uses the on_peer_request() handler instead of a run() loop
+     - Automatically receives and responds to peer requests
+     - No manual message polling needed!
      """
      role = "analyst"
      capabilities = ["analysis", "data_interpretation", "reporting"]
+     description = "Expert data analyst for statistics and insights"
 
      async def setup(self):
          await super().setup()
@@ -400,19 +476,20 @@ Analyze data thoroughly and provide insights."""
 
          return response.get("content", "Analysis complete.")
 
-     async def run(self):
-         """Listen for incoming requests from peers."""
-         while not self.shutdown_requested:
-             if self.peers:
-                 msg = await self.peers.receive(timeout=0.5)
-                 if msg and msg.is_request:
-                     query = msg.data.get("question", msg.data.get("query", ""))
+     async def on_peer_request(self, msg):
+         """
+         Handle incoming requests from peers.
+
+         NEW: This is called automatically when another agent sends a request.
+         OLD: Manual while loop with receive() polling
+         """
+         query = msg.data.get("question", msg.data.get("query", ""))
 
-                     # Process with LLM
-                     result = await self.process_with_llm(query)
+         # Process with LLM
+         result = await self.process_with_llm(query)
 
-                     await self.peers.respond(msg, {"response": result})
-             await asyncio.sleep(0.1)
+         # Just return the data - framework handles the response!
+         return {"response": result}
 
      async def execute_task(self, task: dict) -> dict:
          """Required by base class."""
@@ -423,13 +500,16 @@ class AssistantAgent(CustomAgent):
      """
      Assistant agent - coordinates with other specialists.
 
-     This agent:
+     NEW PATTERN (v0.3.0+):
      1. Has its own LLM for reasoning
-     2. Has peer tools (ask_peer, broadcast) in its toolset
-     3. LLM AUTONOMOUSLY decides when to ask other agents
+     2. Uses get_cognitive_context() to discover available peers
+     3. Peer tools (ask_peer, broadcast) added to LLM toolset
+     4. LLM AUTONOMOUSLY decides when to ask other agents
+     5. Uses on_peer_request handler instead of run() loop
      """
      role = "assistant"
      capabilities = ["chat", "coordination", "search"]
+     description = "General assistant that delegates specialized tasks to experts"
 
      async def setup(self):
          await super().setup()
@@ -529,16 +609,16 @@ Be concise in your responses."""
 
          return response.get("content", "")
 
-     async def run(self):
-         """Main loop - listen for incoming requests."""
-         while not self.shutdown_requested:
-             if self.peers:
-                 msg = await self.peers.receive(timeout=0.5)
-                 if msg and msg.is_request:
-                     query = msg.data.get("query", "")
-                     result = await self.chat(query)
-                     await self.peers.respond(msg, {"response": result})
-             await asyncio.sleep(0.1)
+     async def on_peer_request(self, msg):
+         """
+         Handle incoming requests from other agents.
+
+         NEW: Handler-based - called automatically on request
+         OLD: Manual while loop with receive() polling
+         """
+         query = msg.data.get("query", "")
+         result = await self.chat(query)
+         return {"response": result}
 
      async def execute_task(self, task: dict) -> dict:
          """Required by base class."""
@@ -546,13 +626,14 @@ Be concise in your responses."""
  ```
 
  ```python
- # main.py
+ # main.py - UPDATED FOR v0.3.0+ (Handler-Based Pattern)
  import asyncio
  from jarviscore import Mesh
  from agents import AnalystAgent, AssistantAgent
 
 
  async def main():
+     """Simple P2P mesh without web server."""
      mesh = Mesh(
          mode="p2p",
          config={
@@ -561,17 +642,15 @@ async def main():
          }
      )
 
-     # Add both agents
+     # Add both agents - they'll use handlers automatically
      mesh.add(AnalystAgent)
      assistant = mesh.add(AssistantAgent)
 
      await mesh.start()
 
-     # Start analyst listening in background
-     analyst = mesh.get_agent("analyst")
-     analyst_task = asyncio.create_task(analyst.run())
-
-     # Give time for setup
+     # NO MORE MANUAL run() TASKS! Handlers are automatic.
+
+     # Give time for mesh to stabilize
      await asyncio.sleep(0.5)
 
      # User asks a question - LLM will autonomously decide to use ask_peer
@@ -584,8 +663,6 @@ async def main():
      # Output: [{'tool': 'ask_peer', 'args': {'role': 'analyst', 'question': '...'}}]
 
      # Cleanup
-     analyst.request_shutdown()
-     analyst_task.cancel()
      await mesh.stop()
 
 
@@ -593,6 +670,59 @@ if __name__ == "__main__":
      asyncio.run(main())
  ```
 
+ **Or better yet, use FastAPI + JarvisLifespan:**
+
+ ```python
+ # main.py - PRODUCTION PATTERN (FastAPI + JarvisLifespan)
+ from fastapi import FastAPI, Request
+ from fastapi.responses import JSONResponse
+ from jarviscore.integrations import JarvisLifespan
+ from agents import AnalystAgent, AssistantAgent
+ import uvicorn
+
+
+ # ✅ ONE-LINE MESH SETUP with JarvisLifespan!
+ app = FastAPI(lifespan=JarvisLifespan([AnalystAgent, AssistantAgent]))
+
+
+ @app.post("/chat")
+ async def chat(request: Request):
+     """Chat endpoint - assistant may autonomously delegate to analyst."""
+     data = await request.json()
+     message = data.get("message", "")
+
+     # Get assistant from mesh (JarvisLifespan manages it)
+     assistant = app.state.mesh.get_agent("assistant")
+
+     # Chat - LLM autonomously discovers and delegates if needed
+     response = await assistant.chat(message)
+
+     return JSONResponse(response)
+
+
+ @app.get("/agents")
+ async def list_agents():
+     """Show what each agent sees (cognitive context)."""
+     mesh = app.state.mesh
+     agents_info = {}
+
+     for agent in mesh.agents:
+         if agent.peers:
+             context = agent.peers.get_cognitive_context(format="markdown")
+             agents_info[agent.role] = {
+                 "role": agent.role,
+                 "capabilities": agent.capabilities,
+                 "peers_visible": len(agent.peers.get_all_peers()),
+                 "cognitive_context": context[:200] + "..."
+             }
+
+     return JSONResponse(agents_info)
+
+
+ if __name__ == "__main__":
+     uvicorn.run(app, host="0.0.0.0", port=8000)
+ ```
+
  ### Key Concepts for P2P Mode
 
  #### Adding Peer Tools to Your LLM
@@ -660,6 +790,150 @@ async def run(self):
 
  ---
 
+ ## P2P Message Handlers
+
+ CustomAgent includes built-in handlers for P2P communication - just implement the handlers you need.
+
+ ### Handler-Based P2P (Recommended)
+
+ ```python
+ from jarviscore.profiles import CustomAgent
+
+ class MyAgent(CustomAgent):
+     role = "processor"
+     capabilities = ["processing"]
+
+     async def on_peer_request(self, msg):
+         """Called when another agent sends a request."""
+         return {"result": msg.data.get("task", "").upper()}
+
+     async def on_peer_notify(self, msg):
+         """Called when another agent broadcasts a notification."""
+         print(f"Notification received: {msg.data}")
+ ```
+
+ **What the framework handles:**
+ - Message receiving loop (`run()` is built-in)
+ - Routing requests to `on_peer_request()`
+ - Routing notifications to `on_peer_notify()`
+ - Automatic response sending (configurable with `auto_respond`)
+ - Shutdown handling
+
+ **Configuration:**
+ - `listen_timeout` (float): Seconds to wait for messages (default: 1.0)
+ - `auto_respond` (bool): Auto-send `on_peer_request()` return value (default: True)
+
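+ A minimal sketch of tuning these knobs on a subclass - assuming, per the API Reference table later in this guide, that `listen_timeout` and `auto_respond` are plain class attributes (the agent itself is illustrative):
+
+ ```python
+ from jarviscore.profiles import CustomAgent
+
+ class BatchAgent(CustomAgent):
+     role = "batcher"
+     capabilities = ["batching"]
+
+     listen_timeout = 5.0   # poll less often: wait up to 5s per receive cycle
+     auto_respond = False   # we send (or skip) the response ourselves
+
+     async def on_peer_request(self, msg):
+         # With auto_respond=False the framework does not send our return
+         # value back automatically, so respond explicitly.
+         await self.peers.respond(msg, {"status": "queued"})
+ ```
+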
+ ### Complete P2P Example
+
+ ```python
+ # agents.py
+ from jarviscore.profiles import CustomAgent
+
+
+ class AnalystAgent(CustomAgent):
+     """A data analyst that responds to peer requests."""
+
+     role = "analyst"
+     capabilities = ["analysis", "data_interpretation"]
+
+     async def setup(self):
+         await super().setup()
+         self.llm = MyLLMClient()  # Your LLM client
+
+     async def on_peer_request(self, msg):
+         """
+         Handle incoming requests from other agents.
+
+         Args:
+             msg: IncomingMessage with msg.data, msg.sender_role, etc.
+
+         Returns:
+             dict: Response sent back to the requesting agent
+         """
+         query = msg.data.get("question", "")
+
+         # Your analysis logic
+         result = self.llm.chat(f"Analyze: {query}")
+
+         return {"response": result, "status": "success"}
+
+     async def on_peer_notify(self, msg):
+         """
+         Handle broadcast notifications.
+
+         Args:
+             msg: IncomingMessage with notification data
+
+         Returns:
+             None (notifications don't expect responses)
+         """
+         print(f"[{self.role}] Received notification: {msg.data}")
+
+
+ class AssistantAgent(CustomAgent):
+     """An assistant that coordinates with specialists."""
+
+     role = "assistant"
+     capabilities = ["chat", "coordination"]
+
+     async def setup(self):
+         await super().setup()
+         self.llm = MyLLMClient()
+
+     async def on_peer_request(self, msg):
+         """Handle incoming chat requests."""
+         query = msg.data.get("query", "")
+
+         # Use peer tools to ask specialists
+         if self.peers and "data" in query.lower():
+             # Ask the analyst for help
+             analyst_response = await self.peers.as_tool().execute(
+                 "ask_peer",
+                 {"role": "analyst", "question": query}
+             )
+             return {"response": analyst_response.get("response", "")}
+
+         # Handle directly
+         return {"response": self.llm.chat(query)}
+ ```
+
+ ```python
+ # main.py
+ import asyncio
+ from jarviscore import Mesh
+ from agents import AnalystAgent, AssistantAgent
+
+
+ async def main():
+     mesh = Mesh(mode="p2p", config={"bind_port": 7950})
+
+     mesh.add(AnalystAgent)
+     mesh.add(AssistantAgent)
+
+     await mesh.start()
+
+     # Agents automatically run their listeners
+     await mesh.run_forever()
+
+
+ if __name__ == "__main__":
+     asyncio.run(main())
+ ```
+
+ ### When to Use Handlers vs Custom run()
+
+ | Use handlers (`on_peer_request`) when... | Override `run()` when... |
+ |------------------------------------------|--------------------------|
+ | Request/response pattern fits your use case | You need custom message loop timing |
+ | You're integrating with FastAPI | You need to initiate messages proactively (see the sketch below) |
+ | You want minimal boilerplate | You have complex coordination logic |
+
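+ For the right-hand column, a hedged sketch of a proactive `run()` override - the heartbeat payload and interval are invented for illustration; the broadcast call uses the peer-tool interface documented in the API Reference:
+
+ ```python
+ import asyncio
+ from jarviscore.profiles import CustomAgent
+
+ class HeartbeatAgent(CustomAgent):
+     role = "monitor"
+     capabilities = ["monitoring"]
+
+     async def run(self):
+         # Proactive loop: initiates messages instead of waiting for requests.
+         while not self.shutdown_requested:
+             if self.peers:
+                 await self.peers.as_tool().execute(
+                     "broadcast",
+                     {"message": "monitor heartbeat"}  # illustrative payload
+                 )
+             await asyncio.sleep(30)  # illustrative interval
+ ```
+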
+ ### CustomAgent with FastAPI
+
+ CustomAgent works seamlessly with FastAPI. See [FastAPI Integration](#fastapi-integration-v030) below.
+
+ ---
+
  ## Distributed Mode
 
  Distributed mode is for task pipelines where the framework orchestrates execution order and passes data between steps.
@@ -1066,6 +1340,592 @@ results = await mesh.workflow("parallel-example", [
 
  ---
 
+ ## Cognitive Discovery (v0.3.0)
+
+ **Cognitive Discovery** lets your LLM dynamically learn about available peers instead of hardcoding agent names in prompts.
+
+ ### The Problem: Hardcoded Peer Names
+
+ Before v0.3.0, you had to hardcode peer information in your system prompts:
+
+ ```python
+ # BEFORE: Hardcoded peer names - breaks when peers change
+ system_prompt = """You are a helpful assistant.
+
+ You have access to:
+ - ask_peer: Ask specialist agents for help
+   - Use role="analyst" for data analysis
+   - Use role="researcher" for research tasks
+   - Use role="writer" for content creation
+
+ When a user needs data analysis, USE ask_peer with role="analyst"."""
+ ```
+
+ **Problems:**
+ - If you add a new agent, you must update every prompt
+ - If an agent is offline, the LLM still tries to call it
+ - Prompts become stale as your system evolves
+ - Difficult to manage across many agents
+
+ ### The Solution: `get_cognitive_context()`
+
+ ```python
+ # AFTER: Dynamic peer awareness - always up to date
+ async def get_system_prompt(self) -> str:
+     base_prompt = """You are a helpful assistant.
+
+ You have access to peer tools for collaborating with other agents."""
+
+     # Generate LLM-ready peer descriptions dynamically
+     if self.peers:
+         peer_context = self.peers.get_cognitive_context()
+         return f"{base_prompt}\n\n{peer_context}"
+
+     return base_prompt
+ ```
+
+ The `get_cognitive_context()` method generates text like:
+
+ ```
+ Available Peers:
+ - analyst (capabilities: analysis, data_interpretation)
+   Use ask_peer with role="analyst" for data analysis tasks
+ - researcher (capabilities: research, web_search)
+   Use ask_peer with role="researcher" for research tasks
+ ```
+
+ ### Complete Example: Dynamic Peer Discovery
+
+ ```python
+ # agents.py
+ from jarviscore.profiles import CustomAgent
+
+
+ class AssistantAgent(CustomAgent):
+     """An assistant that dynamically discovers and uses peers."""
+
+     role = "assistant"
+     capabilities = ["chat", "coordination"]
+
+     async def setup(self):
+         await super().setup()
+         self.llm = MyLLMClient()
+
+     def get_system_prompt(self) -> str:
+         """Build system prompt with dynamic peer context."""
+         base_prompt = """You are a helpful AI assistant.
+
+ When users ask questions that require specialized knowledge:
+ 1. Check what peers are available
+ 2. Use ask_peer to get help from the right specialist
+ 3. Synthesize their response for the user"""
+
+         # DYNAMIC: Add current peer information
+         if self.peers:
+             peer_context = self.peers.get_cognitive_context()
+             return f"{base_prompt}\n\n{peer_context}"
+
+         return base_prompt
+
+     def get_tools(self) -> list:
+         """Get tools including peer tools."""
+         tools = [
+             # Your local tools...
+         ]
+
+         if self.peers:
+             tools.extend(self.peers.as_tool().schema)
+
+         return tools
+
+     async def chat(self, user_message: str) -> str:
+         """Chat with dynamic peer awareness."""
+         # System prompt now includes current peer info
+         system = self.get_system_prompt()
+         tools = self.get_tools()
+
+         response = self.llm.chat(
+             messages=[{"role": "user", "content": user_message}],
+             tools=tools,
+             system=system
+         )
+
+         # Handle tool use...
+         return response.get("content", "")
+ ```
+
+ ### Benefits of Cognitive Discovery
+
+ | Before (Hardcoded) | After (Dynamic) |
+ |--------------------|-----------------|
+ | Update prompts manually when peers change | Prompts auto-update |
+ | LLM tries to call offline agents | Only shows available agents |
+ | Difficult to manage at scale | Scales automatically |
+ | Stale documentation in prompts | Always current |
+
+ ---
+
+ ## FastAPI Integration (v0.3.0)
+
+ **JarvisLifespan** reduces FastAPI integration from ~100 lines to 3 lines.
+
+ ### The Problem: Manual Lifecycle Management
+
+ Before v0.3.0, integrating an agent with FastAPI required manual lifecycle management:
+
+ ```python
+ # BEFORE: ~100 lines of boilerplate
+ from contextlib import asynccontextmanager
+ from fastapi import FastAPI
+ from jarviscore import Mesh
+ from jarviscore.profiles import CustomAgent
+ import asyncio
+
+
+ class MyAgent(CustomAgent):
+     role = "processor"
+     capabilities = ["processing"]
+
+     async def run(self):
+         while not self.shutdown_requested:
+             if self.peers:
+                 msg = await self.peers.receive(timeout=0.5)
+                 if msg and msg.is_request:
+                     result = self.process(msg.data)
+                     await self.peers.respond(msg, {"response": result})
+             await asyncio.sleep(0.1)
+
+     async def execute_task(self, task):
+         return {"status": "success"}
+
+
+ # Manual lifecycle management
+ mesh = None
+ agent = None
+ run_task = None
+
+
+ @asynccontextmanager
+ async def lifespan(app: FastAPI):
+     global mesh, agent, run_task
+
+     # Startup
+     mesh = Mesh(mode="p2p", config={"bind_port": 7950})
+     agent = mesh.add(MyAgent)
+     await mesh.start()
+     run_task = asyncio.create_task(agent.run())
+
+     yield
+
+     # Shutdown
+     agent.request_shutdown()
+     run_task.cancel()
+     await mesh.stop()
+
+
+ app = FastAPI(lifespan=lifespan)
+
+
+ @app.post("/process")
+ async def process(data: dict):
+     # Your endpoint logic
+     return {"result": "processed"}
+ ```
+
+ ### The Solution: JarvisLifespan
+
+ ```python
+ # AFTER: 3 lines to integrate
+ from fastapi import FastAPI
+ from jarviscore.profiles import CustomAgent
+ from jarviscore.integrations.fastapi import JarvisLifespan
+
+
+ class ProcessorAgent(CustomAgent):
+     role = "processor"
+     capabilities = ["processing"]
+
+     async def on_peer_request(self, msg):
+         return {"result": msg.data.get("task", "").upper()}
+
+
+ # That's it - 3 lines!
+ app = FastAPI(lifespan=JarvisLifespan(ProcessorAgent(), mode="p2p"))
+
+
+ @app.post("/process")
+ async def process(data: dict):
+     return {"result": "processed"}
+ ```
+
+ ### JarvisLifespan Configuration
+
+ ```python
+ from jarviscore.integrations.fastapi import JarvisLifespan
+
+ # Basic usage
+ app = FastAPI(lifespan=JarvisLifespan(agent, mode="p2p"))
+
+ # With configuration
+ app = FastAPI(
+     lifespan=JarvisLifespan(
+         agent,
+         mode="p2p",            # or "distributed"
+         bind_port=7950,        # P2P port
+         seed_nodes="ip:port",  # For multi-node
+     )
+ )
+ ```
+
+ ### Complete FastAPI Example
+
+ ```python
+ # app.py
+ from fastapi import FastAPI, HTTPException
+ from pydantic import BaseModel
+ from jarviscore.profiles import CustomAgent
+ from jarviscore.integrations.fastapi import JarvisLifespan
+
+
+ class AnalysisRequest(BaseModel):
+     data: str
+
+
+ class AnalystAgent(CustomAgent):
+     """Agent that handles both API requests and P2P messages."""
+
+     role = "analyst"
+     capabilities = ["analysis"]
+
+     async def setup(self):
+         await super().setup()
+         self.llm = MyLLMClient()
+
+     async def on_peer_request(self, msg):
+         """Handle requests from other agents in the mesh."""
+         query = msg.data.get("question", "")
+         result = self.llm.chat(f"Analyze: {query}")
+         return {"response": result}
+
+     def analyze(self, data: str) -> dict:
+         """Method called by API endpoint."""
+         result = self.llm.chat(f"Analyze this data: {data}")
+         return {"analysis": result}
+
+
+ # Create agent instance
+ analyst = AnalystAgent()
+
+ # Create FastAPI app with automatic lifecycle management
+ app = FastAPI(
+     title="Analyst Service",
+     lifespan=JarvisLifespan(analyst, mode="p2p", bind_port=7950)
+ )
+
+
+ @app.post("/analyze")
+ async def analyze(request: AnalysisRequest):
+     """API endpoint - also accessible as a peer in the mesh."""
+     result = analyst.analyze(request.data)
+     return result
+
+
+ @app.get("/peers")
+ async def list_peers():
+     """See what other agents are in the mesh."""
+     if analyst.peers:
+         return {"peers": analyst.peers.list()}
+     return {"peers": []}
+ ```
+
+ Run with:
+ ```bash
+ uvicorn app:app --host 0.0.0.0 --port 8000
+ ```
+
+ Your agent is now:
+ - Serving an HTTP API on port 8000
+ - Participating in the P2P mesh on port 7950
+ - Discoverable by other agents
+ - Managed automatically through the app lifecycle
+
+ ### Testing the Flow
+
+ **Step 1: Start the FastAPI server (Terminal 1)**
+ ```bash
+ python examples/fastapi_integration_example.py
+ ```
+
+ **Step 2: Join a scout agent (Terminal 2)**
+ ```bash
+ python examples/fastapi_integration_example.py --join-as scout
+ ```
+
+ **Step 3: Test with curl (Terminal 3)**
+ ```bash
+ # Chat with assistant (may delegate to analyst)
+ curl -X POST http://localhost:8000/chat -H "Content-Type: application/json" -d '{"message": "Analyze Q4 sales trends"}'
+
+ # Ask analyst directly
+ curl -X POST http://localhost:8000/ask/analyst -H "Content-Type: application/json" -d '{"message": "What are key revenue metrics?"}'
+
+ # See what each agent knows about peers (cognitive context)
+ curl http://localhost:8000/agents
+ ```
+
+ **Expected flow for `/chat`:**
+ 1. Request goes to the **assistant** agent
+ 2. Assistant's LLM sees peers via `get_cognitive_context()`
+ 3. LLM decides to delegate to the **analyst** (data analysis request)
+ 4. Assistant uses the `ask_peer` tool → P2P message to analyst
+ 5. Analyst processes and responds via P2P
+ 6. Response includes `"delegated_to": "analyst"` and `"peer_data"`
+
+ **Example response:**
+ ```json
+ {
+   "message": "Analyze Q4 sales trends",
+   "response": "Based on the analyst's findings...",
+   "delegated_to": "analyst",
+   "peer_data": {"analysis": "...", "confidence": 0.9}
+ }
+ ```
+
+ ---
+
+ ## Cloud Deployment (v0.3.0)
+
+ **Self-registration** lets agents join existing meshes without a central orchestrator - perfect for Docker, Kubernetes, and auto-scaling.
+
+ ### The Problem: Central Orchestrator Required
+
+ Before v0.3.0, all agents had to be registered with a central Mesh:
+
+ ```python
+ # BEFORE: Central orchestrator pattern
+ # You needed one "main" node that registered all agents
+
+ # main_node.py (central orchestrator)
+ mesh = Mesh(mode="distributed", config={"bind_port": 7950})
+ mesh.add(ResearcherAgent)  # Must be on this node
+ mesh.add(WriterAgent)      # Must be on this node
+ await mesh.start()
+ ```
+
+ **Problems with this approach:**
+ - Single point of failure
+ - Can't easily scale agent instances
+ - Doesn't work well with Kubernetes/Docker
+ - All agents must be on the same node or manually configured
+
+ ### The Solution: `join_mesh()` and `leave_mesh()`
+
+ ```python
+ # AFTER: Self-registering agents
+ # Each agent can join any mesh independently
+
+ # agent_container.py (runs in Docker/K8s)
+ from jarviscore.profiles import CustomAgent
+ import os
+
+
+ class WorkerAgent(CustomAgent):
+     role = "worker"
+     capabilities = ["processing"]
+
+     async def on_peer_request(self, msg):
+         return {"result": "processed"}
+
+
+ async def main():
+     agent = WorkerAgent()
+     await agent.setup()
+
+     # Join existing mesh via environment variable
+     seed_nodes = os.environ.get("JARVISCORE_SEED_NODES", "mesh-service:7950")
+     await agent.join_mesh(seed_nodes=seed_nodes)
+
+     # Agent is now part of the mesh, discoverable by others
+     await agent.serve_forever()
+
+     # Clean shutdown
+     await agent.leave_mesh()
+ ```
+
+ ### Environment Variables for Cloud
+
+ | Variable | Description | Example |
+ |----------|-------------|---------|
+ | `JARVISCORE_SEED_NODES` | Comma-separated list of mesh nodes | `"10.0.0.1:7950,10.0.0.2:7950"` |
+ | `JARVISCORE_MESH_ENDPOINT` | This agent's reachable address | `"worker-pod-abc:7950"` |
+ | `JARVISCORE_BIND_PORT` | Port to listen on | `"7950"` |
+
+ ### Docker Deployment Example
+
+ ```dockerfile
+ # Dockerfile
+ FROM python:3.11-slim
+ WORKDIR /app
+ COPY requirements.txt .
+ RUN pip install -r requirements.txt
+ COPY . .
+ CMD ["python", "agent.py"]
+ ```
+
+ ```python
+ # agent.py
+ import asyncio
+ import os
+ from jarviscore.profiles import CustomAgent
+
+
+ class WorkerAgent(CustomAgent):
+     role = "worker"
+     capabilities = ["processing"]
+
+     async def on_peer_request(self, msg):
+         task = msg.data.get("task", "")
+         return {"result": f"Processed: {task}"}
+
+
+ async def main():
+     agent = WorkerAgent()
+     await agent.setup()
+
+     # Configuration from environment
+     seed_nodes = os.environ.get("JARVISCORE_SEED_NODES")
+     mesh_endpoint = os.environ.get("JARVISCORE_MESH_ENDPOINT")
+
+     if seed_nodes:
+         await agent.join_mesh(
+             seed_nodes=seed_nodes,
+             advertise_endpoint=mesh_endpoint
+         )
+         print(f"Joined mesh via {seed_nodes}")
+     else:
+         print("Running standalone (no JARVISCORE_SEED_NODES)")
+
+     await agent.serve_forever()
+
+
+ if __name__ == "__main__":
+     asyncio.run(main())
+ ```
+
+ ```yaml
+ # docker-compose.yml
+ version: '3.8'
+ services:
+   mesh-seed:
+     build: .
+     environment:
+       - JARVISCORE_BIND_PORT=7950
+     ports:
+       - "7950:7950"
+
+   worker-1:
+     build: .
+     environment:
+       - JARVISCORE_SEED_NODES=mesh-seed:7950
+       - JARVISCORE_MESH_ENDPOINT=worker-1:7950
+     depends_on:
+       - mesh-seed
+
+   worker-2:
+     build: .
+     environment:
+       - JARVISCORE_SEED_NODES=mesh-seed:7950
+       - JARVISCORE_MESH_ENDPOINT=worker-2:7950
+     depends_on:
+       - mesh-seed
+ ```
+
+ ### Kubernetes Deployment Example
+
+ ```yaml
+ # k8s-deployment.yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: jarvis-worker
+ spec:
+   replicas: 3  # Scale as needed
+   selector:
+     matchLabels:
+       app: jarvis-worker
+   template:
+     metadata:
+       labels:
+         app: jarvis-worker
+     spec:
+       containers:
+       - name: worker
+         image: myregistry/jarvis-worker:latest
+         env:
+         - name: JARVISCORE_SEED_NODES
+           value: "jarvis-mesh-service:7950"
+         - name: JARVISCORE_MESH_ENDPOINT
+           valueFrom:
+             fieldRef:
+               fieldPath: status.podIP
+         ports:
+         - containerPort: 7950
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+   name: jarvis-mesh-service
+ spec:
+   selector:
+     app: jarvis-mesh-seed
+   ports:
+   - port: 7950
+     targetPort: 7950
+ ```
+
+ ### How Self-Registration Works
+
+ ```
+ ┌─────────────────────────────────────────────────────────────┐
+ │                    SELF-REGISTRATION FLOW                   │
+ ├─────────────────────────────────────────────────────────────┤
+ │                                                             │
+ │ 1. New container starts                                     │
+ │        │                                                    │
+ │        ▼                                                    │
+ │ 2. agent.join_mesh(seed_nodes="mesh:7950")                  │
+ │        │                                                    │
+ │        ▼                                                    │
+ │ 3. Agent connects to seed node                              │
+ │        │                                                    │
+ │        ▼                                                    │
+ │ 4. SWIM protocol discovers all peers                        │
+ │        │                                                    │
+ │        ▼                                                    │
+ │ 5. Agent registers its role/capabilities                    │
+ │        │                                                    │
+ │        ▼                                                    │
+ │ 6. Other agents can now discover and call this agent        │
+ │                                                             │
+ └─────────────────────────────────────────────────────────────┘
+ ```
+
+ ### RemoteAgentProxy (Automatic)
+
+ When agents join from different nodes, the framework automatically creates `RemoteAgentProxy` objects. You don't need to do anything special - the mesh handles it:
+
+ ```python
+ # On any node, you can discover and call remote agents
+ if agent.peers:
+     # This works whether the peer is local or remote
+     response = await agent.peers.as_tool().execute(
+         "ask_peer",
+         {"role": "worker", "question": "Process this data"}
+     )
+ ```
+
+ ---
+
  ## API Reference
 
  ### CustomAgent Class Attributes
@@ -1082,8 +1942,26 @@ results = await mesh.workflow("parallel-example", [
  | `setup()` | Both | Called once on startup. Initialize resources here. Always call `await super().setup()` |
  | `run()` | P2P | Main loop for continuous operation. Required for P2P mode |
  | `execute_task(task)` | Distributed | Handle a workflow task. Required for Distributed mode |
+ | `join_mesh(seed_nodes, ...)` | Both | **(v0.3.0)** Self-register with an existing mesh |
+ | `leave_mesh()` | Both | **(v0.3.0)** Gracefully leave the mesh |
+ | `serve_forever()` | Both | **(v0.3.0)** Block until shutdown signal |
+
+ ### P2P Message Handlers (v0.3.1)
+
+ CustomAgent includes built-in P2P message handlers for handler-based communication.
 
- ### Why `execute_task()` is Required in P2P Mode
+ | Attribute/Method | Type | Description |
+ |------------------|------|-------------|
+ | `listen_timeout` | `float` | Seconds to wait for messages in `run()` loop. Default: 1.0 |
+ | `auto_respond` | `bool` | Auto-send `on_peer_request` return value. Default: True |
+ | `on_peer_request(msg)` | async method | Handle incoming requests. Return value sent as response |
+ | `on_peer_notify(msg)` | async method | Handle broadcast notifications. No return needed |
+ | `on_error(error, msg)` | async method | Handle errors during message processing |
+ | `run()` | async method | Built-in listener loop that dispatches to handlers |
+
+ **Note:** Override `on_peer_request()` and `on_peer_notify()` for your business logic. The `run()` method handles the message dispatch automatically.
+
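+ The `on_error(error, msg)` hook above is not demonstrated elsewhere in this guide. A hedged sketch of an override - it assumes the built-in `run()` loop passes the raised exception plus the message being processed; the logging and the stand-in `risky_operation` are illustrative:
+
+ ```python
+ import logging
+ from jarviscore.profiles import CustomAgent
+
+ logger = logging.getLogger(__name__)
+
+
+ def risky_operation(data: dict) -> str:
+     # Stand-in for real work that can raise.
+     raise ValueError(f"cannot process {data!r}")
+
+
+ class RobustAgent(CustomAgent):
+     role = "worker"
+     capabilities = ["processing"]
+
+     async def on_peer_request(self, msg):
+         return {"result": risky_operation(msg.data)}  # may raise
+
+     async def on_error(self, error, msg):
+         # Assumed contract: called by the built-in run() loop when a
+         # handler raises, with the exception and the offending message.
+         logger.warning("Handler failed for %s: %s", msg.sender_role, error)
+ ```
+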
+ ### Why `execute_task()` Exists in CustomAgent
 
  You may notice that P2P agents must implement `execute_task()` even though they primarily use `run()`. Here's why:
 
@@ -1128,6 +2006,33 @@ Access via `self.peers.as_tool().execute(tool_name, params)`:
  | `broadcast` | `{"message": str}` | Send a message to all connected peers |
  | `list_peers` | `{}` | Get list of available peers and their capabilities |
 
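+ A short sketch tying these tools together - the role, message text, and return-value handling are illustrative, but each call follows the `self.peers.as_tool().execute(tool_name, params)` convention shown above:
+
+ ```python
+ from jarviscore.profiles import CustomAgent
+
+ class CoordinatorAgent(CustomAgent):
+     role = "coordinator"
+     capabilities = ["coordination"]
+
+     async def on_peer_request(self, msg):
+         results = {}
+         if self.peers:
+             tool = self.peers.as_tool()
+
+             # Who is out there?
+             results["peers"] = await tool.execute("list_peers", {})
+
+             # Fire-and-forget notification to every connected peer.
+             await tool.execute("broadcast", {"message": "cache invalidated"})
+
+             # Ask one specialist and wait for its answer.
+             results["analysis"] = await tool.execute(
+                 "ask_peer",
+                 {"role": "analyst", "question": "Summarize today's data"}
+             )
+         return results
+ ```
+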
+ ### PeerClient Methods (v0.3.0)
+
+ Access via `self.peers`:
+
+ | Method | Returns | Description |
+ |--------|---------|-------------|
+ | `get_cognitive_context()` | `str` | Generate LLM-ready text describing available peers |
+ | `list()` | `list[PeerInfo]` | Get list of connected peers |
+ | `as_tool()` | `PeerTool` | Get peer tools for LLM tool use |
+ | `receive(timeout)` | `IncomingMessage` | Receive next message (for CustomAgent run loops) |
+ | `respond(msg, data)` | `None` | Respond to a request message |
+
+ ### JarvisLifespan (v0.3.0)
+
+ FastAPI integration helper:
+
+ ```python
+ from jarviscore.integrations.fastapi import JarvisLifespan
+
+ JarvisLifespan(
+     agent,                 # Agent instance
+     mode="p2p",            # "p2p" or "distributed"
+     bind_port=7950,        # Optional: P2P port
+     seed_nodes="ip:port",  # Optional: for multi-node
+ )
+ ```
+
 
  ### Mesh Configuration
  ```python
@@ -1358,5 +2263,12 @@ async def ask_researcher(self, question: str) -> str:
 
  For complete, runnable examples, see:
 
- - `examples/customagent_p2p_example.py` - P2P mode with peer communication
+ - `examples/customagent_p2p_example.py` - P2P mode with LLM-driven peer communication
  - `examples/customagent_distributed_example.py` - Distributed mode with workflows
+ - `examples/customagent_cognitive_discovery_example.py` - CustomAgent + cognitive discovery (v0.3.0)
+ - `examples/fastapi_integration_example.py` - FastAPI + JarvisLifespan (v0.3.0)
+ - `examples/cloud_deployment_example.py` - Self-registration with join_mesh (v0.3.0)
+
+ ---
+
+ *CustomAgent Guide - JarvisCore Framework v0.3.1*