jarviscore-framework 0.2.1__py3-none-any.whl → 0.3.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (32)
  1. examples/cloud_deployment_example.py +162 -0
  2. examples/fastapi_integration_example.py +570 -0
  3. examples/listeneragent_cognitive_discovery_example.py +343 -0
  4. jarviscore/__init__.py +22 -5
  5. jarviscore/cli/smoketest.py +8 -4
  6. jarviscore/core/agent.py +227 -0
  7. jarviscore/data/examples/cloud_deployment_example.py +162 -0
  8. jarviscore/data/examples/fastapi_integration_example.py +570 -0
  9. jarviscore/data/examples/listeneragent_cognitive_discovery_example.py +343 -0
  10. jarviscore/docs/API_REFERENCE.md +296 -3
  11. jarviscore/docs/CHANGELOG.md +97 -0
  12. jarviscore/docs/CUSTOMAGENT_GUIDE.md +832 -13
  13. jarviscore/docs/GETTING_STARTED.md +111 -7
  14. jarviscore/docs/USER_GUIDE.md +152 -6
  15. jarviscore/integrations/__init__.py +16 -0
  16. jarviscore/integrations/fastapi.py +247 -0
  17. jarviscore/p2p/broadcaster.py +10 -3
  18. jarviscore/p2p/coordinator.py +310 -14
  19. jarviscore/p2p/keepalive.py +45 -23
  20. jarviscore/p2p/peer_client.py +282 -10
  21. jarviscore/p2p/swim_manager.py +9 -4
  22. jarviscore/profiles/__init__.py +10 -2
  23. jarviscore/profiles/listeneragent.py +292 -0
  24. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.0.dist-info}/METADATA +37 -4
  25. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.0.dist-info}/RECORD +32 -18
  26. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.0.dist-info}/WHEEL +1 -1
  27. tests/test_13_dx_improvements.py +554 -0
  28. tests/test_14_cloud_deployment.py +403 -0
  29. tests/test_15_llm_cognitive_discovery.py +684 -0
  30. tests/test_16_unified_dx_flow.py +947 -0
  31. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.0.dist-info}/licenses/LICENSE +0 -0
  32. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.0.dist-info}/top_level.txt +0 -0
@@ -12,11 +12,15 @@ CustomAgent lets you integrate your **existing agent code** with JarvisCore's ne
  1. [Prerequisites](#prerequisites)
  2. [Choose Your Mode](#choose-your-mode)
  3. [P2P Mode](#p2p-mode)
- 4. [Distributed Mode](#distributed-mode)
- 5. [API Reference](#api-reference)
- 6. [Multi-Node Deployment](#multi-node-deployment)
- 7. [Error Handling](#error-handling)
- 8. [Troubleshooting](#troubleshooting)
+ 4. [ListenerAgent (v0.3.0)](#listeneragent-v030) - API-first agents without run() loops
+ 5. [Distributed Mode](#distributed-mode)
+ 6. [Cognitive Discovery (v0.3.0)](#cognitive-discovery-v030) - Dynamic peer awareness for LLMs
+ 7. [FastAPI Integration (v0.3.0)](#fastapi-integration-v030) - 3-line setup with JarvisLifespan
+ 8. [Cloud Deployment (v0.3.0)](#cloud-deployment-v030) - Self-registration for containers
+ 9. [API Reference](#api-reference)
+ 10. [Multi-Node Deployment](#multi-node-deployment)
+ 11. [Error Handling](#error-handling)
+ 12. [Troubleshooting](#troubleshooting)

  ---

@@ -98,13 +102,15 @@ class MyLLMClient:

  ### Quick Comparison

- | Feature | P2P Mode | Distributed Mode |
- |---------|----------|------------------|
- | **Primary method** | `run()` - continuous loop | `execute_task()` - on-demand |
- | **Communication** | Direct peer messaging | Workflow orchestration |
- | **Best for** | Chatbots, real-time agents | Pipelines, batch processing |
- | **Coordination** | Agents self-coordinate | Framework coordinates |
- | **Supports workflows** | No | Yes |
+ | Feature | P2P Mode (CustomAgent) | P2P Mode (ListenerAgent) | Distributed Mode |
+ |---------|------------------------|--------------------------|------------------|
+ | **Primary method** | `run()` - continuous loop | `on_peer_request()` handlers | `execute_task()` - on-demand |
+ | **Communication** | Direct peer messaging | Handler-based (no loop) | Workflow orchestration |
+ | **Best for** | Custom message loops | API-first agents, FastAPI | Pipelines, batch processing |
+ | **Coordination** | Agents self-coordinate | Framework handles loop | Framework coordinates |
+ | **Supports workflows** | No | No | Yes |
+
+ > **New in v0.3.0**: `ListenerAgent` lets you write P2P agents without managing the `run()` loop yourself. Just implement `on_peer_request()` and `on_peer_notify()` handlers.

  ---

@@ -660,6 +666,183 @@ async def run(self):

  ---

+ ## ListenerAgent (v0.3.0)
+
+ **ListenerAgent** is for developers who want P2P communication without writing the `run()` loop themselves.
+
+ ### The Problem with CustomAgent for P2P
+
+ Every P2P CustomAgent needs this boilerplate:
+
+ ```python
+ # BEFORE (CustomAgent) - You write the same loop every time
+ import asyncio
+
+ from jarviscore.profiles import CustomAgent
+
+
+ class MyAgent(CustomAgent):
+     role = "processor"
+     capabilities = ["processing"]
+
+     async def run(self):
+         """You have to write this loop for every P2P agent."""
+         while not self.shutdown_requested:
+             if self.peers:
+                 msg = await self.peers.receive(timeout=0.5)
+                 if msg and msg.is_request:
+                     # Handle request
+                     result = self.process(msg.data)
+                     await self.peers.respond(msg, {"response": result})
+                 elif msg and msg.is_notify:
+                     # Handle notification
+                     self.handle_notify(msg.data)
+             await asyncio.sleep(0.1)
+
+     async def execute_task(self, task):
+         """Still required even though you're using run()."""
+         return {"status": "success"}
+ ```
+
+ ### The Solution: ListenerAgent
+
+ ```python
+ # AFTER (ListenerAgent) - Just implement the handlers
+ from jarviscore.profiles import ListenerAgent
+
+
+ class MyAgent(ListenerAgent):
+     role = "processor"
+     capabilities = ["processing"]
+
+     async def on_peer_request(self, msg):
+         """Called when another agent sends a request."""
+         return {"result": msg.data.get("task", "").upper()}
+
+     async def on_peer_notify(self, msg):
+         """Called when another agent broadcasts a notification."""
+         print(f"Notification received: {msg.data}")
+ ```
+
+ **What you no longer need:**
+ - ❌ `run()` loop with `while not self.shutdown_requested`
+ - ❌ `self.peers.receive()` and `self.peers.respond()` boilerplate
+ - ❌ `execute_task()` stub method
+ - ❌ `asyncio.sleep()` timing
+
+ **What the framework handles (see the sketch below):**
+ - ✅ Message receiving loop
+ - ✅ Routing requests to `on_peer_request()`
+ - ✅ Routing notifications to `on_peer_notify()`
+ - ✅ Automatic response sending
+ - ✅ Shutdown handling
+
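+ Conceptually, the framework's listener is the same kind of loop as the CustomAgent boilerplate above, with your handlers plugged in. Below is an illustrative sketch only; the `_listen()` name and details are hypothetical, not the actual framework source:
+
+ ```python
+ # Simplified sketch of the loop ListenerAgent runs for you (illustrative only).
+ async def _listen(self):
+     while not self.shutdown_requested:
+         msg = await self.peers.receive(timeout=0.5)
+         if msg and msg.is_request:
+             reply = await self.on_peer_request(msg)  # your handler
+             await self.peers.respond(msg, reply)     # response sent for you
+         elif msg and msg.is_notify:
+             await self.on_peer_notify(msg)           # no response expected
+ ```
+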
+ ### Complete ListenerAgent Example
+
+ ```python
+ # agents.py
+ from jarviscore.profiles import ListenerAgent
+
+
+ class AnalystAgent(ListenerAgent):
+     """A data analyst that responds to peer requests."""
+
+     role = "analyst"
+     capabilities = ["analysis", "data_interpretation"]
+
+     async def setup(self):
+         await super().setup()
+         self.llm = MyLLMClient()  # Your LLM client
+
+     async def on_peer_request(self, msg):
+         """
+         Handle incoming requests from other agents.
+
+         Args:
+             msg: IncomingMessage with msg.data, msg.sender_role, etc.
+
+         Returns:
+             dict: Response sent back to the requesting agent
+         """
+         query = msg.data.get("question", "")
+
+         # Your analysis logic
+         result = self.llm.chat(f"Analyze: {query}")
+
+         return {"response": result, "status": "success"}
+
+     async def on_peer_notify(self, msg):
+         """
+         Handle broadcast notifications.
+
+         Args:
+             msg: IncomingMessage with notification data
+
+         Returns:
+             None (notifications don't expect responses)
+         """
+         print(f"[{self.role}] Received notification: {msg.data}")
+
+
+ class AssistantAgent(ListenerAgent):
+     """An assistant that coordinates with specialists."""
+
+     role = "assistant"
+     capabilities = ["chat", "coordination"]
+
+     async def setup(self):
+         await super().setup()
+         self.llm = MyLLMClient()
+
+     async def on_peer_request(self, msg):
+         """Handle incoming chat requests."""
+         query = msg.data.get("query", "")
+
+         # Use peer tools to ask specialists
+         if self.peers and "data" in query.lower():
+             # Ask the analyst for help
+             analyst_response = await self.peers.as_tool().execute(
+                 "ask_peer",
+                 {"role": "analyst", "question": query}
+             )
+             return {"response": analyst_response.get("response", "")}
+
+         # Handle directly
+         return {"response": self.llm.chat(query)}
+ ```
+
+ ```python
+ # main.py
+ import asyncio
+ from jarviscore import Mesh
+ from agents import AnalystAgent, AssistantAgent
+
+
+ async def main():
+     mesh = Mesh(mode="p2p", config={"bind_port": 7950})
+
+     mesh.add(AnalystAgent)
+     mesh.add(AssistantAgent)
+
+     await mesh.start()
+
+     # Agents automatically run their listeners
+     await mesh.run_forever()
+
+
+ if __name__ == "__main__":
+     asyncio.run(main())
+ ```
+
+ ### When to Use ListenerAgent vs CustomAgent
+
+ | Use ListenerAgent when... | Use CustomAgent when... |
+ |---------------------------|-------------------------|
+ | You want the simplest P2P agent | You need custom message loop timing |
+ | Request/response pattern fits your use case | You need to initiate messages proactively |
+ | You're integrating with FastAPI | You need fine-grained control over the loop |
+ | You want less boilerplate | You have complex coordination logic |
+
+ ### ListenerAgent with FastAPI
+
+ ListenerAgent shines with FastAPI integration. See [FastAPI Integration](#fastapi-integration-v030) below.
+
+ ---
+
  ## Distributed Mode

  Distributed mode is for task pipelines where the framework orchestrates execution order and passes data between steps.
@@ -1066,6 +1249,592 @@ results = await mesh.workflow("parallel-example", [

  ---

+ ## Cognitive Discovery (v0.3.0)
+
+ **Cognitive Discovery** lets your LLM dynamically learn about available peers instead of hardcoding agent names in prompts.
+
+ ### The Problem: Hardcoded Peer Names
+
+ Before v0.3.0, you had to hardcode peer information in your system prompts:
+
+ ```python
+ # BEFORE: Hardcoded peer names - breaks when peers change
+ system_prompt = """You are a helpful assistant.
+
+ You have access to:
+ - ask_peer: Ask specialist agents for help
+   - Use role="analyst" for data analysis
+   - Use role="researcher" for research tasks
+   - Use role="writer" for content creation
+
+ When a user needs data analysis, USE ask_peer with role="analyst"."""
+ ```
+
+ **Problems:**
+ - If you add a new agent, you must update every prompt
+ - If an agent is offline, the LLM still tries to call it
+ - Prompts become stale as your system evolves
+ - Difficult to manage across many agents
+
+ ### The Solution: `get_cognitive_context()`
+
+ ```python
+ # AFTER: Dynamic peer awareness - always up to date
+ async def get_system_prompt(self) -> str:
+     base_prompt = """You are a helpful assistant.
+
+ You have access to peer tools for collaborating with other agents."""
+
+     # Generate LLM-ready peer descriptions dynamically
+     if self.peers:
+         peer_context = self.peers.get_cognitive_context()
+         return f"{base_prompt}\n\n{peer_context}"
+
+     return base_prompt
+ ```
+
+ The `get_cognitive_context()` method generates text like:
+
+ ```
+ Available Peers:
+ - analyst (capabilities: analysis, data_interpretation)
+   Use ask_peer with role="analyst" for data analysis tasks
+ - researcher (capabilities: research, web_search)
+   Use ask_peer with role="researcher" for research tasks
+ ```
+
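+ For intuition, equivalent text could be assembled from `peers.list()` roughly as below. This is our own illustrative sketch, not the library's implementation, and it assumes each `PeerInfo` exposes `role` and `capabilities` attributes (see the API reference):
+
+ ```python
+ # Illustrative sketch only - not jarviscore source code.
+ def build_peer_context(peers) -> str:
+     lines = ["Available Peers:"]
+     for peer in peers.list():
+         caps = ", ".join(peer.capabilities)
+         lines.append(f"- {peer.role} (capabilities: {caps})")
+         lines.append(f'  Use ask_peer with role="{peer.role}" for {peer.capabilities[0]} tasks')
+     return "\n".join(lines)
+ ```
+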
+ ### Complete Example: Dynamic Peer Discovery
+
+ ```python
+ # agents.py
+ from jarviscore.profiles import CustomAgent
+
+
+ class AssistantAgent(CustomAgent):
+     """An assistant that dynamically discovers and uses peers."""
+
+     role = "assistant"
+     capabilities = ["chat", "coordination"]
+
+     async def setup(self):
+         await super().setup()
+         self.llm = MyLLMClient()
+
+     def get_system_prompt(self) -> str:
+         """Build system prompt with dynamic peer context."""
+         base_prompt = """You are a helpful AI assistant.
+
+ When users ask questions that require specialized knowledge:
+ 1. Check what peers are available
+ 2. Use ask_peer to get help from the right specialist
+ 3. Synthesize their response for the user"""
+
+         # DYNAMIC: Add current peer information
+         if self.peers:
+             peer_context = self.peers.get_cognitive_context()
+             return f"{base_prompt}\n\n{peer_context}"
+
+         return base_prompt
+
+     def get_tools(self) -> list:
+         """Get tools including peer tools."""
+         tools = [
+             # Your local tools...
+         ]
+
+         if self.peers:
+             tools.extend(self.peers.as_tool().schema)
+
+         return tools
+
+     async def chat(self, user_message: str) -> str:
+         """Chat with dynamic peer awareness."""
+         # System prompt now includes current peer info
+         system = self.get_system_prompt()
+         tools = self.get_tools()
+
+         response = self.llm.chat(
+             messages=[{"role": "user", "content": user_message}],
+             tools=tools,
+             system=system
+         )
+
+         # Handle tool use...
+         return response.get("content", "")
+ ```
+
+ ### Benefits of Cognitive Discovery
+
+ | Before (Hardcoded) | After (Dynamic) |
+ |--------------------|-----------------|
+ | Update prompts manually when peers change | Prompts auto-update |
+ | LLM tries to call offline agents | Only shows available agents |
+ | Difficult to manage at scale | Scales automatically |
+ | Stale documentation in prompts | Always current |
+
+ ---
+
+ ## FastAPI Integration (v0.3.0)
+
+ **JarvisLifespan** reduces FastAPI integration from ~100 lines to 3 lines.
+
+ ### The Problem: Manual Lifecycle Management
+
+ Before v0.3.0, integrating an agent with FastAPI required manual lifecycle management:
+
+ ```python
+ # BEFORE: ~100 lines of boilerplate
+ from contextlib import asynccontextmanager
+ from fastapi import FastAPI
+ from jarviscore import Mesh
+ from jarviscore.profiles import CustomAgent
+ import asyncio
+
+
+ class MyAgent(CustomAgent):
+     role = "processor"
+     capabilities = ["processing"]
+
+     async def run(self):
+         while not self.shutdown_requested:
+             if self.peers:
+                 msg = await self.peers.receive(timeout=0.5)
+                 if msg and msg.is_request:
+                     result = self.process(msg.data)
+                     await self.peers.respond(msg, {"response": result})
+             await asyncio.sleep(0.1)
+
+     async def execute_task(self, task):
+         return {"status": "success"}
+
+
+ # Manual lifecycle management
+ mesh = None
+ agent = None
+ run_task = None
+
+
+ @asynccontextmanager
+ async def lifespan(app: FastAPI):
+     global mesh, agent, run_task
+
+     # Startup
+     mesh = Mesh(mode="p2p", config={"bind_port": 7950})
+     agent = mesh.add(MyAgent)
+     await mesh.start()
+     run_task = asyncio.create_task(agent.run())
+
+     yield
+
+     # Shutdown
+     agent.request_shutdown()
+     run_task.cancel()
+     await mesh.stop()
+
+
+ app = FastAPI(lifespan=lifespan)
+
+
+ @app.post("/process")
+ async def process(data: dict):
+     # Your endpoint logic
+     return {"result": "processed"}
+ ```
+
+ ### The Solution: JarvisLifespan
+
+ ```python
+ # AFTER: 3 lines to integrate
+ from fastapi import FastAPI
+ from jarviscore.profiles import ListenerAgent
+ from jarviscore.integrations.fastapi import JarvisLifespan
+
+
+ class ProcessorAgent(ListenerAgent):
+     role = "processor"
+     capabilities = ["processing"]
+
+     async def on_peer_request(self, msg):
+         return {"result": msg.data.get("task", "").upper()}
+
+
+ # That's it - 3 lines!
+ app = FastAPI(lifespan=JarvisLifespan(ProcessorAgent(), mode="p2p"))
+
+
+ @app.post("/process")
+ async def process(data: dict):
+     return {"result": "processed"}
+ ```
+
+ ### JarvisLifespan Configuration
+
+ ```python
+ from jarviscore.integrations.fastapi import JarvisLifespan
+
+ # Basic usage
+ app = FastAPI(lifespan=JarvisLifespan(agent, mode="p2p"))
+
+ # With configuration
+ app = FastAPI(
+     lifespan=JarvisLifespan(
+         agent,
+         mode="p2p",            # or "distributed"
+         bind_port=7950,        # P2P port
+         seed_nodes="ip:port",  # For multi-node
+     )
+ )
+ ```
+
+ ### Complete FastAPI Example
+
+ ```python
+ # app.py
+ from fastapi import FastAPI, HTTPException
+ from pydantic import BaseModel
+ from jarviscore.profiles import ListenerAgent
+ from jarviscore.integrations.fastapi import JarvisLifespan
+
+
+ class AnalysisRequest(BaseModel):
+     data: str
+
+
+ class AnalystAgent(ListenerAgent):
+     """Agent that handles both API requests and P2P messages."""
+
+     role = "analyst"
+     capabilities = ["analysis"]
+
+     async def setup(self):
+         await super().setup()
+         self.llm = MyLLMClient()
+
+     async def on_peer_request(self, msg):
+         """Handle requests from other agents in the mesh."""
+         query = msg.data.get("question", "")
+         result = self.llm.chat(f"Analyze: {query}")
+         return {"response": result}
+
+     def analyze(self, data: str) -> dict:
+         """Method called by API endpoint."""
+         result = self.llm.chat(f"Analyze this data: {data}")
+         return {"analysis": result}
+
+
+ # Create agent instance
+ analyst = AnalystAgent()
+
+ # Create FastAPI app with automatic lifecycle management
+ app = FastAPI(
+     title="Analyst Service",
+     lifespan=JarvisLifespan(analyst, mode="p2p", bind_port=7950)
+ )
+
+
+ @app.post("/analyze")
+ async def analyze(request: AnalysisRequest):
+     """API endpoint - also accessible as a peer in the mesh."""
+     result = analyst.analyze(request.data)
+     return result
+
+
+ @app.get("/peers")
+ async def list_peers():
+     """See what other agents are in the mesh."""
+     if analyst.peers:
+         return {"peers": analyst.peers.list()}
+     return {"peers": []}
+ ```
+
+ Run with:
+ ```bash
+ uvicorn app:app --host 0.0.0.0 --port 8000
+ ```
+
+ Your agent is now:
+ - Serving HTTP API on port 8000
+ - Participating in P2P mesh on port 7950
+ - Discoverable by other agents
+ - Lifecycle-managed automatically by `JarvisLifespan`
+
+ ### Testing the Flow
+
+ **Step 1: Start the FastAPI server (Terminal 1)**
+ ```bash
+ python examples/fastapi_integration_example.py
+ ```
+
+ **Step 2: Join a scout agent (Terminal 2)**
+ ```bash
+ python examples/fastapi_integration_example.py --join-as scout
+ ```
+
+ **Step 3: Test with curl (Terminal 3)**
+ ```bash
+ # Chat with assistant (may delegate to analyst)
+ curl -X POST http://localhost:8000/chat -H "Content-Type: application/json" -d '{"message": "Analyze Q4 sales trends"}'
+
+ # Ask analyst directly
+ curl -X POST http://localhost:8000/ask/analyst -H "Content-Type: application/json" -d '{"message": "What are key revenue metrics?"}'
+
+ # See what each agent knows about peers (cognitive context)
+ curl http://localhost:8000/agents
+ ```
+
+ **Expected flow for `/chat`:**
+ 1. Request goes to **assistant** agent
+ 2. Assistant's LLM sees peers via `get_cognitive_context()`
+ 3. LLM decides to delegate to **analyst** (data analysis request)
+ 4. Assistant uses `ask_peer` tool → P2P message to analyst
+ 5. Analyst processes and responds via P2P
+ 6. Response includes `"delegated_to": "analyst"` and `"peer_data"`
+
+ **Example response:**
+ ```json
+ {
+   "message": "Analyze Q4 sales trends",
+   "response": "Based on the analyst's findings...",
+   "delegated_to": "analyst",
+   "peer_data": {"analysis": "...", "confidence": 0.9}
+ }
+ ```
+
+ ---
+
+ ## Cloud Deployment (v0.3.0)
+
+ **Self-registration** lets agents join existing meshes without a central orchestrator - perfect for Docker, Kubernetes, and auto-scaling.
+
+ ### The Problem: Central Orchestrator Required
+
+ Before v0.3.0, all agents had to be registered with a central Mesh:
+
+ ```python
+ # BEFORE: Central orchestrator pattern
+ # You needed one "main" node that registered all agents
+
+ # main_node.py (central orchestrator)
+ from jarviscore import Mesh
+
+ mesh = Mesh(mode="distributed", config={"bind_port": 7950})
+ mesh.add(ResearcherAgent)  # Must be on this node
+ mesh.add(WriterAgent)      # Must be on this node
+ await mesh.start()
+ ```
+
+ **Problems with this approach:**
+ - Single point of failure
+ - Can't easily scale agent instances
+ - Doesn't work well with Kubernetes/Docker
+ - All agents must be on the same node or manually configured
+
+ ### The Solution: `join_mesh()` and `leave_mesh()`
+
+ ```python
+ # AFTER: Self-registering agents
+ # Each agent can join any mesh independently
+
+ # agent_container.py (runs in Docker/K8s)
+ from jarviscore.profiles import ListenerAgent
+ import os
+
+
+ class WorkerAgent(ListenerAgent):
+     role = "worker"
+     capabilities = ["processing"]
+
+     async def on_peer_request(self, msg):
+         return {"result": "processed"}
+
+
+ async def main():
+     agent = WorkerAgent()
+     await agent.setup()
+
+     # Join existing mesh via environment variable
+     seed_nodes = os.environ.get("JARVISCORE_SEED_NODES", "mesh-service:7950")
+     await agent.join_mesh(seed_nodes=seed_nodes)
+
+     # Agent is now part of the mesh, discoverable by others
+     await agent.serve_forever()
+
+     # Clean shutdown
+     await agent.leave_mesh()
+ ```
+
+ ### Environment Variables for Cloud
+
+ | Variable | Description | Example |
+ |----------|-------------|---------|
+ | `JARVISCORE_SEED_NODES` | Comma-separated list of mesh nodes | `"10.0.0.1:7950,10.0.0.2:7950"` |
+ | `JARVISCORE_MESH_ENDPOINT` | This agent's reachable address | `"worker-pod-abc:7950"` |
+ | `JARVISCORE_BIND_PORT` | Port to listen on | `"7950"` |
+
+ ### Docker Deployment Example
+
+ ```dockerfile
+ # Dockerfile
+ FROM python:3.11-slim
+ WORKDIR /app
+ COPY requirements.txt .
+ RUN pip install -r requirements.txt
+ COPY . .
+ CMD ["python", "agent.py"]
+ ```
+
+ ```python
+ # agent.py
+ import asyncio
+ import os
+ from jarviscore.profiles import ListenerAgent
+
+
+ class WorkerAgent(ListenerAgent):
+     role = "worker"
+     capabilities = ["processing"]
+
+     async def on_peer_request(self, msg):
+         task = msg.data.get("task", "")
+         return {"result": f"Processed: {task}"}
+
+
+ async def main():
+     agent = WorkerAgent()
+     await agent.setup()
+
+     # Configuration from environment
+     seed_nodes = os.environ.get("JARVISCORE_SEED_NODES")
+     mesh_endpoint = os.environ.get("JARVISCORE_MESH_ENDPOINT")
+
+     if seed_nodes:
+         await agent.join_mesh(
+             seed_nodes=seed_nodes,
+             advertise_endpoint=mesh_endpoint
+         )
+         print(f"Joined mesh via {seed_nodes}")
+     else:
+         print("Running standalone (no JARVISCORE_SEED_NODES)")
+
+     await agent.serve_forever()
+
+
+ if __name__ == "__main__":
+     asyncio.run(main())
+ ```
+
+ ```yaml
+ # docker-compose.yml
+ version: '3.8'
+ services:
+   mesh-seed:
+     build: .
+     environment:
+       - JARVISCORE_BIND_PORT=7950
+     ports:
+       - "7950:7950"
+
+   worker-1:
+     build: .
+     environment:
+       - JARVISCORE_SEED_NODES=mesh-seed:7950
+       - JARVISCORE_MESH_ENDPOINT=worker-1:7950
+     depends_on:
+       - mesh-seed
+
+   worker-2:
+     build: .
+     environment:
+       - JARVISCORE_SEED_NODES=mesh-seed:7950
+       - JARVISCORE_MESH_ENDPOINT=worker-2:7950
+     depends_on:
+       - mesh-seed
+ ```
+
+ ### Kubernetes Deployment Example
+
+ ```yaml
+ # k8s-deployment.yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: jarvis-worker
+ spec:
+   replicas: 3  # Scale as needed
+   selector:
+     matchLabels:
+       app: jarvis-worker
+   template:
+     metadata:
+       labels:
+         app: jarvis-worker
+     spec:
+       containers:
+         - name: worker
+           image: myregistry/jarvis-worker:latest
+           env:
+             - name: JARVISCORE_SEED_NODES
+               value: "jarvis-mesh-service:7950"
+             - name: JARVISCORE_MESH_ENDPOINT
+               valueFrom:
+                 fieldRef:
+                   fieldPath: status.podIP
+           ports:
+             - containerPort: 7950
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+   name: jarvis-mesh-service
+ spec:
+   selector:
+     app: jarvis-mesh-seed  # targets a separate seed Deployment (not shown here)
+   ports:
+     - port: 7950
+       targetPort: 7950
+ ```
+
+ ### How Self-Registration Works
+
+ ```
+ ┌─────────────────────────────────────────────────────────────┐
+ │                   SELF-REGISTRATION FLOW                     │
+ ├─────────────────────────────────────────────────────────────┤
+ │                                                             │
+ │  1. New container starts                                    │
+ │         │                                                   │
+ │         ▼                                                   │
+ │  2. agent.join_mesh(seed_nodes="mesh:7950")                 │
+ │         │                                                   │
+ │         ▼                                                   │
+ │  3. Agent connects to seed node                             │
+ │         │                                                   │
+ │         ▼                                                   │
+ │  4. SWIM protocol discovers all peers                       │
+ │         │                                                   │
+ │         ▼                                                   │
+ │  5. Agent registers its role/capabilities                   │
+ │         │                                                   │
+ │         ▼                                                   │
+ │  6. Other agents can now discover and call this agent       │
+ │                                                             │
+ └─────────────────────────────────────────────────────────────┘
+ ```
+
+ ### RemoteAgentProxy (Automatic)
+
+ When agents join from different nodes, the framework automatically creates `RemoteAgentProxy` objects. You don't need to do anything special - the mesh handles it:
+
+ ```python
+ # On any node, you can discover and call remote agents
+ if agent.peers:
+     # This works whether the peer is local or remote
+     response = await agent.peers.as_tool().execute(
+         "ask_peer",
+         {"role": "worker", "question": "Process this data"}
+     )
+ ```
+
+ ---
+
  ## API Reference

  ### CustomAgent Class Attributes
@@ -1082,6 +1851,22 @@ results = await mesh.workflow("parallel-example", [
  | `setup()` | Both | Called once on startup. Initialize resources here. Always call `await super().setup()` |
  | `run()` | P2P | Main loop for continuous operation. Required for P2P mode |
  | `execute_task(task)` | Distributed | Handle a workflow task. Required for Distributed mode |
+ | `join_mesh(seed_nodes, ...)` | Both | **(v0.3.0)** Self-register with an existing mesh |
+ | `leave_mesh()` | Both | **(v0.3.0)** Gracefully leave the mesh |
+ | `serve_forever()` | Both | **(v0.3.0)** Block until shutdown signal |
+
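+ Taken together, the v0.3.0 lifecycle methods compose as in the Cloud Deployment section above. A minimal sketch (wrapping shutdown in `try`/`finally` is our suggestion, not a framework requirement):
+
+ ```python
+ # Minimal self-registration lifecycle, based on the Cloud Deployment section.
+ agent = WorkerAgent()
+ await agent.setup()
+ await agent.join_mesh(seed_nodes="10.0.0.1:7950")
+ try:
+     await agent.serve_forever()  # blocks until shutdown signal
+ finally:
+     await agent.leave_mesh()     # graceful deregistration
+ ```
+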
+ ### ListenerAgent Class (v0.3.0)
+
+ ListenerAgent extends CustomAgent with handler-based P2P communication.
+
+ | Attribute/Method | Type | Description |
+ |------------------|------|-------------|
+ | `role` | `str` | Required. Unique identifier for this agent type |
+ | `capabilities` | `list[str]` | Required. List of capabilities for discovery |
+ | `on_peer_request(msg)` | async method | Handle incoming requests. Return dict to respond |
+ | `on_peer_notify(msg)` | async method | Handle broadcast notifications. No return needed |
+
+ **Note:** ListenerAgent does not require `run()` or `execute_task()` implementations.
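+
+ A minimal conforming class, restating the table above:
+
+ ```python
+ from jarviscore.profiles import ListenerAgent
+
+
+ class EchoAgent(ListenerAgent):
+     role = "echo"                        # required
+     capabilities = ["echo"]              # required
+
+     async def on_peer_request(self, msg):
+         return {"response": msg.data}    # returned dict is sent back automatically
+
+     async def on_peer_notify(self, msg):
+         pass                             # notifications expect no response
+ ```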

  ### Why `execute_task()` is Required in P2P Mode

@@ -1128,6 +1913,33 @@ Access via `self.peers.as_tool().execute(tool_name, params)`:
  | `broadcast` | `{"message": str}` | Send a message to all connected peers |
  | `list_peers` | `{}` | Get list of available peers and their capabilities |

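+ A short usage sketch with the parameter shapes from this table (it assumes `execute()` is awaited, as in the examples earlier in this guide):
+
+ ```python
+ # Inside an agent method, after confirming self.peers is available.
+ tool = self.peers.as_tool()
+
+ peers = await tool.execute("list_peers", {})                     # discover peers
+ await tool.execute("broadcast", {"message": "cache refreshed"})  # notify everyone
+ answer = await tool.execute(
+     "ask_peer", {"role": "analyst", "question": "Summarize Q4"}
+ )
+ ```
+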
+ ### PeerClient Methods (v0.3.0)
+
+ Access via `self.peers`:
+
+ | Method | Returns | Description |
+ |--------|---------|-------------|
+ | `get_cognitive_context()` | `str` | Generate LLM-ready text describing available peers |
+ | `list()` | `list[PeerInfo]` | Get list of connected peers |
+ | `as_tool()` | `PeerTool` | Get peer tools for LLM tool use |
+ | `receive(timeout)` | `IncomingMessage` | Receive next message (for CustomAgent run loops) |
+ | `respond(msg, data)` | `None` | Respond to a request message |
+
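+ For example, the two discovery-oriented methods can drive prompt assembly (a small sketch; the `p.role` attribute access assumes the `PeerInfo` shape implied above):
+
+ ```python
+ if self.peers:
+     prompt_block = self.peers.get_cognitive_context()  # LLM-ready text for the system prompt
+     roles = [p.role for p in self.peers.list()]        # structured access for routing logic
+ ```
+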
+ ### JarvisLifespan (v0.3.0)
+
+ FastAPI integration helper:
+
+ ```python
+ from jarviscore.integrations.fastapi import JarvisLifespan
+
+ JarvisLifespan(
+     agent,                 # Agent instance
+     mode="p2p",            # "p2p" or "distributed"
+     bind_port=7950,        # Optional: P2P port
+     seed_nodes="ip:port",  # Optional: for multi-node
+ )
+ ```
+
  ### Mesh Configuration

  ```python
@@ -1358,5 +2170,12 @@ async def ask_researcher(self, question: str) -> str:

  For complete, runnable examples, see:

- - `examples/customagent_p2p_example.py` - P2P mode with peer communication
+ - `examples/customagent_p2p_example.py` - P2P mode with LLM-driven peer communication
  - `examples/customagent_distributed_example.py` - Distributed mode with workflows
+ - `examples/listeneragent_cognitive_discovery_example.py` - ListenerAgent + cognitive discovery (v0.3.0)
+ - `examples/fastapi_integration_example.py` - FastAPI + JarvisLifespan (v0.3.0)
+ - `examples/cloud_deployment_example.py` - Self-registration with join_mesh (v0.3.0)
+
+ ---
+
+ *CustomAgent Guide - JarvisCore Framework v0.3.0*