jarviscore-framework 0.3.0__py3-none-any.whl → 0.3.2__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- examples/cloud_deployment_example.py +3 -3
- examples/{listeneragent_cognitive_discovery_example.py → customagent_cognitive_discovery_example.py} +55 -14
- examples/customagent_distributed_example.py +140 -1
- examples/fastapi_integration_example.py +74 -11
- jarviscore/__init__.py +8 -11
- jarviscore/cli/smoketest.py +1 -1
- jarviscore/core/mesh.py +158 -0
- jarviscore/data/examples/cloud_deployment_example.py +3 -3
- jarviscore/data/examples/custom_profile_decorator.py +134 -0
- jarviscore/data/examples/custom_profile_wrap.py +168 -0
- jarviscore/data/examples/{listeneragent_cognitive_discovery_example.py → customagent_cognitive_discovery_example.py} +55 -14
- jarviscore/data/examples/customagent_distributed_example.py +140 -1
- jarviscore/data/examples/fastapi_integration_example.py +74 -11
- jarviscore/docs/API_REFERENCE.md +576 -47
- jarviscore/docs/CHANGELOG.md +131 -0
- jarviscore/docs/CONFIGURATION.md +1 -1
- jarviscore/docs/CUSTOMAGENT_GUIDE.md +591 -153
- jarviscore/docs/GETTING_STARTED.md +186 -329
- jarviscore/docs/TROUBLESHOOTING.md +1 -1
- jarviscore/docs/USER_GUIDE.md +292 -12
- jarviscore/integrations/fastapi.py +4 -4
- jarviscore/p2p/coordinator.py +36 -7
- jarviscore/p2p/messages.py +13 -0
- jarviscore/p2p/peer_client.py +380 -21
- jarviscore/p2p/peer_tool.py +17 -11
- jarviscore/profiles/__init__.py +2 -4
- jarviscore/profiles/customagent.py +302 -74
- jarviscore/testing/__init__.py +35 -0
- jarviscore/testing/mocks.py +578 -0
- {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.2.dist-info}/METADATA +61 -46
- {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.2.dist-info}/RECORD +42 -34
- tests/test_13_dx_improvements.py +37 -37
- tests/test_15_llm_cognitive_discovery.py +18 -18
- tests/test_16_unified_dx_flow.py +3 -3
- tests/test_17_session_context.py +489 -0
- tests/test_18_mesh_diagnostics.py +465 -0
- tests/test_19_async_requests.py +516 -0
- tests/test_20_load_balancing.py +546 -0
- tests/test_21_mock_testing.py +776 -0
- jarviscore/profiles/listeneragent.py +0 -292
- {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.2.dist-info}/WHEEL +0 -0
- {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.2.dist-info}/licenses/LICENSE +0 -0
- {jarviscore_framework-0.3.0.dist-info → jarviscore_framework-0.3.2.dist-info}/top_level.txt +0 -0
jarviscore/docs/GETTING_STARTED.md

@@ -11,21 +11,20 @@ Build your first AI agent in 5 minutes!
 | Profile | Best For | LLM Required |
 |---------|----------|--------------|
 | **AutoAgent** | Rapid prototyping, LLM generates code from prompts | Yes |
-| **CustomAgent** |
-| **ListenerAgent** | API-first (FastAPI), just implement handlers | Optional |
+| **CustomAgent** | Your own code with P2P handlers or workflow tasks | Optional |
 
 ### Execution Modes (How agents are orchestrated)
 
 | Mode | Use Case | Start Here |
 |------|----------|------------|
-| **Autonomous** | Single machine, simple pipelines |
+| **Autonomous** | Single machine, simple pipelines | This guide |
 | **P2P** | Direct agent communication, swarms | [CustomAgent Guide](CUSTOMAGENT_GUIDE.md) |
 | **Distributed** | Multi-node production systems | [CustomAgent Guide](CUSTOMAGENT_GUIDE.md) |
 
 **Recommendation:**
 - **New to agents?** Start with **AutoAgent + Autonomous mode** below
-- **Have existing code?** Jump to **CustomAgent**
-- **Building APIs?** See **
+- **Have existing code?** Jump to **CustomAgent** section
+- **Building APIs?** See **CustomAgent + FastAPI** below
 
 ---
 
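The table change above is the headline of this release: ListenerAgent is folded into CustomAgent, so one profile now covers both handler-style P2P agents and workflow agents. A minimal sketch of the merged surface, assembled from examples that appear later in this diff (not a verbatim excerpt from the guide):

```python
from jarviscore.profiles import CustomAgent

class MyAgent(CustomAgent):
    role = "processor"
    capabilities = ["data_processing"]

    # Handler style (what ListenerAgent used to provide)
    async def on_peer_request(self, msg):
        return {"result": msg.data}

    # Workflow style (for mesh.workflow() tasks)
    async def execute_task(self, task):
        return {"status": "success", "output": task}
```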
@@ -42,8 +41,8 @@ An **AutoAgent** that takes natural language prompts and automatically:
 
 ## Prerequisites
 
--
--
+- Python 3.10 or higher
+- An API key from one of these LLM providers:
   - [Claude (Anthropic)](https://console.anthropic.com/) - Recommended
   - [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service)
   - [Google Gemini](https://ai.google.dev/)
@@ -97,7 +96,7 @@ LLM_MODEL=Qwen/Qwen2.5-Coder-32B-Instruct
 ```
 
 **Tip:** JarvisCore automatically tries providers in this order:
-Claude
+Claude -> Azure -> Gemini -> vLLM
 
 ---
 
@@ -115,11 +114,11 @@ python -m jarviscore.cli.check --validate-llm
 
 You should see:
 ```
-
-
-
-
-
+Python Version: OK
+JarvisCore Package: OK
+Dependencies: OK
+.env File: OK
+Claude/Azure/Gemini: OK
 ```
 
 Run the smoke test for end-to-end validation:
@@ -128,7 +127,7 @@ Run the smoke test for end-to-end validation:
 python -m jarviscore.cli.smoketest
 ```
 
-
+**If all tests pass**, you're ready to build agents!
 
 ---
 
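The unchanged Step 3/4 example between these hunks is omitted by the diff. For context, a minimal sketch of the autonomous flow it validates, assembled from fragments visible elsewhere in this diff; the AutoAgent import path is assumed to match CustomAgent's, and CalculatorAgent is an illustrative name:

```python
import asyncio
from jarviscore import Mesh
from jarviscore.profiles import AutoAgent  # assumed path, mirroring CustomAgent

class CalculatorAgent(AutoAgent):
    role = "calculator"
    capabilities = ["math"]

async def main():
    mesh = Mesh(mode="autonomous")
    mesh.add(CalculatorAgent)
    await mesh.start()

    # Task dict shape as shown in the workflow hunks below
    results = await mesh.workflow("demo", [
        {"id": "step1", "agent": "calculator", "task": "Calculate 10 factorial"}
    ])
    print(results[0]['output'])  # 3628800

    await mesh.stop()

asyncio.run(main())
```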
@@ -199,14 +198,16 @@ Output: 3628800
 Execution time: 4.23s
 ```
 
-
+**Congratulations!** You just built an AI agent with zero manual coding!
 
 ---
 
-## Step 5:
+## Step 5: CustomAgent (Your Own Code)
 
 If you have existing agents or don't need LLM code generation, use **CustomAgent**:
 
+### Workflow Mode (execute_task)
+
 ```python
 import asyncio
 from jarviscore import Mesh
@@ -225,7 +226,6 @@ class MyAgent(CustomAgent):
 
 
 async def main():
-    # CustomAgent uses "distributed" (workflow + P2P) or "p2p" (P2P only)
     mesh = Mesh(mode="distributed", config={
         'bind_port': 7950,
         'node_name': 'custom-node',
@@ -244,80 +244,79 @@ async def main():
 asyncio.run(main())
 ```
 
-
-- No LLM API required (no costs!)
-- Keep your existing logic
-- Works with any framework (LangChain, CrewAI, etc.)
+### P2P Mode (on_peer_request)
 
-
+```python
+import asyncio
+from jarviscore import Mesh
+from jarviscore.profiles import CustomAgent
 
----
 
-
+class MyAgent(CustomAgent):
+    role = "processor"
+    capabilities = ["data_processing"]
 
-
+    async def on_peer_request(self, msg):
+        """Handle requests from other agents."""
+        data = msg.data.get("data", [])
+        return {"result": [x * 2 for x in data]}
 
-### The Problem
 
-
+async def main():
+    mesh = Mesh(mode="p2p", config={'bind_port': 7950})
+    mesh.add(MyAgent)
+    await mesh.start()
 
-
-
-
-    async def run(self):
-        while not self.shutdown_requested:
-            msg = await self.peers.receive(timeout=1.0)
-            if msg is None:
-                continue
-            if msg.type == MessageType.REQUEST:
-                result = await self.process(msg.data)
-                await self.peers.respond(msg, result)
-            # ... error handling, logging, etc.
-```
+    # Agent listens for peer requests
+    print("Agent running. Press Ctrl+C to stop.")
+    await mesh.agents[0].run()
 
-
+    await mesh.stop()
 
-With ListenerAgent, just implement handlers:
 
-
-
+asyncio.run(main())
+```
 
-
-
-
+**Key Benefits:**
+- Keep your existing logic
+- Works with any framework (LangChain, CrewAI, etc.)
 
-
-    """Handle requests - return value is sent as response."""
-    return {"result": await self.process(msg.data)}
+---
 
-
-
-
-```
+## Step 6: CustomAgent + FastAPI (API-First)
+
+Building an API where agents run in the background? JarvisCore makes it easy.
 
 ### FastAPI Integration (3 Lines)
 
 ```python
 from fastapi import FastAPI, Request
-from jarviscore.profiles import
+from jarviscore.profiles import CustomAgent
 from jarviscore.integrations.fastapi import JarvisLifespan
 
-
+
+class ProcessorAgent(CustomAgent):
     role = "processor"
     capabilities = ["data_processing"]
 
     async def on_peer_request(self, msg):
+        """Handle requests from other agents in the mesh."""
         return {"processed": msg.data.get("task", "").upper()}
 
+
 # Create agent and integrate with FastAPI
 agent = ProcessorAgent()
 app = FastAPI(lifespan=JarvisLifespan(agent, mode="p2p", bind_port=7950))
 
+
 @app.post("/process")
 async def process(data: dict, request: Request):
     # Access your agent from the request
     agent = request.app.state.jarvis_agents["processor"]
-
+    # Call another agent in the mesh
+    result = await agent.peers.request("analyst", {"task": data.get("task")})
+    return result
+
 
 @app.get("/peers")
 async def list_peers(request: Request):
@@ -333,129 +332,139 @@ Run with: `uvicorn myapp:app --host 0.0.0.0 --port 8000`
 - Auto message dispatch to handlers
 - Graceful startup/shutdown handled by JarvisLifespan
 
-**For more:** See [CustomAgent Guide](CUSTOMAGENT_GUIDE.md) for ListenerAgent details.
-
 ---
 
-##
+## Step 7: Framework Integration Patterns
 
-
+JarvisCore is **async-first**. Here's how to integrate with different frameworks:
 
-
-2. **Generated Python code** using Claude/Azure/Gemini:
-   ```python
-   def factorial(n):
-       if n == 0 or n == 1:
-           return 1
-       return n * factorial(n - 1)
+### Pattern 1: FastAPI (Recommended)
 
-
-
-
-
+```python
+from fastapi import FastAPI
+from jarviscore.profiles import CustomAgent
+from jarviscore.integrations.fastapi import JarvisLifespan
 
-
+class MyAgent(CustomAgent):
+    role = "processor"
+    capabilities = ["processing"]
 
-
+    async def on_peer_request(self, msg):
+        return {"result": msg.data}
 
-
+app = FastAPI(lifespan=JarvisLifespan(MyAgent(), mode="p2p", bind_port=7950))
+```
 
-###
+### Pattern 2: Other Async Frameworks (aiohttp, Quart, Tornado)
 
 ```python
-
-
-
-
+# aiohttp example
+import asyncio
+from aiohttp import web
+from jarviscore import Mesh
+from jarviscore.profiles import CustomAgent
 
+class MyAgent(CustomAgent):
+    role = "processor"
+    capabilities = ["processing"]
 
-async def
-
-    mesh.add(DataAgent)
-    await mesh.start()
+    async def on_peer_request(self, msg):
+        return {"result": msg.data}
 
-
-
-        "agent": "data_analyst",
-        "task": """
-        Given this list: [10, 20, 30, 40, 50]
-        Calculate: mean, median, min, max, sum
-        Return as a dict
-        """
-    }
-])
+mesh = None
+agent = None
 
-
-
+async def on_startup(app):
+    global mesh, agent
+    mesh = Mesh(mode="p2p", config={"bind_port": 7950})
+    agent = mesh.add(MyAgent())
+    await mesh.start()
+    asyncio.create_task(agent.run())
+    app['agent'] = agent
 
+async def on_cleanup(app):
+    agent.request_shutdown()
     await mesh.stop()
+
+async def process_handler(request):
+    agent = request.app['agent']
+    result = await agent.peers.request("analyst", {"task": "analyze"})
+    return web.json_response(result)
+
+app = web.Application()
+app.on_startup.append(on_startup)
+app.on_cleanup.append(on_cleanup)
+app.router.add_post('/process', process_handler)
 ```
 
-###
+### Pattern 3: Sync Frameworks (Flask, Django)
 
 ```python
-
-
-
-
+# Flask example - requires background thread
+import asyncio
+import threading
+from flask import Flask, jsonify
+from jarviscore import Mesh
+from jarviscore.profiles import CustomAgent
 
+app = Flask(__name__)
 
-
-
-
-    await mesh.start()
+class MyAgent(CustomAgent):
+    role = "processor"
+    capabilities = ["processing"]
 
-
-    {
-        "agent": "text_processor",
-        "task": """
-        Count the words in this sentence:
-        "JarvisCore makes building AI agents incredibly easy"
-        """
-    }
-])
+    async def on_peer_request(self, msg):
+        return {"result": msg.data}
 
-
+# Global state
+_loop = None
+_mesh = None
+_agent = None
 
-
-
+def _start_mesh():
+    """Run in background thread."""
+    global _loop, _mesh, _agent
+    _loop = asyncio.new_event_loop()
+    asyncio.set_event_loop(_loop)
 
-
+    _mesh = Mesh(mode="p2p", config={"bind_port": 7950})
+    _agent = _mesh.add(MyAgent())
 
-
-
-    mesh = Mesh(mode="autonomous")
-    mesh.add(CalculatorAgent)
-    mesh.add(DataAgent)
-    await mesh.start()
+    _loop.run_until_complete(_mesh.start())
+    _loop.run_until_complete(_agent.run())
 
-
-
-
-        "agent": "calculator",
-        "task": "Calculate 5 factorial"
-    },
-    {
-        "id": "step2",
-        "agent": "data_analyst",
-        "task": "Take the result from step1 and calculate its square root",
-        "dependencies": ["step1"] # Waits for step1 to complete
-    }
-])
-
-print(f"Factorial(5): {results[0]['output']}") # 120
-print(f"Square root: {results[1]['output']:.2f}") # 10.95
+# Start mesh in background thread
+_thread = threading.Thread(target=_start_mesh, daemon=True)
+_thread.start()
 
-
+@app.route("/process", methods=["POST"])
+def process():
+    future = asyncio.run_coroutine_threadsafe(
+        _agent.peers.request("analyst", {"task": "analyze"}),
+        _loop
+    )
+    result = future.result(timeout=30)
+    return jsonify(result)
 ```
 
+### Framework Recommendation
+
+| Use Case | Recommended Approach |
+|----------|---------------------|
+| FastAPI project | FastAPI + JarvisLifespan |
+| Existing async app | Manual mesh lifecycle |
+| Existing Flask/Django | Background thread pattern |
+| CLI tool / script | Standalone asyncio.run() |
+
+**For more:** See [CustomAgent Guide](CUSTOMAGENT_GUIDE.md) for detailed integration examples.
+
 ---
 
 ## Key Concepts
 
 ### 1. AutoAgent Profile
 
-The `AutoAgent` profile handles the "prompt
+The `AutoAgent` profile handles the "prompt -> code -> result" workflow automatically:
 
 ```python
 class MyAgent(AutoAgent):
@@ -473,32 +482,24 @@ class MyAgent(CustomAgent):
     role = "unique_name"
     capabilities = ["skill1", "skill2"]
 
-
-
-
-    async def run(self): # For continuous loop (p2p)
-        while not self.shutdown_requested:
-            msg = await self.peers.receive(timeout=0.5)
-            ...
-```
-
-### 3. ListenerAgent Profile
-
-The `ListenerAgent` profile is for API-first agents - just implement handlers:
+    # For P2P messaging - handle requests from other agents
+    async def on_peer_request(self, msg):
+        return {"result": ...} # Return value sent as response
 
-
-
-
-    capabilities = ["skill1", "skill2"]
+    # For P2P messaging - handle notifications (fire-and-forget)
+    async def on_peer_notify(self, msg):
+        await self.log(msg.data)
 
-
-
+    # For workflow tasks
+    async def execute_task(self, task):
+        return {"status": "success", "output": ...}
 
-
-
+    # Configuration
+    listen_timeout = 1.0 # Seconds to wait for messages
+    auto_respond = True # Auto-send on_peer_request return value
 ```
 
-###
+### 3. Mesh
 
 The `Mesh` is the orchestrator that manages agents and workflows:
 
@@ -512,10 +513,10 @@ await mesh.stop() # Cleanup
 
 **Modes:**
 - `autonomous`: Workflow engine only (AutoAgent)
-- `p2p`: P2P coordinator for agent-to-agent communication (CustomAgent
-- `distributed`: Both workflow engine AND P2P (CustomAgent
+- `p2p`: P2P coordinator for agent-to-agent communication (CustomAgent)
+- `distributed`: Both workflow engine AND P2P (CustomAgent)
 
-###
+### 4. Workflow
 
 A workflow is a list of tasks to execute:
 
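To make the three modes concrete, here are the constructor calls as they appear across this diff's own examples (the distributed config keys are from the Step 5 hunk):

```python
from jarviscore import Mesh

# Workflow engine only (AutoAgent pipelines)
mesh = Mesh(mode="autonomous")

# P2P coordinator only (CustomAgent handlers)
mesh = Mesh(mode="p2p", config={'bind_port': 7950})

# Workflow engine AND P2P (CustomAgent with execute_task + handlers)
mesh = Mesh(mode="distributed", config={
    'bind_port': 7950,
    'node_name': 'custom-node',
})
```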
@@ -529,7 +530,7 @@ results = await mesh.workflow("workflow-id", [
 ])
 ```
 
-###
+### 5. Results
 
 Each task returns a result dict:
 
@@ -546,48 +547,6 @@ Each task returns a result dict:
 
 ---
 
-## Configuration Options
-
-### Sandbox Mode
-
-Choose between local or remote code execution:
-
-```bash
-# Local (default) - runs on your machine
-SANDBOX_MODE=local
-
-# Remote - runs on Azure Container Apps (more secure)
-SANDBOX_MODE=remote
-SANDBOX_SERVICE_URL=https://your-sandbox-service.com
-```
-
-### LLM Settings
-
-Fine-tune LLM behavior:
-
-```bash
-# Model selection (provider-specific)
-CLAUDE_MODEL=claude-sonnet-4 # Claude
-AZURE_DEPLOYMENT=gpt-4o # Azure
-GEMINI_MODEL=gemini-2.0-flash # Gemini
-LLM_MODEL=Qwen/Qwen2.5-Coder-32B # vLLM
-
-# Generation parameters
-LLM_TEMPERATURE=0.0 # 0.0 = deterministic, 1.0 = creative
-LLM_MAX_TOKENS=2000 # Max response length
-LLM_TIMEOUT=120 # Request timeout (seconds)
-```
-
-### Logging
-
-Control log verbosity:
-
-```bash
-LOG_LEVEL=INFO # DEBUG, INFO, WARNING, ERROR, CRITICAL
-```
-
----
-
 ## Common Patterns
 
 ### Pattern 1: Error Handling
@@ -600,7 +559,6 @@ try:
 
     if results[0]['status'] == 'failure':
         print(f"Error: {results[0]['error']}")
-        # The agent automatically attempted repairs
         print(f"Repair attempts: {results[0]['repairs']}")
 
 except Exception as e:
@@ -619,29 +577,25 @@ results = await mesh.workflow("dynamic", [
 print(results[0]['output']) # 78.54
 ```
 
-### Pattern 3:
-
-Pass data between steps:
+### Pattern 3: Multi-Step Workflow
 
 ```python
-results = await mesh.workflow("
+results = await mesh.workflow("multi-step", [
     {
-        "id": "
-        "agent": "
-        "task": "
+        "id": "step1",
+        "agent": "calculator",
+        "task": "Calculate 5 factorial"
     },
     {
-        "id": "
+        "id": "step2",
         "agent": "data_analyst",
-        "task": "
-        "dependencies": ["
+        "task": "Take the result from step1 and calculate its square root",
+        "dependencies": ["step1"] # Waits for step1 to complete
     }
 ])
 
-
-
-print(f"Numbers: {numbers}")
-print(f"Statistics: {stats}")
+print(f"Factorial(5): {results[0]['output']}") # 120
+print(f"Square root: {results[1]['output']:.2f}") # 10.95
 ```
 
 ---
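Patterns 1 and 3 compose: a short sketch, not from the guide, that walks multi-step results using the result fields shown above (`status`, `error`, `repairs`, `output`):

```python
# Check each step of a multi-step run; field names as in Pattern 1.
for step_id, result in zip(["step1", "step2"], results):
    if result['status'] == 'failure':
        print(f"{step_id} failed: {result['error']} (repairs: {result['repairs']})")
    else:
        print(f"{step_id} output: {result['output']}")
```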
@@ -663,138 +617,41 @@ ls -la logs/
 cat logs/<agent>/<latest>.json
 ```
 
-Run smoke test with verbose output:
-```bash
-python -m jarviscore.cli.smoketest --verbose
-```
-
 ### Issue: Slow execution
 
-**Causes:**
-1. LLM latency (2-5s per request)
-2. Complex prompts
-3. Network issues
-
 **Solutions:**
 - Use faster models (Claude Haiku, Gemini Flash)
 - Simplify prompts
 - Use local vLLM for zero-latency
 
-### Issue: Generated code has errors
-
-**Good news:** AutoAgent automatically attempts to fix errors!
-
-It will:
-1. Detect the error
-2. Ask the LLM to fix the code
-3. Retry execution (up to 3 times)
-
-Check `repairs` in the result to see how many fixes were needed.
-
 ---
 
 ## Next Steps
 
-1. **
-2. **
-3. **User Guide**: Complete documentation
+1. **CustomAgent Guide**: P2P and distributed with your code -> [CUSTOMAGENT_GUIDE.md](CUSTOMAGENT_GUIDE.md)
+2. **AutoAgent Guide**: Multi-node distributed mode -> [AUTOAGENT_GUIDE.md](AUTOAGENT_GUIDE.md)
+3. **User Guide**: Complete documentation -> [USER_GUIDE.md](USER_GUIDE.md)
 4. **API Reference**: [API_REFERENCE.md](API_REFERENCE.md)
-5. **
-6. **Examples**: Check out `examples/` directory
+5. **Examples**: Check out `examples/` directory
 
 ---
 
 ## Best Practices
 
-###
+### DO
 
 - **Be specific in prompts**: "Calculate factorial of 10" > "Do math"
 - **Test with simple tasks first**: Validate your setup works
 - **Use appropriate models**: Haiku/Flash for simple tasks, Opus/GPT-4 for complex
-- **
-- **Read error messages**: They contain helpful hints
+- **Use async frameworks**: FastAPI, aiohttp for best experience
 
-###
+### DON'T
 
 - **Use vague prompts**: "Do something" won't work well
 - **Expect instant results**: LLM generation takes 2-5 seconds
 - **Skip validation**: Always run health check after setup
 - **Commit API keys**: Keep `.env` out of version control
-- **Ignore logs**: They help debug issues
-
----
-
-## FAQ
-
-### Q: How much does it cost?
-
-**A:** Depends on your LLM provider:
-- **Claude**: ~$3-15 per million tokens (most expensive but best quality)
-- **Azure**: ~$3-15 per million tokens (enterprise-grade)
-- **Gemini**: $0.10-5 per million tokens (cheapest cloud option)
-- **vLLM**: FREE (self-hosted, no API costs)
-
-A typical simple task uses ~500 tokens = $0.0015 with Claude.
-
-### Q: Is the code execution safe?
-
-**A:** Yes! Code runs in an isolated sandbox:
-- **Local mode**: Restricted Python environment (no file/network access)
-- **Remote mode**: Azure Container Apps (fully isolated containers)
-
-### Q: Can I use my own LLM?
-
-**A:** Yes! Point `LLM_ENDPOINT` to any OpenAI-compatible API:
-```bash
-LLM_ENDPOINT=http://localhost:8000 # Local vLLM
-LLM_ENDPOINT=https://your-api.com # Custom endpoint
-```
-
-### Q: What if the LLM generates bad code?
-
-**A:** AutoAgent automatically detects and fixes errors:
-1. Catches syntax/runtime errors
-2. Sends error to LLM with fix instructions
-3. Retries with corrected code (up to 3 attempts)
-
-Check `repairs` in the result to see how many fixes were needed.
-
-### Q: Can I see the generated code?
-
-**A:** Yes! It's in the result:
-```python
-result = results[0]
-print(result['code']) # Shows the generated Python code
-```
-
-Or check logs:
-```bash
-cat logs/<agent-role>/<result-id>.json
-```
-
-### Q: How do I deploy this in production?
-
-**A:** See the User Guide for:
-- Remote sandbox configuration (Azure Container Apps)
-- High-availability setup
-- Monitoring and logging
-- Cost optimization
-
----
-
-## Support
-
-Need help?
-
-1. **Check docs**: [USER_GUIDE.md](USER_GUIDE.md) | [TROUBLESHOOTING.md](TROUBLESHOOTING.md)
-2. **Run diagnostics**:
-```bash
-python -m jarviscore.cli.check --verbose
-python -m jarviscore.cli.smoketest --verbose
-```
-3. **Check logs**: `cat logs/<agent>/<latest>.json`
-4. **Report issues**: [GitHub Issues](https://github.com/Prescott-Data/jarviscore-framework/issues)
 
 ---
 
-
+**Happy building with JarvisCore!**