jarviscore-framework 0.2.1__py3-none-any.whl → 0.3.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (37)
  1. examples/cloud_deployment_example.py +162 -0
  2. examples/customagent_cognitive_discovery_example.py +343 -0
  3. examples/fastapi_integration_example.py +570 -0
  4. jarviscore/__init__.py +19 -5
  5. jarviscore/cli/smoketest.py +8 -4
  6. jarviscore/core/agent.py +227 -0
  7. jarviscore/core/mesh.py +9 -0
  8. jarviscore/data/examples/cloud_deployment_example.py +162 -0
  9. jarviscore/data/examples/custom_profile_decorator.py +134 -0
  10. jarviscore/data/examples/custom_profile_wrap.py +168 -0
  11. jarviscore/data/examples/customagent_cognitive_discovery_example.py +343 -0
  12. jarviscore/data/examples/fastapi_integration_example.py +570 -0
  13. jarviscore/docs/API_REFERENCE.md +283 -3
  14. jarviscore/docs/CHANGELOG.md +139 -0
  15. jarviscore/docs/CONFIGURATION.md +1 -1
  16. jarviscore/docs/CUSTOMAGENT_GUIDE.md +997 -85
  17. jarviscore/docs/GETTING_STARTED.md +228 -267
  18. jarviscore/docs/TROUBLESHOOTING.md +1 -1
  19. jarviscore/docs/USER_GUIDE.md +153 -8
  20. jarviscore/integrations/__init__.py +16 -0
  21. jarviscore/integrations/fastapi.py +247 -0
  22. jarviscore/p2p/broadcaster.py +10 -3
  23. jarviscore/p2p/coordinator.py +310 -14
  24. jarviscore/p2p/keepalive.py +45 -23
  25. jarviscore/p2p/peer_client.py +311 -12
  26. jarviscore/p2p/swim_manager.py +9 -4
  27. jarviscore/profiles/__init__.py +7 -1
  28. jarviscore/profiles/customagent.py +295 -74
  29. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.1.dist-info}/METADATA +66 -18
  30. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.1.dist-info}/RECORD +37 -22
  31. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.1.dist-info}/WHEEL +1 -1
  32. tests/test_13_dx_improvements.py +554 -0
  33. tests/test_14_cloud_deployment.py +403 -0
  34. tests/test_15_llm_cognitive_discovery.py +684 -0
  35. tests/test_16_unified_dx_flow.py +947 -0
  36. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.1.dist-info}/licenses/LICENSE +0 -0
  37. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.1.dist-info}/top_level.txt +0 -0
jarviscore/docs/GETTING_STARTED.md
@@ -11,17 +11,20 @@ Build your first AI agent in 5 minutes!
  | Profile | Best For | LLM Required |
  |---------|----------|--------------|
  | **AutoAgent** | Rapid prototyping, LLM generates code from prompts | Yes |
- | **CustomAgent** | Existing code, full control (LangChain, CrewAI, etc.) | Optional |
+ | **CustomAgent** | Your own code with P2P handlers or workflow tasks | Optional |

  ### Execution Modes (How agents are orchestrated)

  | Mode | Use Case | Start Here |
  |------|----------|------------|
- | **Autonomous** | Single machine, simple pipelines | This guide |
+ | **Autonomous** | Single machine, simple pipelines | This guide |
  | **P2P** | Direct agent communication, swarms | [CustomAgent Guide](CUSTOMAGENT_GUIDE.md) |
- | **Distributed** | Multi-node production systems | [AutoAgent Guide](AUTOAGENT_GUIDE.md) |
+ | **Distributed** | Multi-node production systems | [CustomAgent Guide](CUSTOMAGENT_GUIDE.md) |

- **Recommendation:** Start with **AutoAgent + Autonomous mode** below, then explore other modes.
+ **Recommendation:**
+ - **New to agents?** Start with **AutoAgent + Autonomous mode** below
+ - **Have existing code?** Jump to **CustomAgent** section
+ - **Building APIs?** See **CustomAgent + FastAPI** below

  ---

@@ -38,8 +41,8 @@ An **AutoAgent** that takes natural language prompts and automatically:

  ## Prerequisites

- - Python 3.10 or higher
- - An API key from one of these LLM providers:
+ - Python 3.10 or higher
+ - An API key from one of these LLM providers:
    - [Claude (Anthropic)](https://console.anthropic.com/) - Recommended
    - [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service)
    - [Google Gemini](https://ai.google.dev/)
@@ -93,7 +96,7 @@ LLM_MODEL=Qwen/Qwen2.5-Coder-32B-Instruct
  ```

  **Tip:** JarvisCore automatically tries providers in this order:
- Claude Azure Gemini vLLM
+ Claude -> Azure -> Gemini -> vLLM

  ---

@@ -111,11 +114,11 @@ python -m jarviscore.cli.check --validate-llm

  You should see:
  ```
- Python Version: OK
- JarvisCore Package: OK
- Dependencies: OK
- .env File: OK
- Claude/Azure/Gemini: OK
+ Python Version: OK
+ JarvisCore Package: OK
+ Dependencies: OK
+ .env File: OK
+ Claude/Azure/Gemini: OK
  ```

  Run the smoke test for end-to-end validation:
@@ -124,7 +127,7 @@ Run the smoke test for end-to-end validation:
  python -m jarviscore.cli.smoketest
  ```

- **If all tests pass**, you're ready to build agents!
+ **If all tests pass**, you're ready to build agents!

  ---

@@ -195,14 +198,16 @@ Output: 3628800
  Execution time: 4.23s
  ```

- **🎉 Congratulations!** You just built an AI agent with zero manual coding!
+ **Congratulations!** You just built an AI agent with zero manual coding!

  ---

- ## Step 5: Try CustomAgent (Alternative Path)
+ ## Step 5: CustomAgent (Your Own Code)

  If you have existing agents or don't need LLM code generation, use **CustomAgent**:

+ ### Workflow Mode (execute_task)
+
  ```python
  import asyncio
  from jarviscore import Mesh
@@ -221,7 +226,6 @@ class MyAgent(CustomAgent):


  async def main():
-     # CustomAgent uses "distributed" (workflow + P2P) or "p2p" (P2P only)
      mesh = Mesh(mode="distributed", config={
          'bind_port': 7950,
          'node_name': 'custom-node',
@@ -237,137 +241,230 @@ async def main():
      await mesh.stop()


+ asyncio.run(main())
+ ```
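The hunk context above elides the middle of this workflow-mode example (the `execute_task` method and the `mesh.workflow` call). Based on the `execute_task` contract and the workflow calls shown elsewhere in this guide, the elided portion plausibly looks like this sketch; the role and task strings are illustrative, not taken from the diff:

```python
# Sketch only: reconstructs the elided middle of the workflow-mode example,
# following the execute_task/mesh.workflow patterns shown in this guide.
import asyncio
from jarviscore import Mesh
from jarviscore.profiles import CustomAgent


class MyAgent(CustomAgent):
    role = "worker"                 # illustrative role name
    capabilities = ["processing"]

    async def execute_task(self, task):
        # 'task' is the workflow step dict; return a result dict
        return {"status": "success", "output": str(task.get("task", "")).upper()}


async def main():
    mesh = Mesh(mode="distributed", config={'bind_port': 7950, 'node_name': 'custom-node'})
    mesh.add(MyAgent)
    await mesh.start()

    results = await mesh.workflow("custom-workflow", [
        {"agent": "worker", "task": "process this text"},
    ])
    print(results[0]['output'])

    await mesh.stop()


asyncio.run(main())
```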
+
+ ### P2P Mode (on_peer_request)
+
+ ```python
+ import asyncio
+ from jarviscore import Mesh
+ from jarviscore.profiles import CustomAgent
+
+
+ class MyAgent(CustomAgent):
+     role = "processor"
+     capabilities = ["data_processing"]
+
+     async def on_peer_request(self, msg):
+         """Handle requests from other agents."""
+         data = msg.data.get("data", [])
+         return {"result": [x * 2 for x in data]}
+
+
+ async def main():
+     mesh = Mesh(mode="p2p", config={'bind_port': 7950})
+     mesh.add(MyAgent)
+     await mesh.start()
+
+     # Agent listens for peer requests
+     print("Agent running. Press Ctrl+C to stop.")
+     await mesh.agents[0].run()
+
+     await mesh.stop()
+
+
  asyncio.run(main())
  ```
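Another agent in the same mesh can invoke this handler by role. A minimal sketch, reusing the `peers.request` call pattern that appears in the FastAPI section below (the `caller` role is illustrative):

```python
from jarviscore.profiles import CustomAgent


class CallerAgent(CustomAgent):
    role = "caller"                    # illustrative
    capabilities = ["orchestration"]

    async def execute_task(self, task):
        # Ask the "processor" agent above to double a list of numbers.
        reply = await self.peers.request("processor", {"data": [1, 2, 3]})
        return {"status": "success", "output": reply}  # {"result": [2, 4, 6]}
```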

  **Key Benefits:**
- - No LLM API required (no costs!)
  - Keep your existing logic
  - Works with any framework (LangChain, CrewAI, etc.)

- **For more:** See [CustomAgent Guide](CUSTOMAGENT_GUIDE.md) for P2P mode and multi-node examples.
-
  ---

- ## What Just Happened?
+ ## Step 6: CustomAgent + FastAPI (API-First)

- Behind the scenes, JarvisCore:
+ Building an API where agents run in the background? JarvisCore makes it easy.

- 1. **Received your prompt**: "Calculate the factorial of 10"
- 2. **Generated Python code** using Claude/Azure/Gemini:
-    ```python
-    def factorial(n):
-        if n == 0 or n == 1:
-            return 1
-        return n * factorial(n - 1)
+ ### FastAPI Integration (3 Lines)

-    result = factorial(10)
-    ```
- 3. **Executed the code** safely in a sandbox
- 4. **Returned the result**: 3628800
+ ```python
+ from fastapi import FastAPI, Request
+ from jarviscore.profiles import CustomAgent
+ from jarviscore.integrations.fastapi import JarvisLifespan

- All from a single natural language prompt!

- ---
+ class ProcessorAgent(CustomAgent):
+     role = "processor"
+     capabilities = ["data_processing"]

- ## Step 5: Try More Complex AutoAgent Profile Examples
+     async def on_peer_request(self, msg):
+         """Handle requests from other agents in the mesh."""
+         return {"processed": msg.data.get("task", "").upper()}

- ### Example 1: Data Processing

- ```python
- class DataAgent(AutoAgent):
-     role = "data_analyst"
-     capabilities = ["data_processing", "statistics"]
-     system_prompt = "You are a data analyst. Generate Python code for data tasks."
+ # Create agent and integrate with FastAPI
+ agent = ProcessorAgent()
+ app = FastAPI(lifespan=JarvisLifespan(agent, mode="p2p", bind_port=7950))


- async def analyze_data():
-     mesh = Mesh(mode="autonomous")
-     mesh.add(DataAgent)
-     await mesh.start()
+ @app.post("/process")
+ async def process(data: dict, request: Request):
+     # Access your agent from the request
+     agent = request.app.state.jarvis_agents["processor"]
+     # Call another agent in the mesh
+     result = await agent.peers.request("analyst", {"task": data.get("task")})
+     return result

-     results = await mesh.workflow("data-workflow", [
-         {
-             "agent": "data_analyst",
-             "task": """
-             Given this list: [10, 20, 30, 40, 50]
-             Calculate: mean, median, min, max, sum
-             Return as a dict
-             """
-         }
-     ])

-     print(results[0]['output'])
-     # Output: {'mean': 30.0, 'median': 30, 'min': 10, 'max': 50, 'sum': 150}
+ @app.get("/peers")
+ async def list_peers(request: Request):
+     agent = request.app.state.jarvis_agents["processor"]
+     return {"peers": agent.peers.list_peers()}
+ ```

-     await mesh.stop()
+ Run with: `uvicorn myapp:app --host 0.0.0.0 --port 8000`
+
+ **What you get:**
+ - HTTP endpoints (FastAPI routes) as primary interface
+ - P2P mesh participation in background
+ - Auto message dispatch to handlers
+ - Graceful startup/shutdown handled by JarvisLifespan
+
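Once the server is running, any HTTP client can drive the mesh through these routes. A quick sketch using `httpx` (an arbitrary client choice, not a framework requirement); it assumes an `analyst` agent is also running in the mesh, since `/process` forwards to it:

```python
import httpx

# POST to the /process route, which relays the task to the "analyst" peer
resp = httpx.post("http://localhost:8000/process",
                  json={"task": "summarize quarterly numbers"})
print(resp.json())

# List the peers this node currently sees
print(httpx.get("http://localhost:8000/peers").json())
```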
+ ---
+
+ ## Step 7: Framework Integration Patterns
+
+ JarvisCore is **async-first**. Here's how to integrate with different frameworks:
+
+ ### Pattern 1: FastAPI (Recommended)
+
+ ```python
+ from fastapi import FastAPI
+ from jarviscore.profiles import CustomAgent
+ from jarviscore.integrations.fastapi import JarvisLifespan
+
+ class MyAgent(CustomAgent):
+     role = "processor"
+     capabilities = ["processing"]
+
+     async def on_peer_request(self, msg):
+         return {"result": msg.data}
+
+ app = FastAPI(lifespan=JarvisLifespan(MyAgent(), mode="p2p", bind_port=7950))
  ```

- ### Example 2: Text Processing
+ ### Pattern 2: Other Async Frameworks (aiohttp, Quart, Tornado)

  ```python
- class TextAgent(AutoAgent):
-     role = "text_processor"
-     capabilities = ["text", "nlp"]
-     system_prompt = "You are a text processing expert."
+ # aiohttp example
+ import asyncio
+ from aiohttp import web
+ from jarviscore import Mesh
+ from jarviscore.profiles import CustomAgent

+ class MyAgent(CustomAgent):
+     role = "processor"
+     capabilities = ["processing"]

- async def process_text():
-     mesh = Mesh(mode="autonomous")
-     mesh.add(TextAgent)
-     await mesh.start()
+     async def on_peer_request(self, msg):
+         return {"result": msg.data}

-     results = await mesh.workflow("text-workflow", [
-         {
-             "agent": "text_processor",
-             "task": """
-             Count the words in this sentence:
-             "JarvisCore makes building AI agents incredibly easy"
-             """
-         }
-     ])
+ mesh = None
+ agent = None

-     print(results[0]['output'])  # Output: 7
+ async def on_startup(app):
+     global mesh, agent
+     mesh = Mesh(mode="p2p", config={"bind_port": 7950})
+     agent = mesh.add(MyAgent())
+     await mesh.start()
+     asyncio.create_task(agent.run())
+     app['agent'] = agent

+ async def on_cleanup(app):
+     agent.request_shutdown()
      await mesh.stop()
+
+ async def process_handler(request):
+     agent = request.app['agent']
+     result = await agent.peers.request("analyst", {"task": "analyze"})
+     return web.json_response(result)
+
+ app = web.Application()
+ app.on_startup.append(on_startup)
+ app.on_cleanup.append(on_cleanup)
+ app.router.add_post('/process', process_handler)
  ```
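To actually serve this application, the standard aiohttp entry point applies: `web.run_app(app, port=8080)` (the port is an arbitrary choice here).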

- ### Example 3: Multi-Step Workflow
+ ### Pattern 3: Sync Frameworks (Flask, Django)

  ```python
- async def multi_step_workflow():
-     mesh = Mesh(mode="autonomous")
-     mesh.add(CalculatorAgent)
-     mesh.add(DataAgent)
-     await mesh.start()
+ # Flask example - requires background thread
+ import asyncio
+ import threading
+ from flask import Flask, jsonify
+ from jarviscore import Mesh
+ from jarviscore.profiles import CustomAgent

-     results = await mesh.workflow("multi-step", [
-         {
-             "id": "step1",
-             "agent": "calculator",
-             "task": "Calculate 5 factorial"
-         },
-         {
-             "id": "step2",
-             "agent": "data_analyst",
-             "task": "Take the result from step1 and calculate its square root",
-             "dependencies": ["step1"]  # Waits for step1 to complete
-         }
-     ])
+ app = Flask(__name__)

-     print(f"Factorial(5): {results[0]['output']}")  # 120
-     print(f"Square root: {results[1]['output']:.2f}")  # 10.95
+ class MyAgent(CustomAgent):
+     role = "processor"
+     capabilities = ["processing"]

-     await mesh.stop()
+     async def on_peer_request(self, msg):
+         return {"result": msg.data}
+
+ # Global state
+ _loop = None
+ _mesh = None
+ _agent = None
+
+ def _start_mesh():
+     """Run in background thread."""
+     global _loop, _mesh, _agent
+     _loop = asyncio.new_event_loop()
+     asyncio.set_event_loop(_loop)
+
+     _mesh = Mesh(mode="p2p", config={"bind_port": 7950})
+     _agent = _mesh.add(MyAgent())
+
+     _loop.run_until_complete(_mesh.start())
+     _loop.run_until_complete(_agent.run())
+
+ # Start mesh in background thread
+ _thread = threading.Thread(target=_start_mesh, daemon=True)
+ _thread.start()
+
+ @app.route("/process", methods=["POST"])
+ def process():
+     future = asyncio.run_coroutine_threadsafe(
+         _agent.peers.request("analyst", {"task": "analyze"}),
+         _loop
+     )
+     result = future.result(timeout=30)
+     return jsonify(result)
  ```
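One caveat with this sketch: `_loop` and `_agent` are populated from the background thread, so a request that arrives before the mesh finishes starting would still see them as `None`. A hardened version would set a `threading.Event` once `_mesh.start()` completes and have `process()` wait on it before submitting work.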

+ ### Framework Recommendation
+
+ | Use Case | Recommended Approach |
+ |----------|---------------------|
+ | FastAPI project | FastAPI + JarvisLifespan |
+ | Existing async app | Manual mesh lifecycle |
+ | Existing Flask/Django | Background thread pattern |
+ | CLI tool / script | Standalone asyncio.run() |
+
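The last row is the pattern already shown in Step 5: construct the `Mesh`, register your agent with `mesh.add(...)`, `await mesh.start()`, and drive everything from a single `asyncio.run(main())` entry point.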
+ **For more:** See [CustomAgent Guide](CUSTOMAGENT_GUIDE.md) for detailed integration examples.
+
  ---

  ## Key Concepts

  ### 1. AutoAgent Profile

- The `AutoAgent` profile handles the "prompt code result" workflow automatically:
+ The `AutoAgent` profile handles the "prompt -> code -> result" workflow automatically:

  ```python
  class MyAgent(AutoAgent):
@@ -385,13 +482,21 @@ class MyAgent(CustomAgent):
      role = "unique_name"
      capabilities = ["skill1", "skill2"]

-     async def execute_task(self, task):  # For workflow steps (distributed)
+     # For P2P messaging - handle requests from other agents
+     async def on_peer_request(self, msg):
+         return {"result": ...}  # Return value sent as response
+
+     # For P2P messaging - handle notifications (fire-and-forget)
+     async def on_peer_notify(self, msg):
+         await self.log(msg.data)
+
+     # For workflow tasks
+     async def execute_task(self, task):
          return {"status": "success", "output": ...}

-     async def run(self):  # For continuous loop (p2p)
-         while not self.shutdown_requested:
-             msg = await self.peers.receive(timeout=0.5)
-             ...
+     # Configuration
+     listen_timeout = 1.0  # Seconds to wait for messages
+     auto_respond = True   # Auto-send on_peer_request return value
  ```

  ### 3. Mesh
@@ -442,48 +547,6 @@ Each task returns a result dict:

  ---

- ## Configuration Options
-
- ### Sandbox Mode
-
- Choose between local or remote code execution:
-
- ```bash
- # Local (default) - runs on your machine
- SANDBOX_MODE=local
-
- # Remote - runs on Azure Container Apps (more secure)
- SANDBOX_MODE=remote
- SANDBOX_SERVICE_URL=https://your-sandbox-service.com
- ```
-
- ### LLM Settings
-
- Fine-tune LLM behavior:
-
- ```bash
- # Model selection (provider-specific)
- CLAUDE_MODEL=claude-sonnet-4     # Claude
- AZURE_DEPLOYMENT=gpt-4o          # Azure
- GEMINI_MODEL=gemini-2.0-flash    # Gemini
- LLM_MODEL=Qwen/Qwen2.5-Coder-32B # vLLM
-
- # Generation parameters
- LLM_TEMPERATURE=0.0   # 0.0 = deterministic, 1.0 = creative
- LLM_MAX_TOKENS=2000   # Max response length
- LLM_TIMEOUT=120       # Request timeout (seconds)
- ```
-
- ### Logging
-
- Control log verbosity:
-
- ```bash
- LOG_LEVEL=INFO   # DEBUG, INFO, WARNING, ERROR, CRITICAL
- ```
-
- ---
-
  ## Common Patterns

  ### Pattern 1: Error Handling
@@ -496,7 +559,6 @@ try:

      if results[0]['status'] == 'failure':
          print(f"Error: {results[0]['error']}")
-         # The agent automatically attempted repairs
          print(f"Repair attempts: {results[0]['repairs']}")

  except Exception as e:
@@ -515,29 +577,25 @@ results = await mesh.workflow("dynamic", [
  print(results[0]['output'])  # 78.54
  ```

- ### Pattern 3: Context Passing
-
- Pass data between steps:
+ ### Pattern 3: Multi-Step Workflow

  ```python
- results = await mesh.workflow("context-workflow", [
+ results = await mesh.workflow("multi-step", [
      {
-         "id": "generate",
-         "agent": "data_analyst",
-         "task": "Generate a list of 10 random numbers between 1 and 100"
+         "id": "step1",
+         "agent": "calculator",
+         "task": "Calculate 5 factorial"
      },
      {
-         "id": "analyze",
+         "id": "step2",
          "agent": "data_analyst",
-         "task": "Calculate statistics on the numbers from step 'generate'",
-         "dependencies": ["generate"]
+         "task": "Take the result from step1 and calculate its square root",
+         "dependencies": ["step1"]  # Waits for step1 to complete
      }
  ])

- numbers = results[0]['output']
- stats = results[1]['output']
- print(f"Numbers: {numbers}")
- print(f"Statistics: {stats}")
+ print(f"Factorial(5): {results[0]['output']}")    # 120
+ print(f"Square root: {results[1]['output']:.2f}")  # 10.95
  ```

  ---
@@ -559,138 +617,41 @@ ls -la logs/
  cat logs/<agent>/<latest>.json
  ```

- Run smoke test with verbose output:
- ```bash
- python -m jarviscore.cli.smoketest --verbose
- ```
-
  ### Issue: Slow execution

- **Causes:**
- 1. LLM latency (2-5s per request)
- 2. Complex prompts
- 3. Network issues
-
  **Solutions:**
  - Use faster models (Claude Haiku, Gemini Flash)
  - Simplify prompts
  - Use local vLLM for zero-latency

- ### Issue: Generated code has errors
-
- **Good news:** AutoAgent automatically attempts to fix errors!
-
- It will:
- 1. Detect the error
- 2. Ask the LLM to fix the code
- 3. Retry execution (up to 3 times)
-
- Check `repairs` in the result to see how many fixes were needed.
-
  ---

  ## Next Steps

- 1. **AutoAgent Guide**: Multi-node distributed mode [AUTOAGENT_GUIDE.md](AUTOAGENT_GUIDE.md)
- 2. **CustomAgent Guide**: P2P and distributed with your code → [CUSTOMAGENT_GUIDE.md](CUSTOMAGENT_GUIDE.md)
- 3. **User Guide**: Complete documentation [USER_GUIDE.md](USER_GUIDE.md)
+ 1. **CustomAgent Guide**: P2P and distributed with your code -> [CUSTOMAGENT_GUIDE.md](CUSTOMAGENT_GUIDE.md)
+ 2. **AutoAgent Guide**: Multi-node distributed mode -> [AUTOAGENT_GUIDE.md](AUTOAGENT_GUIDE.md)
+ 3. **User Guide**: Complete documentation -> [USER_GUIDE.md](USER_GUIDE.md)
  4. **API Reference**: [API_REFERENCE.md](API_REFERENCE.md)
- 5. **Configuration**: [CONFIGURATION.md](CONFIGURATION.md)
- 6. **Examples**: Check out `examples/` directory
+ 5. **Examples**: Check out `examples/` directory

  ---

  ## Best Practices

- ### DO
+ ### DO

  - **Be specific in prompts**: "Calculate factorial of 10" > "Do math"
  - **Test with simple tasks first**: Validate your setup works
  - **Use appropriate models**: Haiku/Flash for simple tasks, Opus/GPT-4 for complex
- - **Monitor costs**: Check LLM usage if using paid APIs
- - **Read error messages**: They contain helpful hints
+ - **Use async frameworks**: FastAPI, aiohttp for best experience

- ### DON'T
+ ### DON'T

  - **Use vague prompts**: "Do something" won't work well
  - **Expect instant results**: LLM generation takes 2-5 seconds
  - **Skip validation**: Always run health check after setup
  - **Commit API keys**: Keep `.env` out of version control
- - **Ignore logs**: They help debug issues
-
- ---
-
- ## FAQ
-
- ### Q: How much does it cost?
-
- **A:** Depends on your LLM provider:
- - **Claude**: ~$3-15 per million tokens (most expensive but best quality)
- - **Azure**: ~$3-15 per million tokens (enterprise-grade)
- - **Gemini**: $0.10-5 per million tokens (cheapest cloud option)
- - **vLLM**: FREE (self-hosted, no API costs)
-
- A typical simple task uses ~500 tokens = $0.0015 with Claude.
-
- ### Q: Is the code execution safe?
-
- **A:** Yes! Code runs in an isolated sandbox:
- - **Local mode**: Restricted Python environment (no file/network access)
- - **Remote mode**: Azure Container Apps (fully isolated containers)
-
- ### Q: Can I use my own LLM?
-
- **A:** Yes! Point `LLM_ENDPOINT` to any OpenAI-compatible API:
- ```bash
- LLM_ENDPOINT=http://localhost:8000  # Local vLLM
- LLM_ENDPOINT=https://your-api.com   # Custom endpoint
- ```
-
- ### Q: What if the LLM generates bad code?
-
- **A:** AutoAgent automatically detects and fixes errors:
- 1. Catches syntax/runtime errors
- 2. Sends error to LLM with fix instructions
- 3. Retries with corrected code (up to 3 attempts)
-
- Check `repairs` in the result to see how many fixes were needed.
-
- ### Q: Can I see the generated code?
-
- **A:** Yes! It's in the result:
- ```python
- result = results[0]
- print(result['code'])  # Shows the generated Python code
- ```
-
- Or check logs:
- ```bash
- cat logs/<agent-role>/<result-id>.json
- ```
-
- ### Q: How do I deploy this in production?
-
- **A:** See the User Guide for:
- - Remote sandbox configuration (Azure Container Apps)
- - High-availability setup
- - Monitoring and logging
- - Cost optimization
-
- ---
-
- ## Support
-
- Need help?
-
- 1. **Check docs**: [USER_GUIDE.md](USER_GUIDE.md) | [TROUBLESHOOTING.md](TROUBLESHOOTING.md)
- 2. **Run diagnostics**:
-    ```bash
-    python -m jarviscore.cli.check --verbose
-    python -m jarviscore.cli.smoketest --verbose
-    ```
- 3. **Check logs**: `cat logs/<agent>/<latest>.json`
- 4. **Report issues**: [GitHub Issues](https://github.com/Prescott-Data/jarviscore-framework/issues)

  ---

- **🚀 Happy building with JarvisCore!**
+ **Happy building with JarvisCore!**
jarviscore/docs/TROUBLESHOOTING.md
@@ -563,4 +563,4 @@ If significantly slower:

  ## Version

- Troubleshooting Guide for JarvisCore v0.2.1
+ Troubleshooting Guide for JarvisCore v0.3.1