jarviscore_framework-0.1.0-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (55)
  1. examples/calculator_agent_example.py +77 -0
  2. examples/multi_agent_workflow.py +132 -0
  3. examples/research_agent_example.py +76 -0
  4. jarviscore/__init__.py +54 -0
  5. jarviscore/cli/__init__.py +7 -0
  6. jarviscore/cli/__main__.py +33 -0
  7. jarviscore/cli/check.py +404 -0
  8. jarviscore/cli/smoketest.py +371 -0
  9. jarviscore/config/__init__.py +7 -0
  10. jarviscore/config/settings.py +128 -0
  11. jarviscore/core/__init__.py +7 -0
  12. jarviscore/core/agent.py +163 -0
  13. jarviscore/core/mesh.py +463 -0
  14. jarviscore/core/profile.py +64 -0
  15. jarviscore/docs/API_REFERENCE.md +932 -0
  16. jarviscore/docs/CONFIGURATION.md +753 -0
  17. jarviscore/docs/GETTING_STARTED.md +600 -0
  18. jarviscore/docs/TROUBLESHOOTING.md +424 -0
  19. jarviscore/docs/USER_GUIDE.md +983 -0
  20. jarviscore/execution/__init__.py +94 -0
  21. jarviscore/execution/code_registry.py +298 -0
  22. jarviscore/execution/generator.py +268 -0
  23. jarviscore/execution/llm.py +430 -0
  24. jarviscore/execution/repair.py +283 -0
  25. jarviscore/execution/result_handler.py +332 -0
  26. jarviscore/execution/sandbox.py +555 -0
  27. jarviscore/execution/search.py +281 -0
  28. jarviscore/orchestration/__init__.py +18 -0
  29. jarviscore/orchestration/claimer.py +101 -0
  30. jarviscore/orchestration/dependency.py +143 -0
  31. jarviscore/orchestration/engine.py +292 -0
  32. jarviscore/orchestration/status.py +96 -0
  33. jarviscore/p2p/__init__.py +23 -0
  34. jarviscore/p2p/broadcaster.py +353 -0
  35. jarviscore/p2p/coordinator.py +364 -0
  36. jarviscore/p2p/keepalive.py +361 -0
  37. jarviscore/p2p/swim_manager.py +290 -0
  38. jarviscore/profiles/__init__.py +6 -0
  39. jarviscore/profiles/autoagent.py +264 -0
  40. jarviscore/profiles/customagent.py +137 -0
  41. jarviscore_framework-0.1.0.dist-info/METADATA +136 -0
  42. jarviscore_framework-0.1.0.dist-info/RECORD +55 -0
  43. jarviscore_framework-0.1.0.dist-info/WHEEL +5 -0
  44. jarviscore_framework-0.1.0.dist-info/licenses/LICENSE +21 -0
  45. jarviscore_framework-0.1.0.dist-info/top_level.txt +3 -0
  46. tests/conftest.py +44 -0
  47. tests/test_agent.py +165 -0
  48. tests/test_autoagent.py +140 -0
  49. tests/test_autoagent_day4.py +186 -0
  50. tests/test_customagent.py +248 -0
  51. tests/test_integration.py +293 -0
  52. tests/test_llm_fallback.py +185 -0
  53. tests/test_mesh.py +356 -0
  54. tests/test_p2p_integration.py +375 -0
  55. tests/test_remote_sandbox.py +116 -0
@@ -0,0 +1,424 @@
# JarvisCore Troubleshooting Guide

Common issues and solutions for AutoAgent/Prompt-Dev users.

---

## Quick Diagnostics

Run these commands to diagnose issues:

```bash
# Check installation and configuration
python -m jarviscore.cli.check

# Test LLM connectivity
python -m jarviscore.cli.check --validate-llm

# Run end-to-end smoke test
python -m jarviscore.cli.smoketest

# Verbose output for debugging
python -m jarviscore.cli.smoketest --verbose
```

---

## Common Issues

### 1. Installation Problems

#### Issue: `ModuleNotFoundError: No module named 'jarviscore'`

**Solution:**
```bash
pip install jarviscore

# Or install in development mode
cd jarviscore
pip install -e .
```

#### Issue: `ImportError: cannot import name 'AutoAgent'`

**Cause:** Old or cached installation

**Solution:**
```bash
pip uninstall jarviscore
pip install jarviscore
```

---

### 2. LLM Configuration Issues

#### Issue: `No LLM provider configured`

**Cause:** Missing API key in `.env`

**Solution:**
1. Copy example config:
```bash
cp .env.example .env
```

2. Add your API key:
```bash
# Choose ONE:
CLAUDE_API_KEY=sk-ant-...
# OR
AZURE_API_KEY=...
# OR
GEMINI_API_KEY=...
```

3. Validate:
```bash
python -m jarviscore.cli.check --validate-llm
```

#### Issue: `Error code: 401 - Unauthorized`

**Cause:** Invalid API key

**Solution:**
1. Verify that your API key is correct (the quick check below confirms the key is actually loaded)
2. Check that it has not expired or been revoked
3. For Azure: ensure AZURE_ENDPOINT and AZURE_DEPLOYMENT are correct

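A 401 often means the key never reached the process (wrong `.env` location, shell quoting, trailing whitespace). A minimal sanity check, reusing the variable names from the configuration examples above:

```bash
# Confirm the key is visible to Python and has no stray whitespace
python -c "import os; k = os.getenv('CLAUDE_API_KEY', ''); print('set' if k else 'MISSING', '| length:', len(k), '| stray whitespace:', k != k.strip())"
```

Swap in `AZURE_API_KEY` or `GEMINI_API_KEY` depending on which provider you configured.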
#### Issue: `Error code: 529 - Overloaded`

**Cause:** The LLM provider is temporarily overloaded (Claude, Azure, etc.)

**Solution:**
- This is temporary; retry after a few seconds
- The smoke test automatically retries up to 3 times
- Consider configuring a backup LLM provider in `.env` (see the sketch below)

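A minimal `.env` sketch with a second provider configured as a fallback; this assumes the framework will use another configured key when the primary provider is unavailable (variable names from the configuration examples above):

```bash
# Primary provider
CLAUDE_API_KEY=sk-ant-...
# Backup provider, available if the primary is overloaded
GEMINI_API_KEY=...
```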
#### Issue: `Error code: 429 - Rate limit exceeded`

**Cause:** Too many requests to the LLM API

**Solution:**
- Wait 60 seconds before retrying (or back off exponentially, as sketched below)
- Check your API plan limits
- Consider upgrading your API plan

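If 429s recur during batch runs, wrapping your own call site with exponential backoff is a simple mitigation. A generic sketch (the `run_task` callable is a placeholder for however you invoke your agent; it is not a JarvisCore API):

```python
import random
import time


def call_with_backoff(fn, max_attempts=5, base_delay=2.0):
    """Retry fn() with exponential backoff and jitter when it raises a rate-limit error."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception as exc:  # ideally narrow this to your client's rate-limit exception
            if attempt == max_attempts or "429" not in str(exc):
                raise
            time.sleep(base_delay * (2 ** (attempt - 1)) + random.uniform(0, 1))


# Placeholder usage: result = call_with_backoff(lambda: run_task("Calculate 2 + 2"))
```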
---

### 3. Execution Errors

#### Issue: `Task failed: Code execution timed out`

**Cause:** Generated code runs longer than timeout (default: 300s)

**Solution:**
Increase timeout in `.env`:
```bash
EXECUTION_TIMEOUT=600  # 10 minutes
```

#### Issue: `Sandbox execution failed: <error>`

**Cause:** Generated code has errors

**What happens:**
- Framework automatically attempts repairs (max 3 attempts)
- If repairs fail, the task fails

**Solution:**
1. Check logs for details:
```bash
ls -la logs/
cat logs/<latest-log>.log
```

2. Make prompt more specific:
```python
task="Calculate factorial of 10. Store result in variable named 'result'."
```

3. Adjust system prompt:
```python
class MyAgent(AutoAgent):
    system_prompt = """
    You are a Python expert. Generate clean, working code.
    - Use only standard library
    - Store final result in 'result' variable
    - Handle edge cases
    """
```

#### Issue: `Maximum repair attempts exceeded`

**Cause:** The LLM was unable to generate working code after 3 attempts

**Solution:**
1. Simplify your task, or split it into smaller steps (see the sketch below)
2. Be more explicit in the prompt
3. Check the logs to see which errors occurred:
```bash
cat logs/<latest-log>.log
```

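One way to simplify is to break a single complex task into smaller workflow steps, using the step format shown in the Workflow Issues section below. The step ids, agent roles, and task text here are illustrative only:

```python
# Illustrative only: two smaller steps instead of one large task
await mesh.workflow("wf-split", [
    {"id": "fetch", "agent": "researcher", "task": "Collect the raw numbers and store them in 'result'."},
    {"id": "analyze", "agent": "calculator", "task": "Compute summary statistics from the previous result.",
     "dependencies": ["fetch"]},
])
```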
---

### 4. Workflow Issues

#### Issue: `Agent not found: <role>`

**Cause:** Agent role mismatch

**Solution:**
```python
# Agent definition
class CalculatorAgent(AutoAgent):
    role = "calculator"  # <-- This name

# Workflow must match
mesh.workflow("wf-1", [
    {"agent": "calculator", "task": "..."}  # <-- Must match role
])
```

#### Issue: `Dependency not satisfied: <step-id>`

**Cause:** Workflow dependency chain broken

**Solution:**
```python
# Ensure dependencies exist
await mesh.workflow("wf-1", [
    {"id": "step1", "agent": "agent1", "task": "..."},
    {"id": "step2", "agent": "agent2", "task": "...",
     "dependencies": ["step1"]}  # step1 must exist
])
```

---

### 5. Environment Issues

#### Issue: `.env file not found`

**Solution:**
```bash
# Create from example
cp .env.example .env

# Or create manually
cat > .env << 'EOF'
CLAUDE_API_KEY=your-key-here
EOF
```

#### Issue: `Environment variable not loading`

**Cause:** `.env` file in wrong location

**Solution:**
Place `.env` in one of these locations:
- Current working directory: `./.env`
- Project root: `jarviscore/.env`

Or set the environment variable directly:
```bash
export CLAUDE_API_KEY=your-key-here
python your_script.py
```

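If the `python-dotenv` package is installed (a common way for frameworks to load `.env` files), you can check which file would be discovered from your current directory:

```bash
# Prints the .env path that would be loaded from here, or a fallback message
python -c "from dotenv import find_dotenv; print(find_dotenv(usecwd=True) or 'no .env found')"
```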
---

### 6. Sandbox Configuration

#### Issue: `Remote sandbox connection failed`

**Cause:** SANDBOX_SERVICE_URL incorrect or service down

**Solution:**
1. Use local sandbox (default):
```bash
SANDBOX_MODE=local
```

2. Or verify remote URL:
```bash
SANDBOX_MODE=remote
SANDBOX_SERVICE_URL=https://your-sandbox-service.com
```

3. Test connectivity:
```bash
curl https://your-sandbox-service.com/health
```

---

### 7. Performance Issues

#### Issue: Code generation is slow (>10 seconds)

**Cause:** LLM latency or complex prompt

**Solutions:**
1. **Use faster model:**
```bash
# Claude
CLAUDE_MODEL=claude-haiku-4

# Gemini
GEMINI_MODEL=gemini-1.5-flash
```

2. **Simplify system prompt:**
- Remove unnecessary instructions
- Be concise but specific

3. **Use local vLLM:**
```bash
LLM_ENDPOINT=http://localhost:8000
LLM_MODEL=Qwen/Qwen2.5-Coder-32B-Instruct
```

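For the local vLLM option, the endpoint above assumes an OpenAI-compatible vLLM server is already running on port 8000. One way to start one (this is standard vLLM tooling, not part of JarvisCore, and requires a GPU large enough for the chosen model):

```bash
# Serve an OpenAI-compatible API on port 8000
python -m vllm.entrypoints.openai.api_server \
  --model Qwen/Qwen2.5-Coder-32B-Instruct \
  --port 8000
```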
#### Issue: High LLM API costs

**Solutions:**
1. Use cheaper models (Haiku, Flash)
2. Set up local vLLM (free)
3. Cache common operations
4. Reduce MAX_REPAIR_ATTEMPTS in `.env`

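For example, a `.env` sketch combining options 1 and 4 (the `MAX_REPAIR_ATTEMPTS` value is illustrative; model name as in the performance section above):

```bash
CLAUDE_MODEL=claude-haiku-4
MAX_REPAIR_ATTEMPTS=1  # fewer repair rounds means fewer LLM calls per failed task
```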
---

### 8. Testing Issues

#### Issue: Smoke test fails but examples work

**Cause:** Temporary LLM or network issues

**Solution:**
- The smoke test is stricter than the examples
- Run it with verbose output to see details:
```bash
python -m jarviscore.cli.smoketest --verbose
```
- If it eventually succeeds on retry, the failure was temporary LLM overload

#### Issue: All tests pass but my agent fails

**Cause:** Task-specific issue

**Solution:**
1. Test with simpler task first:
```python
task="Calculate 2 + 2"  # Simple
```

2. Gradually increase complexity:
```python
task="Calculate factorial of 5"  # Medium
```

3. Check agent logs:
```bash
cat logs/<agent-role>_*.log
```

---

## Debug Mode

Enable verbose logging for detailed diagnostics:

```bash
# In .env
LOG_LEVEL=DEBUG
```

Then check logs:
```bash
tail -f logs/<latest>.log
```

---

## Getting Help

If issues persist:

1. **Check logs:**
```bash
ls -la logs/
cat logs/<latest>.log
```

2. **Run diagnostics:**
```bash
python -m jarviscore.cli.check --verbose
python -m jarviscore.cli.smoketest --verbose
```

3. **Provide this info when asking for help** (the snippet below gathers most of it):
- Python version: `python --version`
- JarvisCore version: `pip show jarviscore`
- LLM provider used (Claude/Azure/Gemini)
- Error message and logs
- Minimal code to reproduce the issue

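A small helper to collect that information in one pass (commands taken from this guide; attach the resulting file to your report):

```bash
# Gather version info and diagnostics into a single file
{
  python --version
  pip show jarviscore
  python -m jarviscore.cli.check --verbose
} > diagnostics.txt 2>&1
```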
4. **Create an issue:**
- GitHub: https://github.com/yourusername/jarviscore/issues
- Include the diagnostics output above

---

## Best Practices to Avoid Issues

1. **Always validate setup first:**
```bash
python -m jarviscore.cli.check --validate-llm
python -m jarviscore.cli.smoketest
```

2. **Use specific prompts:**
- ❌ "Do math"
- ✅ "Calculate the factorial of 10 and store result in 'result' variable"

3. **Start simple, then scale:**
- Test with simple tasks first
- Add complexity gradually
- Monitor logs for warnings

4. **Keep dependencies updated:**
```bash
pip install --upgrade jarviscore
```

5. **Keep `.env` out of version control:**
- Never commit API keys (add `.env` to `.gitignore`, as shown below)
- Use `.env.example` as a template
- Document required variables

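A one-time setup for item 5 (assumes a git repository in the project root):

```bash
# Ignore the local secrets file; keep the template tracked
echo ".env" >> .gitignore
git add .gitignore .env.example
```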
---

## Performance Benchmarks (Expected)

Use these as baselines:

| Operation | Expected Time | Notes |
|-----------|--------------|-------|
| Sandbox execution | 2-5ms | Local code execution |
| Code generation | 2-4s | LLM response time |
| Simple task (e.g., 2+2) | 3-5s | End-to-end |
| Complex task | 5-15s | With potential repairs |
| Multi-step workflow (2 steps) | 7-10s | Sequential execution |

If your numbers are significantly slower (the timing snippet below gives a quick end-to-end measurement):
1. Check network latency
2. Try a different LLM model
3. Consider local vLLM
4. Check LOG_LEVEL (DEBUG is slower)

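A quick way to measure your own end-to-end baseline, using the shell's `time` builtin with the smoke test from this guide:

```bash
# Compare wall-clock time against the "Simple task" row above
time python -m jarviscore.cli.smoketest
```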
---

*Last updated: 2026-01-13*