jarviscore-framework 0.2.1__py3-none-any.whl → 0.3.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (37)
  1. examples/cloud_deployment_example.py +162 -0
  2. examples/customagent_cognitive_discovery_example.py +343 -0
  3. examples/fastapi_integration_example.py +570 -0
  4. jarviscore/__init__.py +19 -5
  5. jarviscore/cli/smoketest.py +8 -4
  6. jarviscore/core/agent.py +227 -0
  7. jarviscore/core/mesh.py +9 -0
  8. jarviscore/data/examples/cloud_deployment_example.py +162 -0
  9. jarviscore/data/examples/custom_profile_decorator.py +134 -0
  10. jarviscore/data/examples/custom_profile_wrap.py +168 -0
  11. jarviscore/data/examples/customagent_cognitive_discovery_example.py +343 -0
  12. jarviscore/data/examples/fastapi_integration_example.py +570 -0
  13. jarviscore/docs/API_REFERENCE.md +283 -3
  14. jarviscore/docs/CHANGELOG.md +139 -0
  15. jarviscore/docs/CONFIGURATION.md +1 -1
  16. jarviscore/docs/CUSTOMAGENT_GUIDE.md +997 -85
  17. jarviscore/docs/GETTING_STARTED.md +228 -267
  18. jarviscore/docs/TROUBLESHOOTING.md +1 -1
  19. jarviscore/docs/USER_GUIDE.md +153 -8
  20. jarviscore/integrations/__init__.py +16 -0
  21. jarviscore/integrations/fastapi.py +247 -0
  22. jarviscore/p2p/broadcaster.py +10 -3
  23. jarviscore/p2p/coordinator.py +310 -14
  24. jarviscore/p2p/keepalive.py +45 -23
  25. jarviscore/p2p/peer_client.py +311 -12
  26. jarviscore/p2p/swim_manager.py +9 -4
  27. jarviscore/profiles/__init__.py +7 -1
  28. jarviscore/profiles/customagent.py +295 -74
  29. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.1.dist-info}/METADATA +66 -18
  30. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.1.dist-info}/RECORD +37 -22
  31. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.1.dist-info}/WHEEL +1 -1
  32. tests/test_13_dx_improvements.py +554 -0
  33. tests/test_14_cloud_deployment.py +403 -0
  34. tests/test_15_llm_cognitive_discovery.py +684 -0
  35. tests/test_16_unified_dx_flow.py +947 -0
  36. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.1.dist-info}/licenses/LICENSE +0 -0
  37. {jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.1.dist-info}/top_level.txt +0 -0
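
The headline change is in `jarviscore/profiles/customagent.py` (first diff below): `CustomAgent` gains P2P message handlers (`on_peer_request`, `on_peer_notify`, `on_error`) and a built-in listener loop, and `execute_task()` now delegates to `on_peer_request()` by default instead of raising immediately. A rough migration sketch, based only on the docstrings in this diff (class names are illustrative; the task shape and doubling logic come from the removed 0.2.1 example):

```python
from jarviscore.profiles import CustomAgent

# 0.2.1 style: override execute_task() and unpack the workflow task dict.
class DoublerLegacy(CustomAgent):
    role = "processor"
    capabilities = ["data_processing"]

    async def execute_task(self, task):
        data = task["params"]["data"]
        return {"status": "success", "output": [x * 2 for x in data]}

# 0.3.1 style: override on_peer_request(); the same handler serves P2P
# requests, and the default execute_task() forwards the task dict as msg.data.
class Doubler(CustomAgent):
    role = "processor"
    capabilities = ["data_processing"]

    async def on_peer_request(self, msg):
        data = msg.data.get("params", {}).get("data", [])
        return {"status": "success", "output": [x * 2 for x in data]}
```

Note that the new default `execute_task()` wraps whatever the handler returns as `{"status": "success", "output": <handler return>}`, so workflow callers that read `results[0]["output"]` will see the handler's dict nested one level deeper unless `execute_task()` is still overridden.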
jarviscore/profiles/customagent.py
@@ -1,87 +1,130 @@
 """
-CustomAgent - User-controlled execution profile.
-
-User provides their own implementation using any framework:
-- LangChain
-- MCP (Model Context Protocol)
-- CrewAI
-- Raw Python
-- Any other tool/framework
+CustomAgent - User-controlled execution profile with P2P message handling.
+
+Unified profile for building agents that:
+- Handle P2P mesh communication (requests, notifications)
+- Execute workflow tasks
+- Integrate with HTTP APIs (FastAPI, Flask, etc.)
+
+Example - Basic P2P Agent:
+    class AnalystAgent(CustomAgent):
+        role = "analyst"
+        capabilities = ["analysis"]
+
+        async def on_peer_request(self, msg):
+            result = await self.analyze(msg.data)
+            return {"status": "success", "result": result}
+
+Example - With FastAPI:
+    from fastapi import FastAPI
+    from jarviscore.integrations.fastapi import JarvisLifespan
+
+    class ProcessorAgent(CustomAgent):
+        role = "processor"
+        capabilities = ["processing"]
+
+        async def on_peer_request(self, msg):
+            return {"result": await self.process(msg.data)}
+
+    app = FastAPI(lifespan=JarvisLifespan(ProcessorAgent(), mode="p2p"))
 """
-from typing import Dict, Any
+from typing import Dict, Any, Optional
+import asyncio
+import logging
+
 from jarviscore.core.profile import Profile
 
+logger = logging.getLogger(__name__)
+
 
 class CustomAgent(Profile):
     """
-    Custom execution profile with full user control.
+    User-controlled agent profile with P2P message handling.
+
+    For P2P messaging, implement these handlers:
+        on_peer_request(msg) - Handle requests, return response
+        on_peer_notify(msg) - Handle notifications (fire-and-forget)
+        on_error(error, msg) - Handle errors
 
-    User defines:
-    - role: str
-    - capabilities: List[str]
-    - setup(): Initialize custom framework/tools
-    - execute_task(): Custom execution logic
+    For workflow execution:
+        execute_task(task) - Handle workflow tasks directly
+                             (defaults to delegating to on_peer_request)
 
-    Framework provides:
-    - Orchestration (task claiming, dependencies, nudging)
-    - P2P coordination (agent discovery, task routing)
-    - State management (crash recovery, HITL)
-    - Cost tracking (if user provides token counts)
+    Configuration:
+        listen_timeout: Seconds to wait for messages (default: 1.0)
+        auto_respond: Auto-send on_peer_request return value (default: True)
 
-    Example with LangChain:
-        class APIAgent(CustomAgent):
-            role = "api_client"
-            capabilities = ["api_calls"]
+    Example - P2P Agent:
+        class AnalystAgent(CustomAgent):
+            role = "analyst"
+            capabilities = ["analysis"]
+
+            async def on_peer_request(self, msg):
+                result = await self.analyze(msg.data)
+                return {"status": "success", "result": result}
+
+    Example - With LangChain:
+        class LangChainAgent(CustomAgent):
+            role = "assistant"
+            capabilities = ["chat"]
 
             async def setup(self):
                 await super().setup()
                 from langchain.agents import Agent
                 self.lc_agent = Agent(...)
 
-            async def execute_task(self, task):
-                result = await self.lc_agent.run(task["task"])
+            async def on_peer_request(self, msg):
+                result = await self.lc_agent.run(msg.data["query"])
                 return {"status": "success", "output": result}
 
-    Example with MCP:
+    Example - With MCP:
         class MCPAgent(CustomAgent):
             role = "tool_user"
             capabilities = ["mcp_tools"]
-            mcp_server_url = "stdio://./server.py"
 
             async def setup(self):
                 await super().setup()
                 from mcp import Client
-                self.mcp = Client(self.mcp_server_url)
+                self.mcp = Client("stdio://./server.py")
                 await self.mcp.connect()
 
-            async def execute_task(self, task):
-                result = await self.mcp.call_tool("my_tool", task["params"])
+            async def on_peer_request(self, msg):
+                result = await self.mcp.call_tool("my_tool", msg.data)
                 return {"status": "success", "data": result}
 
-    Example with Raw Python:
-        class DataProcessor(CustomAgent):
+    Example - With FastAPI:
+        from fastapi import FastAPI
+        from jarviscore.integrations.fastapi import JarvisLifespan
+
+        class ProcessorAgent(CustomAgent):
             role = "processor"
             capabilities = ["data_processing"]
 
-            async def execute_task(self, task):
-                # Pure Python logic
-                data = task["params"]["data"]
-                processed = [x * 2 for x in data]
-                return {"status": "success", "output": processed}
+            async def on_peer_request(self, msg):
+                if msg.data.get("action") == "process":
+                    return {"result": await self.process(msg.data["payload"])}
+                return {"error": "unknown action"}
+
+        agent = ProcessorAgent()
+        app = FastAPI(lifespan=JarvisLifespan(agent, mode="p2p"))
+
+        @app.post("/process")
+        async def process_endpoint(data: dict, request: Request):
+            # HTTP endpoint - primary interface
+            agent = request.app.state.jarvis_agents["processor"]
+            return await agent.process(data)
     """
 
-    def __init__(self, agent_id=None):
-        super().__init__(agent_id)
+    # Configuration - can be overridden in subclasses
+    listen_timeout: float = 1.0  # Seconds to wait for messages
+    auto_respond: bool = True  # Automatically send response for requests
 
-        # User can add any custom attributes
-        # e.g., mcp_server_url, langchain_config, etc.
+    def __init__(self, agent_id: Optional[str] = None):
+        super().__init__(agent_id)
 
     async def setup(self):
         """
-        User implements this to initialize custom framework/tools.
-
-        DAY 1: Base implementation (user overrides)
-        DAY 5+: Full examples with LangChain, MCP, etc.
+        Initialize agent resources. Override to add custom setup.
 
         Example:
            async def setup(self):
@@ -91,47 +134,225 @@ class CustomAgent(Profile):
                 self.agent = Agent(...)
         """
         await super().setup()
-
         self._logger.info(f"CustomAgent setup: {self.agent_id}")
-        self._logger.info(
-            f" Note: Override setup() to initialize your custom framework"
+
+    # ─────────────────────────────────────────────────────────────────
+    # P2P Message Handling
+    # ─────────────────────────────────────────────────────────────────
+
+    async def run(self):
+        """
+        Listener loop - receives and dispatches P2P messages.
+
+        Runs automatically in P2P mode. Dispatches messages to:
+        - on_peer_request() for request-response messages
+        - on_peer_notify() for fire-and-forget notifications
+
+        You typically don't need to override this. Just implement the handlers.
+        """
+        self._logger.info(f"[{self.role}] Listener loop started")
+
+        while not self.shutdown_requested:
+            try:
+                # Wait for incoming message with timeout
+                # Timeout allows periodic shutdown_requested checks
+                msg = await self.peers.receive(timeout=self.listen_timeout)
+
+                if msg is None:
+                    # Timeout - no message, continue loop to check shutdown
+                    continue
+
+                # Dispatch to appropriate handler
+                await self._dispatch_message(msg)
+
+            except asyncio.CancelledError:
+                self._logger.debug(f"[{self.role}] Listener loop cancelled")
+                raise
+            except Exception as e:
+                self._logger.error(f"[{self.role}] Listener loop error: {e}")
+                await self.on_error(e, None)
+
+        self._logger.info(f"[{self.role}] Listener loop stopped")
+
+    async def _dispatch_message(self, msg):
+        """
+        Dispatch message to appropriate handler based on message type.
+
+        Handles:
+        - REQUEST messages: calls on_peer_request, sends response if auto_respond=True
+        - NOTIFY messages: calls on_peer_notify
+        """
+        from jarviscore.p2p.messages import MessageType
+
+        try:
+            # Check if this is a request (expects response)
+            is_request = (
+                msg.type == MessageType.REQUEST or
+                getattr(msg, 'is_request', False) or
+                msg.correlation_id is not None
+            )
+
+            if is_request:
+                # Request-response: call handler, optionally send response
+                response = await self.on_peer_request(msg)
+
+                if self.auto_respond and response is not None:
+                    await self.peers.respond(msg, response)
+                    self._logger.debug(
+                        f"[{self.role}] Sent response to {msg.sender}"
+                    )
+            else:
+                # Notification: fire-and-forget
+                await self.on_peer_notify(msg)
+
+        except Exception as e:
+            self._logger.error(
+                f"[{self.role}] Error handling message from {msg.sender}: {e}"
+            )
+            await self.on_error(e, msg)
+
+    # ─────────────────────────────────────────────────────────────────
+    # Message Handlers - Override in your agent
+    # ─────────────────────────────────────────────────────────────────
+
+    async def on_peer_request(self, msg) -> Any:
+        """
+        Handle incoming peer request.
+
+        Override to process request-response messages from other agents.
+        The return value is automatically sent as response (if auto_respond=True).
+
+        Args:
+            msg: IncomingMessage with:
+                - msg.sender: Sender agent ID or role
+                - msg.data: Request payload (dict)
+                - msg.correlation_id: For response matching (handled automatically)
+
+        Returns:
+            Response data (dict) to send back to the requester.
+            Return None to skip sending a response.
+
+        Example:
+            async def on_peer_request(self, msg):
+                action = msg.data.get("action")
+
+                if action == "analyze":
+                    result = await self.analyze(msg.data["payload"])
+                    return {"status": "success", "result": result}
+
+                elif action == "status":
+                    return {"status": "ok", "queue_size": self.queue_size}
+
+                return {"status": "error", "message": f"Unknown action: {action}"}
+        """
+        return None
+
+    async def on_peer_notify(self, msg) -> None:
+        """
+        Handle incoming peer notification.
+
+        Override to process fire-and-forget messages from other agents.
+        No response is expected or sent.
+
+        Args:
+            msg: IncomingMessage with:
+                - msg.sender: Sender agent ID or role
+                - msg.data: Notification payload (dict)
+
+        Example:
+            async def on_peer_notify(self, msg):
+                event = msg.data.get("event")
+
+                if event == "task_complete":
+                    await self.update_dashboard(msg.data)
+                    self._logger.info(f"Task completed by {msg.sender}")
+
+                elif event == "peer_joined":
+                    self._logger.info(f"New peer in mesh: {msg.data.get('role')}")
+        """
+        self._logger.debug(
+            f"[{self.role}] Received notify from {msg.sender}: "
+            f"{list(msg.data.keys()) if isinstance(msg.data, dict) else 'data'}"
         )
 
+    async def on_error(self, error: Exception, msg=None) -> None:
+        """
+        Handle errors during message processing.
+
+        Override to customize error handling (logging, alerting, metrics, etc.)
+        Default implementation logs the error and continues processing.
+
+        Args:
+            error: The exception that occurred
+            msg: The message being processed when error occurred (may be None)
+
+        Example:
+            async def on_error(self, error, msg):
+                # Log with context
+                self._logger.error(
+                    f"Error processing message: {error}",
+                    extra={"sender": msg.sender if msg else None}
+                )
+
+                # Send to error tracking service
+                await self.error_tracker.capture(error, context={"msg": msg})
+
+                # Optionally notify the sender of failure
+                if msg and msg.correlation_id:
+                    await self.peers.respond(msg, {
+                        "status": "error",
+                        "error": str(error)
+                    })
+        """
+        if msg:
+            self._logger.error(
+                f"[{self.role}] Error processing message from {msg.sender}: {error}"
+            )
+        else:
+            self._logger.error(f"[{self.role}] Error in listener loop: {error}")
+
+    # ─────────────────────────────────────────────────────────────────
+    # Workflow Compatibility
+    # ─────────────────────────────────────────────────────────────────
+
     async def execute_task(self, task: Dict[str, Any]) -> Dict[str, Any]:
         """
-        User implements this with custom execution logic.
+        Execute a task (for workflow/distributed modes).
 
-        DAY 1: Raises NotImplementedError (user must override)
-        DAY 5+: Full examples provided
+        Default: Delegates to on_peer_request via synthetic message.
+        Override for custom workflow logic.
 
         Args:
-            task: Task specification
+            task: Task specification dict
 
         Returns:
-            Result dictionary with at least:
-            - status: "success" or "failure"
-            - output: Task result
-            - error (optional): Error message if failed
-            - tokens_used (optional): For cost tracking
-            - cost_usd (optional): For cost tracking
+            Result dict with status and output
 
         Raises:
-            NotImplementedError: User must override this method
-
-        Example:
-            async def execute_task(self, task):
-                result = await self.my_framework.run(task)
-                return {
-                    "status": "success",
-                    "output": result,
-                    "tokens_used": 1000,  # Optional
-                    "cost_usd": 0.002  # Optional
-                }
+            NotImplementedError: If on_peer_request returns None and
+                execute_task is not overridden
         """
+        from jarviscore.p2p.messages import IncomingMessage, MessageType
+
+        # Create a synthetic message to pass to the handler
+        synthetic_msg = IncomingMessage(
+            sender="workflow",
+            sender_node="local",
+            type=MessageType.REQUEST,
+            data=task,
+            correlation_id=None,
+            timestamp=0
+        )
+
+        result = await self.on_peer_request(synthetic_msg)
+
+        if result is not None:
+            return {"status": "success", "output": result}
+
         raise NotImplementedError(
-            f"{self.__class__.__name__} must implement execute_task()\n\n"
+            f"{self.__class__.__name__} must implement on_peer_request() or execute_task()\n\n"
             f"Example:\n"
-            f"  async def execute_task(self, task):\n"
-            f"    result = await self.my_framework.run(task['task'])\n"
-            f"    return {{'status': 'success', 'output': result}}\n"
+            f"  async def on_peer_request(self, msg):\n"
+            f"    result = await self.process(msg.data)\n"
+            f"    return {{'status': 'success', 'result': result}}\n"
        )
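
The reworked `execute_task()` at the end of the hunk above builds a synthetic `IncomingMessage` around the task dict and forwards it to `on_peer_request()`, so one handler can serve both workflow and P2P traffic. A minimal sketch of the resulting behaviour, assuming a `CustomAgent` subclass can be constructed outside a running mesh (which this diff does not confirm):

```python
import asyncio

from jarviscore.profiles import CustomAgent

class EchoAgent(CustomAgent):
    role = "echo"
    capabilities = ["echo"]

    async def on_peer_request(self, msg):
        # In workflow mode, msg.data is the task dict passed to execute_task().
        return {"echo": msg.data.get("task")}

async def main():
    agent = EchoAgent()
    result = await agent.execute_task({"task": "ping"})
    # Per the diff above, the handler's return value is wrapped, giving
    # {'status': 'success', 'output': {'echo': 'ping'}}.
    print(result)

asyncio.run(main())
```

If `on_peer_request()` returns `None` (the base-class default), `execute_task()` still raises `NotImplementedError`, so agents that only handle notifications keep the 0.2.1 failure mode for workflow tasks.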
{jarviscore_framework-0.2.1.dist-info → jarviscore_framework-0.3.1.dist-info}/METADATA
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: jarviscore-framework
-Version: 0.2.1
+Version: 0.3.1
 Summary: Build autonomous AI agents in 3 lines of code. Production-ready orchestration with P2P mesh networking.
 Author-email: Ruth Mutua <mutuandinda82@gmail.com>, Muyukani Kizito <muyukani@prescottdata.io>
 Maintainer-email: Prescott Data <info@prescottdata.io>
@@ -26,6 +26,8 @@ Requires-Dist: pyzmq
 Requires-Dist: python-dotenv>=1.0.0
 Requires-Dist: aiohttp>=3.9.0
 Requires-Dist: beautifulsoup4>=4.12.0
+Requires-Dist: fastapi>=0.104.0
+Requires-Dist: uvicorn>=0.29.0
 Requires-Dist: anthropic>=0.18.0
 Requires-Dist: openai>=1.0.0
 Requires-Dist: google-genai>=1.0.0
@@ -47,10 +49,13 @@ Dynamic: license-file
 
 ## Features
 
-- **AutoAgent** - LLM generates and executes code from natural language
-- **CustomAgent** - Bring your own logic (LangChain, CrewAI, etc.)
-- **P2P Mesh** - Agent discovery and communication via SWIM protocol
-- **Workflow Orchestration** - Dependencies, context passing, multi-step pipelines
+- **AutoAgent** - LLM generates and executes code from natural language
+- **CustomAgent** - Bring your own logic with P2P message handlers
+- **P2P Mesh** - Agent discovery and communication via SWIM protocol
+- **Workflow Orchestration** - Dependencies, context passing, multi-step pipelines
+- **FastAPI Integration** - 3-line setup with JarvisLifespan
+- **Cognitive Discovery** - LLM-ready peer descriptions for autonomous delegation
+- **Cloud Deployment** - Self-registering agents for Docker/K8s
 
 ## Installation
 
@@ -94,7 +99,26 @@ results = await mesh.workflow("calc", [
 print(results[0]["output"])  # 3628800
 ```
 
-### CustomAgent (Your Code)
+### CustomAgent + FastAPI (Recommended)
+
+```python
+from fastapi import FastAPI
+from jarviscore.profiles import CustomAgent
+from jarviscore.integrations.fastapi import JarvisLifespan
+
+class ProcessorAgent(CustomAgent):
+    role = "processor"
+    capabilities = ["processing"]
+
+    async def on_peer_request(self, msg):
+        # Handle requests from other agents
+        return {"result": msg.data.get("task", "").upper()}
+
+# 3 lines to integrate with FastAPI
+app = FastAPI(lifespan=JarvisLifespan(ProcessorAgent(), mode="p2p"))
+```
+
+### CustomAgent (Workflow Mode)
 
 ```python
 from jarviscore import Mesh
@@ -118,26 +142,50 @@ results = await mesh.workflow("demo", [
 print(results[0]["output"])  # [2, 4, 6]
 ```
 
+## Profiles
+
+| Profile | You Write | JarvisCore Handles |
+|---------|-----------|-------------------|
+| **AutoAgent** | System prompt | LLM code generation, sandboxed execution |
+| **CustomAgent** | `on_peer_request()` and/or `execute_task()` | Mesh, discovery, routing, lifecycle |
+
 ## Execution Modes
 
-| Mode | Profile | Use Case |
-|------|---------|----------|
-| `autonomous` | AutoAgent | Single machine, LLM code generation |
-| `p2p` | CustomAgent | Agent-to-agent communication, swarms |
-| `distributed` | CustomAgent | Multi-node workflows + P2P |
+| Mode | Use Case |
+|------|----------|
+| `autonomous` | Single machine, LLM code generation (AutoAgent) |
+| `p2p` | Agent-to-agent communication, swarms (CustomAgent) |
+| `distributed` | Multi-node workflows + P2P (CustomAgent) |
+
+## Framework Integration
+
+JarvisCore is **async-first**. Best experience with async frameworks.
+
+| Framework | Integration |
+|-----------|-------------|
+| **FastAPI** | `JarvisLifespan` (3 lines) |
+| **aiohttp, Quart, Tornado** | Manual lifecycle (see docs) |
+| **Flask, Django** | Background thread pattern (see docs) |
 
 ## Documentation
 
-- [User Guide](jarviscore/docs/USER_GUIDE.md) - Complete documentation
-- [Getting Started](jarviscore/docs/GETTING_STARTED.md) - 5-minute quickstart
-- [AutoAgent Guide](jarviscore/docs/AUTOAGENT_GUIDE.md) - LLM-powered agents
-- [CustomAgent Guide](jarviscore/docs/CUSTOMAGENT_GUIDE.md) - Bring your own code
-- [API Reference](jarviscore/docs/API_REFERENCE.md) - Detailed API docs
-- [Configuration](jarviscore/docs/CONFIGURATION.md) - Settings reference
+Documentation is included with the package:
+
+```bash
+python -c "import jarviscore; print(jarviscore.__path__[0] + '/docs')"
+```
+
+**Available guides:**
+- `GETTING_STARTED.md` - 5-minute quickstart
+- `CUSTOMAGENT_GUIDE.md` - CustomAgent patterns and framework integration
+- `AUTOAGENT_GUIDE.md` - LLM-powered agents
+- `USER_GUIDE.md` - Complete documentation
+- `API_REFERENCE.md` - Detailed API docs
+- `CONFIGURATION.md` - Settings reference
 
 ## Version
 
-**0.2.1**
+**0.4.0**
 
 ## License
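
The README changes above lean on the new `jarviscore/integrations/fastapi.py`. Assembling the pieces shown in the docstring and README examples into one file (the `/process` path and the `process()` helper are illustrative; the `request.app.state.jarvis_agents["processor"]` lookup follows the docstring example in the customagent.py diff):

```python
from fastapi import FastAPI, Request

from jarviscore.profiles import CustomAgent
from jarviscore.integrations.fastapi import JarvisLifespan

class ProcessorAgent(CustomAgent):
    role = "processor"
    capabilities = ["processing"]

    async def process(self, payload: dict) -> dict:
        # Illustrative business logic shared by the HTTP and P2P paths.
        return {"processed": payload}

    async def on_peer_request(self, msg):
        # Requests arriving from other agents over the mesh.
        if msg.data.get("action") == "process":
            return {"result": await self.process(msg.data.get("payload", {}))}
        return {"error": "unknown action"}

# JarvisLifespan starts the agent with the app and shuts it down with it.
app = FastAPI(lifespan=JarvisLifespan(ProcessorAgent(), mode="p2p"))

@app.post("/process")
async def process_endpoint(data: dict, request: Request):
    # The lifespan exposes agents by role on app.state, per the docstring example.
    agent = request.app.state.jarvis_agents["processor"]
    return await agent.process(data)
```

Run with `uvicorn main:app` (assuming the file is saved as `main.py`); uvicorn is now a declared dependency per the METADATA hunk above.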