mindtrace-agents 0.10.0__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- mindtrace_agents-0.10.0/PKG-INFO +648 -0
- mindtrace_agents-0.10.0/README.md +624 -0
- mindtrace_agents-0.10.0/mindtrace/agents/__init__.py +73 -0
- mindtrace_agents-0.10.0/mindtrace/agents/_function_schema.py +211 -0
- mindtrace_agents-0.10.0/mindtrace/agents/_run_context.py +20 -0
- mindtrace_agents-0.10.0/mindtrace/agents/_tool_manager.py +60 -0
- mindtrace_agents-0.10.0/mindtrace/agents/callbacks/__init__.py +28 -0
- mindtrace_agents-0.10.0/mindtrace/agents/core/__init__.py +14 -0
- mindtrace_agents-0.10.0/mindtrace/agents/core/abstract.py +86 -0
- mindtrace_agents-0.10.0/mindtrace/agents/core/base.py +490 -0
- mindtrace_agents-0.10.0/mindtrace/agents/core/distributed.py +56 -0
- mindtrace_agents-0.10.0/mindtrace/agents/core/wrapper.py +81 -0
- mindtrace_agents-0.10.0/mindtrace/agents/events/__init__.py +33 -0
- mindtrace_agents-0.10.0/mindtrace/agents/events/_native.py +124 -0
- mindtrace_agents-0.10.0/mindtrace/agents/execution/__init__.py +9 -0
- mindtrace_agents-0.10.0/mindtrace/agents/execution/_queue.py +47 -0
- mindtrace_agents-0.10.0/mindtrace/agents/execution/local.py +70 -0
- mindtrace_agents-0.10.0/mindtrace/agents/execution/rabbitmq.py +135 -0
- mindtrace_agents-0.10.0/mindtrace/agents/history/__init__.py +44 -0
- mindtrace_agents-0.10.0/mindtrace/agents/memory/__init__.py +12 -0
- mindtrace_agents-0.10.0/mindtrace/agents/memory/_store.py +39 -0
- mindtrace_agents-0.10.0/mindtrace/agents/memory/in_memory.py +44 -0
- mindtrace_agents-0.10.0/mindtrace/agents/memory/json_file.py +80 -0
- mindtrace_agents-0.10.0/mindtrace/agents/memory/toolset.py +84 -0
- mindtrace_agents-0.10.0/mindtrace/agents/messages/__init__.py +15 -0
- mindtrace_agents-0.10.0/mindtrace/agents/messages/_builder.py +61 -0
- mindtrace_agents-0.10.0/mindtrace/agents/messages/_parts.py +39 -0
- mindtrace_agents-0.10.0/mindtrace/agents/models/__init__.py +11 -0
- mindtrace_agents-0.10.0/mindtrace/agents/models/_model.py +98 -0
- mindtrace_agents-0.10.0/mindtrace/agents/models/openai_chat.py +292 -0
- mindtrace_agents-0.10.0/mindtrace/agents/profiles/__init__.py +51 -0
- mindtrace_agents-0.10.0/mindtrace/agents/prompts.py +91 -0
- mindtrace_agents-0.10.0/mindtrace/agents/providers/__init__.py +14 -0
- mindtrace_agents-0.10.0/mindtrace/agents/providers/_provider.py +41 -0
- mindtrace_agents-0.10.0/mindtrace/agents/providers/gemini.py +98 -0
- mindtrace_agents-0.10.0/mindtrace/agents/providers/ollama.py +60 -0
- mindtrace_agents-0.10.0/mindtrace/agents/providers/openai.py +56 -0
- mindtrace_agents-0.10.0/mindtrace/agents/tools/__init__.py +14 -0
- mindtrace_agents-0.10.0/mindtrace/agents/tools/_tool.py +74 -0
- mindtrace_agents-0.10.0/mindtrace/agents/toolsets/__init__.py +19 -0
- mindtrace_agents-0.10.0/mindtrace/agents/toolsets/_filter.py +76 -0
- mindtrace_agents-0.10.0/mindtrace/agents/toolsets/_toolset.py +78 -0
- mindtrace_agents-0.10.0/mindtrace/agents/toolsets/compound.py +57 -0
- mindtrace_agents-0.10.0/mindtrace/agents/toolsets/filtered.py +41 -0
- mindtrace_agents-0.10.0/mindtrace/agents/toolsets/function.py +56 -0
- mindtrace_agents-0.10.0/mindtrace/agents/toolsets/mcp.py +147 -0
- mindtrace_agents-0.10.0/mindtrace_agents.egg-info/PKG-INFO +648 -0
- mindtrace_agents-0.10.0/mindtrace_agents.egg-info/SOURCES.txt +51 -0
- mindtrace_agents-0.10.0/mindtrace_agents.egg-info/dependency_links.txt +1 -0
- mindtrace_agents-0.10.0/mindtrace_agents.egg-info/requires.txt +15 -0
- mindtrace_agents-0.10.0/mindtrace_agents.egg-info/top_level.txt +1 -0
- mindtrace_agents-0.10.0/pyproject.toml +37 -0
- mindtrace_agents-0.10.0/setup.cfg +4 -0
@@ -0,0 +1,648 @@
Metadata-Version: 2.4
Name: mindtrace-agents
Version: 0.10.0
Summary: Agent framework for Mindtrace
Author: Mindtrace Team
License-Expression: Apache-2.0
Project-URL: Homepage, https://mindtrace.ai
Project-URL: Repository, https://github.com/mindtrace/mindtrace
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.12
Description-Content-Type: text/markdown
Requires-Dist: openai>=1.0.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: mindtrace-core>=0.10.0
Provides-Extra: mcp
Requires-Dist: fastmcp>=2.13.0; extra == "mcp"
Provides-Extra: distributed-rabbitmq
Requires-Dist: aio-pika>=9.0; extra == "distributed-rabbitmq"
Provides-Extra: memory-redis
Requires-Dist: redis>=5.0; extra == "memory-redis"
Provides-Extra: memory-vector
Requires-Dist: chromadb>=0.4; extra == "memory-vector"

# mindtrace-agents

Agent framework for the Mindtrace platform. Build LLM-powered agents with tool use, conversation history, lifecycle callbacks, and pluggable model providers — all integrated with Mindtrace logging and configuration.

## Installation

```bash
uv add mindtrace-agents
# or
pip install mindtrace-agents
```

---

## Core Concepts

| Concept | Class | What it does |
|---------|-------|--------------|
| **Agent** | `MindtraceAgent` | Orchestrates model + tools, runs the conversation loop |
| **Model** | `OpenAIChatModel` | Calls the LLM API (supports streaming) |
| **Provider** | `OpenAIProvider` / `OllamaProvider` / `GeminiProvider` | Holds the authenticated client |
| **Tool** | `Tool` | Wraps a Python function for LLM tool-calling |
| **Toolset** | `FunctionToolset` / `CompoundToolset` / `MCPToolset` | Groups tools and controls which are exposed to the agent |
| **ToolFilter** | `ToolFilter` | Predicate for selectively showing/hiding tools by name or description |
| **Callbacks** | `AgentCallbacks` | Lifecycle hooks: before/after LLM call and tool call |
| **History** | `AbstractHistoryStrategy` / `InMemoryHistory` | Persists conversation across runs |
| **RunContext** | `RunContext[T]` | Injected into tools — carries deps, retry count, step |
| **Memory** | `MemoryToolset` / `InMemoryStore` / `JsonFileStore` | Agent-controlled persistent memory exposed as tools |
| **Task Queue** | `LocalTaskQueue` / `RabbitMQTaskQueue` | Distributes agent execution across processes or workers |
| **DistributedAgent** | `DistributedAgent` | Transparent wrapper that routes `run()` through a task queue |

---

## Quick Start

### Basic agent with OpenAI

```python
import asyncio
from mindtrace.agents import MindtraceAgent, OpenAIChatModel, OpenAIProvider

provider = OpenAIProvider()  # reads OPENAI_API_KEY from env
model = OpenAIChatModel("gpt-4o-mini", provider=provider)

agent = MindtraceAgent(
    model=model,
    system_prompt="You are a helpful assistant.",
    name="my_agent",
)

result = asyncio.run(agent.run("What is 2 + 2?"))
print(result)  # "4"
```

### With Ollama (local)

```python
from mindtrace.agents import MindtraceAgent, OpenAIChatModel, OllamaProvider

provider = OllamaProvider(base_url="http://localhost:11434/v1")
model = OpenAIChatModel("llama3.2", provider=provider)
agent = MindtraceAgent(model=model, name="local_agent")
```

### With Gemini

```python
from mindtrace.agents import MindtraceAgent, OpenAIChatModel, GeminiProvider

provider = GeminiProvider()  # reads GEMINI_API_KEY from env
model = OpenAIChatModel("gemini-2.0-flash", provider=provider)
agent = MindtraceAgent(model=model, name="gemini_agent")
```

---

## Adding Tools

Tools are Python functions (sync or async). Annotate parameters with types — these become the JSON schema shown to the model.

```python
import asyncio

from mindtrace.agents import MindtraceAgent, OpenAIChatModel, OpenAIProvider, Tool, RunContext

def get_weather(ctx: RunContext[None], city: str) -> str:
    """Get the current weather for a city."""
    return f"Weather in {city}: Sunny, 22°C"

async def search_web(query: str, max_results: int = 5) -> list[str]:
    """Search the web and return URLs."""
    # ctx is optional — omit it for plain tools
    return [f"https://example.com/result/{i}" for i in range(max_results)]

provider = OpenAIProvider()
model = OpenAIChatModel("gpt-4o-mini", provider=provider)

agent = MindtraceAgent(
    model=model,
    tools=[Tool(get_weather), Tool(search_web)],
    system_prompt="You have access to weather and search tools.",
)

result = asyncio.run(agent.run("What's the weather in Paris?"))
```

### Tool with dependencies (typed deps)

```python
from dataclasses import dataclass

@dataclass
class AppDeps:
    db_url: str
    api_key: str

def lookup_user(ctx: RunContext[AppDeps], user_id: str) -> dict:
    """Look up a user by ID."""
    # ctx.deps is your AppDeps instance
    return {"id": user_id, "db": ctx.deps.db_url}

deps = AppDeps(db_url="postgresql://...", api_key="secret")
result = asyncio.run(agent.run("Find user 123", deps=deps))
```

---

## Toolsets

Toolsets are the primary way to supply tools to an agent. They group related tools together and give you control over which tools are visible to the model at runtime.

### FunctionToolset

`FunctionToolset` collects Python `Tool` objects and exposes them to the agent. Use it when you want to organise tools manually or share a toolset across multiple agents.

```python
from mindtrace.agents.toolsets import FunctionToolset
from mindtrace.agents.tools import Tool

def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

async def fetch(url: str) -> str:
    """Fetch a URL."""
    ...

toolset = FunctionToolset(max_retries=2)
toolset.add_tool(Tool(add))
toolset.add_tool(Tool(fetch))

agent = MindtraceAgent(model=model, toolset=toolset)
```

`max_retries` on the toolset is the default for every tool added to it. Override per-tool by setting `Tool(..., max_retries=N)` before calling `add_tool()`.

### CompoundToolset

`CompoundToolset` merges tools from multiple toolsets into one. Later toolsets win on name collisions — use `prefix` on `MCPToolset` to avoid conflicts.

```python
from mindtrace.agents.toolsets import CompoundToolset, FunctionToolset, MCPToolset

agent = MindtraceAgent(
    model=model,
    toolset=CompoundToolset(
        MCPToolset.from_http("http://localhost:8001/mcp-server/mcp/"),
        FunctionToolset(),  # local tools
    ),
)
```

### MCPToolset

`MCPToolset` exposes tools from any MCP server, remote or local. Requires `fastmcp`:

```bash
pip install 'mindtrace-agents[mcp]'
```

**Constructors**

```python
from mindtrace.agents.toolsets import MCPToolset

# HTTP (streamable-http) — default for Mindtrace services
ts = MCPToolset.from_http("http://localhost:8001/mcp-server/mcp/")

# SSE (legacy HTTP)
ts = MCPToolset.from_sse("http://localhost:9000/sse")

# stdio — local subprocess servers (e.g. npx)
ts = MCPToolset.from_stdio(["npx", "-y", "@modelcontextprotocol/server-filesystem", "/tmp"])
```

**Prefix** — avoid name collisions when combining multiple MCP services:

```python
ts = MCPToolset.from_http("http://localhost:8002/mcp/", prefix="db")
# tools are exposed as "db__query", "db__list_tables", etc.
```

---

## Filtering tools

Every toolset exposes shorthand methods that return a `FilteredToolset`. Chain them to control exactly which tools the agent sees.

```python
# Allow only named tools
toolset.include("search", "summarise")

# Block a specific tool
toolset.exclude("drop_table")

# Glob patterns
toolset.include_pattern("read_*", "list_*")
toolset.exclude_pattern("admin_*")

# Compose via FilteredToolset.with_filter() for boolean logic
from mindtrace.agents.toolsets import ToolFilter

f = ToolFilter.include_pattern("read_*") & ~ToolFilter.include("read_credentials")
toolset.with_filter(f)
```

Filtering applies at `get_tools()` time — the underlying toolset is unchanged.

### ToolFilter API

| Factory | Behaviour |
|---------|-----------|
| `ToolFilter.include(*names)` | Allow only tools whose name is in `names` |
| `ToolFilter.exclude(*names)` | Block tools whose name is in `names` |
| `ToolFilter.include_pattern(*globs)` | Allow tools matching any glob (e.g. `"read_*"`) |
| `ToolFilter.exclude_pattern(*globs)` | Block tools matching any glob |
| `ToolFilter.by_description(fn)` | Custom predicate on the tool description string |

Filters compose with `&` (AND), `|` (OR), and `~` (NOT).
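As an illustration of these composition semantics, here is a stdlib sketch (not the package's implementation; `NameFilter` is a hypothetical stand-in for `ToolFilter` that matches on names only):

```python
from fnmatch import fnmatch

class NameFilter:
    """Minimal stand-in for a composable tool filter (illustration only)."""

    def __init__(self, predicate):
        self.predicate = predicate  # (name: str) -> bool

    def __call__(self, name):
        return self.predicate(name)

    def __and__(self, other):
        return NameFilter(lambda n: self(n) and other(n))

    def __or__(self, other):
        return NameFilter(lambda n: self(n) or other(n))

    def __invert__(self):
        return NameFilter(lambda n: not self(n))

    @staticmethod
    def include(*names):
        return NameFilter(lambda n: n in names)

    @staticmethod
    def include_pattern(*globs):
        return NameFilter(lambda n: any(fnmatch(n, g) for g in globs))

# "read_* but never read_credentials"
f = NameFilter.include_pattern("read_*") & ~NameFilter.include("read_credentials")
tools = ["read_file", "read_credentials", "write_file"]
print([t for t in tools if f(t)])  # ['read_file']
```

Each operator returns a new predicate, so filters stay immutable and freely chainable, which matches the "filtering happens at `get_tools()` time" behaviour described above.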
### Combining filtering with CompoundToolset

```python
from mindtrace.agents.toolsets import CompoundToolset, FunctionToolset, MCPToolset

agent = MindtraceAgent(
    model=model,
    toolset=CompoundToolset(
        MCPToolset.from_http("http://localhost:8001/mcp/").include("generate_image"),
        MCPToolset.from_http("http://localhost:8002/mcp/").exclude_pattern("admin_*"),
        FunctionToolset(),
    ),
)
```

---

## Lifecycle Callbacks

Attach async or sync callbacks to hook into the agent's execution.

```python
from mindtrace.agents import MindtraceAgent, AgentCallbacks

def log_before_llm(messages, model_settings):
    print(f"Sending {len(messages)} messages to LLM")
    # Return (messages, model_settings) to modify, or None to keep unchanged
    return None

async def log_after_tool(tool_name, args, result, ctx):
    print(f"Tool {tool_name!r} returned: {result}")
    # Return modified result or None to keep unchanged
    return None

callbacks = AgentCallbacks(
    before_llm_call=log_before_llm,
    after_llm_call=None,  # (response) -> response | None
    before_tool_call=None,  # (tool_name, args, ctx) -> (name, args) | None
    after_tool_call=log_after_tool,
)

agent = MindtraceAgent(model=model, tools=[...], callbacks=callbacks)
```

### Callback signatures

| Callback | Arguments | Return |
|----------|-----------|--------|
| `before_llm_call` | `(messages: list, model_settings: dict\|None)` | `(messages, model_settings)` or `None` |
| `after_llm_call` | `(response: ModelResponse)` | `ModelResponse` or `None` |
| `before_tool_call` | `(tool_name: str, args: str, ctx: RunContext)` | `(tool_name, args)` or `None` |
| `after_tool_call` | `(tool_name: str, args: str, result: Any, ctx: RunContext)` | modified result or `None` |

All callbacks can be **sync or async** — the framework handles both transparently.
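The usual way to support both is to call the callback and await the result only if it is awaitable. A minimal stdlib sketch (the `invoke` helper here is illustrative, not the package's internal API):

```python
import asyncio
import inspect

async def invoke(callback, *args):
    """Call a sync-or-async callback uniformly: await only when needed."""
    result = callback(*args)
    if inspect.isawaitable(result):
        result = await result
    return result

def sync_cb(x):          # plain function
    return x + 1

async def async_cb(x):   # coroutine function
    return x + 2

async def main():
    return await invoke(sync_cb, 1), await invoke(async_cb, 1)

print(asyncio.run(main()))  # (2, 3)
```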
---

## Conversation History

Pass `session_id` to `run()` to automatically persist and reload conversation history.

```python
from mindtrace.agents import MindtraceAgent, InMemoryHistory

history = InMemoryHistory()

agent = MindtraceAgent(
    model=model,
    history=history,
    system_prompt="You are a helpful assistant.",
)

# First turn — history is empty, gets saved under "user-123"
reply1 = asyncio.run(agent.run("My name is Alice.", session_id="user-123"))

# Second turn — history is loaded automatically, agent remembers Alice
reply2 = asyncio.run(agent.run("What's my name?", session_id="user-123"))
# → "Your name is Alice."
```

### Custom history backend

Implement `AbstractHistoryStrategy` to persist history anywhere (Redis, MongoDB, etc.):

```python
from mindtrace.agents import AbstractHistoryStrategy, ModelMessage

class RedisHistory(AbstractHistoryStrategy):
    async def load(self, session_id: str) -> list[ModelMessage]:
        data = await redis.get(f"history:{session_id}")
        return deserialize(data) if data else []

    async def save(self, session_id: str, messages: list[ModelMessage]) -> None:
        await redis.set(f"history:{session_id}", serialize(messages))

    async def clear(self, session_id: str) -> None:
        await redis.delete(f"history:{session_id}")
```

---

## Streaming

Use `run_stream_events()` to receive events as the model generates tokens:

```python
from mindtrace.agents import PartDeltaEvent, PartStartEvent, ToolResultEvent, AgentRunResultEvent

async def stream_example():
    async for event in agent.run_stream_events("Tell me a joke", session_id="s1"):
        if isinstance(event, PartStartEvent) and event.part_kind == "text":
            print("\n[Text started]")
        elif isinstance(event, PartDeltaEvent):
            if hasattr(event.delta, "content_delta"):
                print(event.delta.content_delta, end="", flush=True)
        elif isinstance(event, ToolResultEvent):
            print(f"\n[Tool result: {event.content}]")
        elif isinstance(event, AgentRunResultEvent):
            print(f"\n[Done: {event.result.output}]")

asyncio.run(stream_example())
```

---

## Step-by-step iteration

Use `iter()` for fine-grained control over the execution loop:

```python
async def iterate_example():
    async with agent.iter("What's 15% of 240?") as steps:
        async for step in steps:
            if step["step"] == "model_response":
                print(f"LLM said: {step['text']}")
                print(f"Tool calls: {step['tool_calls']}")
            elif step["step"] == "tool_result":
                print(f"Tool {step['tool_name']} → {step['result']}")
            elif step["step"] == "complete":
                print(f"Final answer: {step['result']}")
```

---

## WrapperAgent

Compose agents or add cross-cutting behaviour without modifying base classes:

```python
import time

from mindtrace.agents import WrapperAgent

class TimedAgent(WrapperAgent):
    async def run(self, input_data, *, deps=None, **kwargs):
        start = time.monotonic()
        result = await super().run(input_data, deps=deps, **kwargs)
        self.logger.info(f"Run took {time.monotonic() - start:.2f}s")
        return result

timed = TimedAgent(agent)
result = asyncio.run(timed.run("Hello"))
```

---

## Sync usage

```python
result = agent.run_sync("What is the capital of France?")
# → "Paris"
```

---

## Logging and config

All agents, models, and providers inherit from `MindtraceABC` and automatically receive:

- `self.logger` — a structured logger scoped to the class name
- `self.config` — the `CoreConfig` instance

```python
agent.logger.info("Starting run", session_id="abc")
```

---

## Multi-Agent Coworking

Agents are first-class tools. Pass any `MindtraceAgent` directly into `tools=[]` alongside regular `Tool` objects — the framework converts it automatically.

```python
researcher = MindtraceAgent(
    model=model,
    name="researcher",
    description="Research a topic and return facts",  # shown to LLM as tool description
    tools=[web_search_tool],
)

writer = MindtraceAgent(
    model=model,
    name="writer",
    description="Write a structured report from given facts",
    tools=[format_tool],
)

orchestrator = MindtraceAgent(
    model=model,
    tools=[researcher, writer, some_regular_tool],  # mix freely
)

result = await orchestrator.run("Write a report on climate change")
```

The `description` field is required when using an agent as a tool — it's what the LLM reads to decide when to call it. Parent `deps` are forwarded to sub-agents automatically via `RunContext`.

### Context exchange between agents

| Deployment | Mechanism |
|---|---|
| Same process | `deps` carries shared state directly — mutations are visible across agents |
| Distributed workers | `deps` carries a **client** (Redis, DB) — workers read/write through it, not raw data |

`deps` is serialised when submitted to a task queue. Don't put live in-memory objects in deps for distributed use — put connection config and reconnect on the worker side.

### HandoffPart

Use `HandoffPart` to mark an explicit handoff boundary in an agent's message history. Useful for observability and for keeping sub-agent history scoped:

```python
from mindtrace.agents import HandoffPart

part = HandoffPart(
    from_agent="orchestrator",
    to_agent="writer",
    summary="Researcher found: sea levels rose 20cm since 1980",
)
```

---

## Agent Memory

`MemoryToolset` exposes persistent memory as LLM-callable tools. The agent decides what to save and when to recall — no code wiring needed.

```python
from mindtrace.agents import MindtraceAgent, MemoryToolset, JsonFileStore, CompoundToolset
from mindtrace.agents.toolsets import FunctionToolset

memory = JsonFileStore("./agent_memory.json")  # persists across restarts

agent = MindtraceAgent(
    model=model,
    toolset=CompoundToolset(
        FunctionToolset([my_tools]),
        MemoryToolset(memory, namespace="user_123"),
    ),
)
```

**Tools exposed to the agent:**

| Tool | Args | Effect |
|---|---|---|
| `save_memory` | `key, value` | Persist a fact |
| `recall_memory` | `key` | Retrieve by key |
| `search_memory` | `query, top_k=5` | Substring search across all memories |
| `forget_memory` | `key` | Delete an entry |
| `list_memories` | — | List all keys |

The `namespace` parameter scopes entries per-user or per-session — two `MemoryToolset` instances on the same store with different namespaces never see each other's data.
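The isolation guarantee can be pictured as key prefixing over one shared store. This is a hypothetical sketch of the idea (`NamespacedStore` is not one of the package's classes, and the actual storage layout is an implementation detail):

```python
class NamespacedStore:
    """Two views over one backing dict; keys are prefixed per namespace."""

    def __init__(self, store: dict, namespace: str):
        self.store = store
        self.ns = namespace

    def save(self, key, value):
        self.store[f"{self.ns}:{key}"] = value

    def recall(self, key):
        return self.store.get(f"{self.ns}:{key}")

    def list_keys(self):
        prefix = f"{self.ns}:"
        return [k[len(prefix):] for k in self.store if k.startswith(prefix)]

shared = {}
alice = NamespacedStore(shared, "user_alice")
bob = NamespacedStore(shared, "user_bob")

alice.save("favourite_colour", "green")
print(bob.recall("favourite_colour"))  # None — namespaces are isolated
print(alice.list_keys())               # ['favourite_colour']
```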
### Memory backends

| Class | Persistence | `search()` |
|---|---|---|
| `InMemoryStore` | None (lost on restart) | Substring |
| `JsonFileStore` | Local `.json` file | Substring |

Implement `AbstractMemoryStore` for custom backends (Redis, vector DB, etc.). The only method that varies meaningfully is `search()` — simple backends use substring matching, vector backends use embeddings.

```python
from mindtrace.agents import AbstractMemoryStore, MemoryEntry

class RedisMemoryStore(AbstractMemoryStore):
    async def save(self, key, value, metadata=None): ...
    async def get(self, key) -> MemoryEntry | None: ...
    async def search(self, query, top_k=5) -> list[MemoryEntry]: ...
    async def delete(self, key): ...
    async def list_keys(self) -> list[str]: ...
```

Install optional extras for third-party backends:

```bash
pip install 'mindtrace-agents[memory-redis]'   # Redis
pip install 'mindtrace-agents[memory-vector]'  # ChromaDB
```

---

## Distributed Execution

### Task queues

`AbstractTaskQueue` decouples task submission from execution. Use `LocalTaskQueue` for single-process orchestration, `RabbitMQTaskQueue` for multi-worker deployments.

```python
from mindtrace.agents import LocalTaskQueue, AgentTask

queue = LocalTaskQueue()
queue.register(researcher)  # agents must be registered by name

task_id = await queue.submit(AgentTask(
    agent_name="researcher",
    input="What caused the 2008 financial crisis?",
    deps=my_deps,
    session_id="s1",
))
result = await queue.get_result(task_id)
```

`TaskStatus` values: `PENDING → RUNNING → DONE | FAILED`

### DistributedAgent

`DistributedAgent` wraps any agent and routes `run()` through a task queue. The API is identical to `MindtraceAgent` — callers don't need to change.

```python
from mindtrace.agents import DistributedAgent

distributed_researcher = DistributedAgent(researcher, task_queue=queue)
result = await distributed_researcher.run("Research topic")  # executes via queue
```

### RabbitMQ

Requires `aio-pika`:

```bash
pip install 'mindtrace-agents[distributed-rabbitmq]'
```

**Caller side** (submit tasks):

```python
from mindtrace.agents.execution.rabbitmq import RabbitMQTaskQueue

queue = RabbitMQTaskQueue(url="amqp://guest:guest@localhost/")
distributed = DistributedAgent(researcher, task_queue=queue)
result = await distributed.run("Research topic")
```

**Worker side** (consume and execute):

```python
queue = RabbitMQTaskQueue(url="amqp://guest:guest@localhost/")
await queue.serve(researcher)  # blocks; run N replicas for parallelism
```

RabbitMQ round-robins tasks across replicas automatically. `AgentTask` is serialised with `pickle` — `deps` must be pickle-serialisable. For cross-process use, put connection config in `deps` (not live connections).
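The pickle constraint is easy to check up front. A stdlib sketch (`WorkerDeps` is a hypothetical deps class, not part of the package): plain connection config round-trips cleanly, while live handles such as sockets do not.

```python
import pickle
import socket
from dataclasses import dataclass

@dataclass
class WorkerDeps:
    # Plain configuration pickles cleanly across processes.
    redis_url: str
    db_dsn: str

deps = WorkerDeps(redis_url="redis://localhost:6379/0", db_dsn="postgresql://localhost/app")
restored = pickle.loads(pickle.dumps(deps))
print(restored == deps)  # True

# A live handle, by contrast, cannot be pickled:
s = socket.socket()
try:
    pickle.dumps(s)
    picklable = True
except (TypeError, pickle.PicklingError):
    picklable = False
finally:
    s.close()
print(picklable)  # False
```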
---

## Package layout

```
mindtrace/agents/
├── __init__.py          # public API
├── _run_context.py      # RunContext dataclass
├── _function_schema.py  # type introspection + Pydantic validation
├── _tool_manager.py     # tool dispatch + retry
├── prompts.py           # UserPromptPart, BinaryContent, ImageUrl
├── profiles/            # ModelProfile capability flags
├── events/              # streaming event types
├── messages/            # ModelMessage, parts (incl. HandoffPart), builder
├── tools/               # Tool, ToolDefinition
├── toolsets/            # AbstractToolset, FunctionToolset, CompoundToolset, MCPToolset, ToolFilter, FilteredToolset
├── providers/           # Provider ABC + OpenAI, Ollama, Gemini
├── models/              # Model ABC + OpenAIChatModel
├── callbacks/           # AgentCallbacks + _invoke helper
├── history/             # AbstractHistoryStrategy + InMemoryHistory
├── memory/              # AbstractMemoryStore, InMemoryStore, JsonFileStore, MemoryToolset
├── execution/           # AbstractTaskQueue, AgentTask, LocalTaskQueue, RabbitMQTaskQueue
└── core/                # AbstractMindtraceAgent, MindtraceAgent, WrapperAgent, DistributedAgent
```