agentspan 0.0.3__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- agentspan-0.0.3/PKG-INFO +373 -0
- agentspan-0.0.3/README.md +346 -0
- agentspan-0.0.3/pyproject.toml +71 -0
- agentspan-0.0.3/setup.cfg +4 -0
- agentspan-0.0.3/src/agentspan/__init__.py +3 -0
- agentspan-0.0.3/src/agentspan/agents/__init__.py +198 -0
- agentspan-0.0.3/src/agentspan/agents/_internal/__init__.py +4 -0
- agentspan-0.0.3/src/agentspan/agents/_internal/model_parser.py +85 -0
- agentspan-0.0.3/src/agentspan/agents/_internal/provider_registry.py +60 -0
- agentspan-0.0.3/src/agentspan/agents/_internal/schema_utils.py +141 -0
- agentspan-0.0.3/src/agentspan/agents/agent.py +461 -0
- agentspan-0.0.3/src/agentspan/agents/code_execution_config.py +243 -0
- agentspan-0.0.3/src/agentspan/agents/code_executor.py +521 -0
- agentspan-0.0.3/src/agentspan/agents/config_serializer.py +337 -0
- agentspan-0.0.3/src/agentspan/agents/ext.py +265 -0
- agentspan-0.0.3/src/agentspan/agents/frameworks/__init__.py +18 -0
- agentspan-0.0.3/src/agentspan/agents/frameworks/serializer.py +335 -0
- agentspan-0.0.3/src/agentspan/agents/guardrail.py +398 -0
- agentspan-0.0.3/src/agentspan/agents/handoff.py +133 -0
- agentspan-0.0.3/src/agentspan/agents/memory.py +109 -0
- agentspan-0.0.3/src/agentspan/agents/result.py +517 -0
- agentspan-0.0.3/src/agentspan/agents/run.py +393 -0
- agentspan-0.0.3/src/agentspan/agents/runtime/__init__.py +9 -0
- agentspan-0.0.3/src/agentspan/agents/runtime/_dispatch.py +223 -0
- agentspan-0.0.3/src/agentspan/agents/runtime/config.py +116 -0
- agentspan-0.0.3/src/agentspan/agents/runtime/http_client.py +214 -0
- agentspan-0.0.3/src/agentspan/agents/runtime/mcp_discovery.py +162 -0
- agentspan-0.0.3/src/agentspan/agents/runtime/runtime.py +2981 -0
- agentspan-0.0.3/src/agentspan/agents/runtime/tool_registry.py +79 -0
- agentspan-0.0.3/src/agentspan/agents/runtime/worker_manager.py +121 -0
- agentspan-0.0.3/src/agentspan/agents/semantic_memory.py +253 -0
- agentspan-0.0.3/src/agentspan/agents/termination.py +306 -0
- agentspan-0.0.3/src/agentspan/agents/tool.py +633 -0
- agentspan-0.0.3/src/agentspan/agents/tracing.py +246 -0
- agentspan-0.0.3/src/agentspan.egg-info/PKG-INFO +373 -0
- agentspan-0.0.3/src/agentspan.egg-info/SOURCES.txt +37 -0
- agentspan-0.0.3/src/agentspan.egg-info/dependency_links.txt +1 -0
- agentspan-0.0.3/src/agentspan.egg-info/requires.txt +9 -0
- agentspan-0.0.3/src/agentspan.egg-info/top_level.txt +1 -0
agentspan-0.0.3/PKG-INFO
ADDED
@@ -0,0 +1,373 @@
Metadata-Version: 2.4
Name: agentspan
Version: 0.0.3
Summary: AgentSpan SDK — durable, scalable, observable AI agents
License: Apache-2.0
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: conductor-python>=1.3.5-rc1
Requires-Dist: httpx>=0.24
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21; extra == "dev"
Requires-Dist: pytest-cov>=4.0; extra == "dev"
Requires-Dist: ruff>=0.4; extra == "dev"
Requires-Dist: mypy>=1.10; extra == "dev"

# Conductor Agents SDK for Python

The **agent-first** Python SDK for [Conductor](https://github.com/conductor-oss/conductor) — build durable, scalable, observable AI agents in 5 lines of code.

```python
from agentspan.agents import Agent, tool, run

@tool
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    return f"72F and sunny in {city}"

agent = Agent(name="weatherbot", model="openai/gpt-4o", tools=[get_weather])
result = run(agent, "What's the weather in NYC?")
```

Every other agent SDK runs agents in-memory. When the process dies, the agent dies. Conductor Agents gives you **durable, scalable, observable agent execution** — agents that survive crashes, tools that scale independently, and human-in-the-loop workflows that can pause for days.

## Why Conductor Agents?

| Capability | In-memory SDKs | Conductor Agents |
|---|---|---|
| Process crashes | Agent dies | **Agent continues** (workflow-backed) |
| Tool scaling | Single process | **Distributed workers, any language** |
| Human approval | Minutes at best | **Days/weeks** (native WaitTask) |
| Debugging | Log files | **Visual workflow UI** |
| Long-running | Process-bound | **Weeks** (workflow-bound) |
| Observability | Limited traces | **Prometheus + UI + execution history** |

## Quickstart

### Prerequisites

- Python 3.9+
- A running Conductor server with LLM support
- An LLM provider (e.g., `openai`) configured in Conductor

### Install

```bash
pip install agentspan-sdk
```

### Configure

```bash
export CONDUCTOR_SERVER_URL=http://localhost:7001/api
# For Orkes Cloud:
# export CONDUCTOR_AUTH_KEY=your_key
# export CONDUCTOR_AUTH_SECRET=your_secret
```

### Hello World

```python
from agentspan.agents import Agent, run

agent = Agent(name="hello", model="openai/gpt-4o")
result = run(agent, "Say hello and tell me a fun fact.")
print(result.output)
print(f"Workflow: {result.workflow_id}")  # View in Conductor UI
```

### Agent with Tools

```python
from agentspan.agents import Agent, tool, run

@tool
def get_weather(city: str) -> dict:
    """Get current weather for a city."""
    return {"city": city, "temp": 72, "condition": "Sunny"}

@tool
def calculate(expression: str) -> dict:
    """Evaluate a math expression."""
    return {"result": eval(expression)}

agent = Agent(
    name="assistant",
    model="openai/gpt-4o",
    tools=[get_weather, calculate],
    instructions="You are a helpful assistant.",
)

result = run(agent, "What's the weather in NYC? Also, what's 42 * 17?")
print(result.output)
```
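The `calculate` tool above uses `eval` for brevity, which is fine for a demo but unsafe if the expression can come from untrusted input. A minimal stdlib sketch of a restricted arithmetic evaluator (illustrative only, not part of the SDK):

```python
import ast
import operator

# Operators permitted by the restricted evaluator (demo sketch).
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate basic arithmetic without the risks of eval()."""
    def _eval(node: ast.AST) -> float:
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval").body)

print(safe_eval("42 * 17"))  # 714
```

Dropping `safe_eval(expression)` in place of `eval(expression)` keeps the tool's behavior for arithmetic while rejecting anything else.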

### Structured Output

```python
from pydantic import BaseModel
from agentspan.agents import Agent, tool, run

class WeatherReport(BaseModel):
    city: str
    temperature: float
    condition: str
    recommendation: str

@tool
def get_weather(city: str) -> dict:
    """Get weather data for a city."""
    return {"city": city, "temp_f": 72, "condition": "Sunny", "humidity": 45}

agent = Agent(
    name="weather_reporter",
    model="openai/gpt-4o",
    tools=[get_weather],
    output_type=WeatherReport,
)

result = run(agent, "What's the weather in NYC?")
report: WeatherReport = result.output
print(f"{report.city}: {report.temperature}F, {report.condition}")
print(f"Recommendation: {report.recommendation}")
```

### Multi-Agent Handoffs

```python
from agentspan.agents import Agent, tool, run

@tool
def check_balance(account_id: str) -> dict:
    """Check account balance."""
    return {"account_id": account_id, "balance": 5432.10}

billing = Agent(
    name="billing",
    model="openai/gpt-4o",
    instructions="Handle billing: balances, payments, invoices.",
    tools=[check_balance],
)

technical = Agent(
    name="technical",
    model="openai/gpt-4o",
    instructions="Handle technical: orders, shipping, returns.",
)

support = Agent(
    name="support",
    model="openai/gpt-4o",
    instructions="Route customer requests to billing or technical.",
    agents=[billing, technical],
    strategy="handoff",  # LLM chooses which sub-agent handles the request
)

result = run(support, "What's the balance on account ACC-123?")
```

### Sequential Pipeline

```python
from agentspan.agents import Agent, run

researcher = Agent(name="researcher", model="openai/gpt-4o",
                   instructions="Research the topic and provide key facts.")
writer = Agent(name="writer", model="openai/gpt-4o",
               instructions="Write an engaging article from the research.")
editor = Agent(name="editor", model="openai/gpt-4o",
               instructions="Polish the article for publication.")

# >> operator creates a sequential pipeline
pipeline = researcher >> writer >> editor
result = run(pipeline, "AI agents in software development")
print(result.output)
```
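The `>>` composition reads naturally because Python lets objects overload the operator via `__rshift__`. An illustrative stdlib sketch of the pattern (toy classes, not the SDK's actual implementation):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    """A toy pipeline stage: a name plus a text-transforming function."""
    name: str
    fn: Callable[[str], str]

    def __rshift__(self, other: "Stage") -> "Pipeline":
        # stage >> stage starts a two-stage pipeline
        return Pipeline([self, other])

@dataclass
class Pipeline:
    stages: List[Stage]

    def __rshift__(self, other: Stage) -> "Pipeline":
        # pipeline >> stage appends, so a >> b >> c chains left to right
        return Pipeline(self.stages + [other])

    def run(self, text: str) -> str:
        for stage in self.stages:
            text = stage.fn(text)
        return text

strip = Stage("strip", str.strip)
upper = Stage("upper", str.upper)
exclaim = Stage("exclaim", lambda s: s + "!")

pipeline = strip >> upper >> exclaim  # same shape as researcher >> writer >> editor
print(pipeline.run("  hello  "))  # HELLO!
```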

### Parallel Agents

```python
from agentspan.agents import Agent, run

market = Agent(name="market", model="openai/gpt-4o",
               instructions="Analyze market size, growth, key players.")
risk = Agent(name="risk", model="openai/gpt-4o",
             instructions="Analyze regulatory, technical, competitive risks.")

analysis = Agent(
    name="analysis",
    model="openai/gpt-4o",
    agents=[market, risk],
    strategy="parallel",  # Both run concurrently, results aggregated
)

result = run(analysis, "Launching an AI healthcare tool in the US")
```

### Human-in-the-Loop

```python
from agentspan.agents import Agent, tool, start

@tool(approval_required=True)
def transfer_funds(from_acct: str, to_acct: str, amount: float) -> dict:
    """Transfer funds. Requires human approval."""
    return {"status": "completed", "amount": amount}

agent = Agent(name="banker", model="openai/gpt-4o", tools=[transfer_funds])

handle = start(agent, "Transfer $5000 from checking to savings")
# Workflow pauses at transfer_funds...

# Hours or days later, from any process:
status = handle.get_status()
if status.is_waiting:
    handle.approve()  # Or: handle.reject("Amount too high")
```
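Because approval can arrive hours later, a checking process typically polls the handle on an interval. A generic stdlib polling helper in that spirit (illustrative sketch; in practice the `check` callable would wrap `handle.get_status()`):

```python
import time
from typing import Callable, Optional, TypeVar

T = TypeVar("T")

def poll_until(check: Callable[[], Optional[T]],
               interval_s: float = 1.0,
               timeout_s: float = 60.0) -> Optional[T]:
    """Call check() until it returns a value or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = check()
        if result is not None:
            return result
        time.sleep(interval_s)
    return None  # timed out

# Simulated check: "waiting" appears on the third poll.
calls = {"n": 0}
def fake_status():
    calls["n"] += 1
    return "waiting_for_approval" if calls["n"] >= 3 else None

print(poll_until(fake_status, interval_s=0.01))  # waiting_for_approval
```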

### Guardrails

```python
from agentspan.agents import Agent, Guardrail, GuardrailResult, OnFail, guardrail, run

@guardrail
def word_limit(content: str) -> GuardrailResult:
    """Keep responses concise."""
    if len(content.split()) > 500:
        return GuardrailResult(passed=False, message="Too long. Be more concise.")
    return GuardrailResult(passed=True)

agent = Agent(
    name="concise_bot",
    model="openai/gpt-4o",
    guardrails=[Guardrail(word_limit, on_fail=OnFail.RETRY)],
)

result = run(agent, "Explain quantum computing.")
```
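The examples catalog also includes a `RegexGuardrail` with block/allow patterns. The underlying check can be sketched as a plain function with stdlib `re` (names and signature here are illustrative, not the SDK API):

```python
import re
from typing import Iterable, Tuple

def regex_check(content: str,
                block: Iterable[str] = (),
                allow: Iterable[str] = ()) -> Tuple[bool, str]:
    """Return (passed, message): fail on any blocked pattern,
    or when a required (allow) pattern is missing."""
    for pattern in block:
        if re.search(pattern, content):
            return False, f"blocked pattern matched: {pattern}"
    for pattern in allow:
        if not re.search(pattern, content):
            return False, f"required pattern missing: {pattern}"
    return True, "ok"

# Block anything that looks like an email address.
passed, msg = regex_check("Contact me at alice@example.com",
                          block=[r"[\w.]+@[\w.]+"])
print(passed, msg)
```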

### Streaming

```python
from agentspan.agents import Agent, stream

agent = Agent(name="writer", model="openai/gpt-4o")
for event in stream(agent, "Write a haiku about Python"):
    match event.type:
        case "tool_call": print(f"Calling {event.tool_name}...")
        case "thinking": print(f"Thinking: {event.content}")
        case "guardrail_pass": print(f"Guardrail passed: {event.guardrail_name}")
        case "guardrail_fail": print(f"Guardrail failed: {event.guardrail_name}")
        case "done": print(f"\n{event.output}")
```

### Server-Side Tools (No Workers Needed)

```python
from agentspan.agents import Agent, http_tool, mcp_tool, run

# HTTP endpoint as a tool — Conductor makes the call server-side
weather_api = http_tool(
    name="get_weather",
    description="Get weather for a city",
    url="https://api.weather.com/v1/current",
    method="GET",
    input_schema={"type": "object", "properties": {"city": {"type": "string"}}},
)

# MCP server tools — discovered at runtime
github = mcp_tool(server_url="http://localhost:8080/mcp")

agent = Agent(name="assistant", model="openai/gpt-4o", tools=[weather_api, github])
```

## API Reference

See [AGENTS.md](AGENTS.md) for the complete API reference and architecture guide.

## Examples

| Example | Description |
|---|---|
| [`01_basic_agent.py`](examples/01_basic_agent.py) | 5-line hello world |
| [`02_tools.py`](examples/02_tools.py) | Multiple tools, approval |
| [`02a_simple_tools.py`](examples/02a_simple_tools.py) | Two tools, LLM picks the right one |
| [`02b_multi_step_tools.py`](examples/02b_multi_step_tools.py) | Chained lookups and calculations |
| [`03_structured_output.py`](examples/03_structured_output.py) | Pydantic output types |
| [`04_http_and_mcp_tools.py`](examples/04_http_and_mcp_tools.py) | Server-side HTTP and MCP tools |
| [`04_mcp_weather.py`](examples/04_mcp_weather.py) | MCP server tools (live weather) |
| [`05_handoffs.py`](examples/05_handoffs.py) | Agent delegation |
| [`06_sequential_pipeline.py`](examples/06_sequential_pipeline.py) | Agent >> Agent >> Agent |
| [`07_parallel_agents.py`](examples/07_parallel_agents.py) | Fan-out / fan-in |
| [`08_router_agent.py`](examples/08_router_agent.py) | LLM routing to specialists |
| [`09_human_in_the_loop.py`](examples/09_human_in_the_loop.py) | Approval workflows |
| [`09b_hitl_with_feedback.py`](examples/09b_hitl_with_feedback.py) | Custom feedback (respond API) |
| [`09c_hitl_streaming.py`](examples/09c_hitl_streaming.py) | Streaming + HITL approval |
| [`10_guardrails.py`](examples/10_guardrails.py) | Output validation + retry |
| [`11_streaming.py`](examples/11_streaming.py) | Real-time events |
| [`12_long_running.py`](examples/12_long_running.py) | Fire-and-forget with polling |
| [`13_hierarchical_agents.py`](examples/13_hierarchical_agents.py) | Nested agent teams |
| [`14_existing_workers.py`](examples/14_existing_workers.py) | Using existing Conductor workers as tools |
| [`15_agent_discussion.py`](examples/15_agent_discussion.py) | Round-robin debate between agents |
| [`16_random_strategy.py`](examples/16_random_strategy.py) | Random agent selection |
| [`17_swarm_orchestration.py`](examples/17_swarm_orchestration.py) | Swarm with handoff conditions |
| [`18_manual_selection.py`](examples/18_manual_selection.py) | Human picks which agent speaks |
| [`19_composable_termination.py`](examples/19_composable_termination.py) | Composable termination conditions |
| [`20_constrained_transitions.py`](examples/20_constrained_transitions.py) | Restricted agent transitions |
| [`21_regex_guardrails.py`](examples/21_regex_guardrails.py) | RegexGuardrail (block/allow patterns) |
| [`22_llm_guardrails.py`](examples/22_llm_guardrails.py) | LLMGuardrail (AI judge) |
| [`23_token_tracking.py`](examples/23_token_tracking.py) | Token usage and cost tracking |
| [`24_code_execution.py`](examples/24_code_execution.py) | Code execution sandboxes |
| [`25_semantic_memory.py`](examples/25_semantic_memory.py) | Long-term memory with retrieval |
| [`26_opentelemetry_tracing.py`](examples/26_opentelemetry_tracing.py) | OpenTelemetry spans for observability |
| [`27_user_proxy_agent.py`](examples/27_user_proxy_agent.py) | Human stand-in for interactive conversations |
| [`28_gpt_assistant_agent.py`](examples/28_gpt_assistant_agent.py) | OpenAI Assistants API wrapper |
| [`29_agent_introductions.py`](examples/29_agent_introductions.py) | Agents introduce themselves |
| [`30_multimodal_agent.py`](examples/30_multimodal_agent.py) | Image/video analysis with vision models |
| [`31_tool_guardrails.py`](examples/31_tool_guardrails.py) | Pre-execution validation on tool inputs |
| [`32_human_guardrail.py`](examples/32_human_guardrail.py) | Pause for human review on guardrail failure |
| [`33_external_workers.py`](examples/33_external_workers.py) | Reference workers in other services |
| [`34_prompt_templates.py`](examples/34_prompt_templates.py) | Reusable server-side prompt templates |
| [`35_standalone_guardrails.py`](examples/35_standalone_guardrails.py) | Guardrails as plain callables (no agent) |
| [`36_simple_agent_guardrails.py`](examples/36_simple_agent_guardrails.py) | Guardrails on agents without tools |
| [`37_fix_guardrail.py`](examples/37_fix_guardrail.py) | Auto-correct output with on_fail="fix" |
| [`38_tech_trends.py`](examples/38_tech_trends.py) | Tech trends research with tools |
| [`39_local_code_execution.py`](examples/39_local_code_execution.py) | Local code execution sandbox |
| [`40_media_generation_agent.py`](examples/40_media_generation_agent.py) | Image/audio/video generation tools |
| [`41_sequential_pipeline_tools.py`](examples/41_sequential_pipeline_tools.py) | Sequential pipeline with per-stage tools |
| [`42_security_testing.py`](examples/42_security_testing.py) | Red-team security testing pipeline |
| [`43_data_security_pipeline.py`](examples/43_data_security_pipeline.py) | Data fetch, redact, and respond pipeline |
| [`44_safety_guardrails.py`](examples/44_safety_guardrails.py) | PII detection and sanitization pipeline |

### Google ADK Compatibility

We provide a full compatibility layer for [Google ADK](https://github.com/google/adk-python) — use the same `google.adk.agents` API backed by Conductor's durable execution. See [`examples/adk/`](examples/adk/) for 28 examples covering Agent, SequentialAgent, ParallelAgent, LoopAgent, sub_agents, AgentTool, callbacks, and more.

```python
from google.adk.agents import Agent, SequentialAgent

researcher = Agent(name="researcher", model="gemini-2.0-flash",
                   instruction="Research the topic.", tools=[search])
writer = Agent(name="writer", model="gemini-2.0-flash",
               instruction="Write an article from the research.")

pipeline = SequentialAgent(name="pipeline", sub_agents=[researcher, writer])
```

See [ADK Samples Status](examples/adk/ADK_SAMPLES_STATUS.md) for full Google ADK coverage details.

## License

Apache 2.0