chuk-tool-processor 0.7.0__py3-none-any.whl → 0.10__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release.
- chuk_tool_processor/__init__.py +114 -0
- chuk_tool_processor/core/__init__.py +31 -0
- chuk_tool_processor/core/exceptions.py +218 -12
- chuk_tool_processor/core/processor.py +391 -43
- chuk_tool_processor/execution/wrappers/__init__.py +42 -0
- chuk_tool_processor/execution/wrappers/caching.py +43 -10
- chuk_tool_processor/execution/wrappers/circuit_breaker.py +370 -0
- chuk_tool_processor/execution/wrappers/rate_limiting.py +31 -1
- chuk_tool_processor/execution/wrappers/retry.py +93 -53
- chuk_tool_processor/logging/__init__.py +5 -8
- chuk_tool_processor/logging/context.py +2 -5
- chuk_tool_processor/mcp/__init__.py +3 -0
- chuk_tool_processor/mcp/mcp_tool.py +8 -3
- chuk_tool_processor/mcp/models.py +87 -0
- chuk_tool_processor/mcp/setup_mcp_http_streamable.py +38 -2
- chuk_tool_processor/mcp/setup_mcp_sse.py +38 -2
- chuk_tool_processor/mcp/setup_mcp_stdio.py +92 -12
- chuk_tool_processor/mcp/stream_manager.py +109 -6
- chuk_tool_processor/mcp/transport/http_streamable_transport.py +18 -5
- chuk_tool_processor/mcp/transport/sse_transport.py +16 -3
- chuk_tool_processor/models/__init__.py +20 -0
- chuk_tool_processor/models/tool_call.py +34 -1
- chuk_tool_processor/models/tool_export_mixin.py +4 -4
- chuk_tool_processor/models/tool_spec.py +350 -0
- chuk_tool_processor/models/validated_tool.py +22 -2
- chuk_tool_processor/observability/__init__.py +30 -0
- chuk_tool_processor/observability/metrics.py +312 -0
- chuk_tool_processor/observability/setup.py +105 -0
- chuk_tool_processor/observability/tracing.py +346 -0
- chuk_tool_processor/py.typed +0 -0
- chuk_tool_processor/registry/interface.py +7 -7
- chuk_tool_processor/registry/providers/__init__.py +2 -1
- chuk_tool_processor/registry/tool_export.py +1 -6
- chuk_tool_processor-0.10.dist-info/METADATA +2326 -0
- chuk_tool_processor-0.10.dist-info/RECORD +69 -0
- chuk_tool_processor-0.7.0.dist-info/METADATA +0 -1230
- chuk_tool_processor-0.7.0.dist-info/RECORD +0 -61
- {chuk_tool_processor-0.7.0.dist-info → chuk_tool_processor-0.10.dist-info}/WHEEL +0 -0
- {chuk_tool_processor-0.7.0.dist-info → chuk_tool_processor-0.10.dist-info}/top_level.txt +0 -0
@@ -0,0 +1,2326 @@
Metadata-Version: 2.4
Name: chuk-tool-processor
Version: 0.10
Summary: Async-native framework for registering, discovering, and executing tools referenced in LLM responses
Author-email: CHUK Team <chrishayuk@somejunkmailbox.com>
Maintainer-email: CHUK Team <chrishayuk@somejunkmailbox.com>
License: MIT
Keywords: llm,tools,async,ai,openai,mcp,model-context-protocol,tool-calling,function-calling
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Framework :: AsyncIO
Classifier: Typing :: Typed
Requires-Python: >=3.11
Description-Content-Type: text/markdown
Requires-Dist: chuk-mcp>=0.8.1
Requires-Dist: dotenv>=0.9.9
Requires-Dist: psutil>=7.0.0
Requires-Dist: pydantic>=2.11.3
Requires-Dist: uuid>=1.30

# CHUK Tool Processor — Production-grade execution for LLM tool calls

[](https://pypi.org/project/chuk-tool-processor/)
[](https://pypi.org/project/chuk-tool-processor/)
[](LICENSE)
[](https://www.python.org/dev/peps/pep-0561/)
[](https://pypi.org/project/chuk-tool-processor/)
[](docs/OBSERVABILITY.md)

**Reliable tool execution for LLMs — timeouts, retries, caching, rate limits, circuit breakers, and MCP integration — in one composable layer.**

---

## The Missing Layer for Reliable Tool Execution

LLMs are good at *calling* tools. The hard part is **executing** those tools reliably.

**CHUK Tool Processor:**
- Parses tool calls from any model (Anthropic XML, OpenAI `tool_calls`, JSON)
- Executes them with **timeouts, retries, caching, rate limits, circuit breaker, observability**
- Runs tools locally, in **isolated subprocesses**, or **remote via MCP**

CHUK Tool Processor is the execution layer between LLM responses and real tools.

It sits **below** agent frameworks and prompt orchestration, and **above** raw tool implementations.

```
        LLM Output
            ↓
   CHUK Tool Processor
            ↓
┌──────────────┬────────────────────┐
│ Local Tools  │ Remote Tools (MCP) │
└──────────────┴────────────────────┘
```

**How it works internally:**

```
LLM Output
    ↓
Parsers (XML / OpenAI / JSON)
    ↓
┌─────────────────────────────┐
│    Execution Middleware     │
│   (Applied in this order)   │
│  • Cache                    │
│  • Rate Limit               │
│  • Retry (with backoff)     │
│  • Circuit Breaker          │
└─────────────────────────────┘
    ↓
Execution Strategy
┌──────────────────────┐
│ • InProcess          │ ← Fast, trusted
│ • Isolated/Subprocess│ ← Safe, untrusted
│ • Remote via MCP     │ ← Distributed
└──────────────────────┘
```
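
The middleware order above can be sketched as plain function wrappers. This is an illustrative sketch, not the library's actual classes: each layer wraps the next, so a cache hit short-circuits before rate limiting, retries, or the breaker ever run.

```python
import asyncio

# Illustrative sketch of the middleware order above; not the library's real API.
# Each wrapper takes an async "call the tool" function and returns a new one,
# so the outermost layer runs first and can short-circuit the layers below it.

def with_cache(call, cache):
    async def wrapped(name, args):
        key = (name, tuple(sorted(args.items())))
        if key in cache:               # cache hit: nothing below ever runs
            return cache[key]
        result = await call(name, args)
        cache[key] = result
        return result
    return wrapped

def with_retry(call, attempts=3):
    async def wrapped(name, args):
        for i in range(attempts):
            try:
                return await call(name, args)
            except Exception:
                if i == attempts - 1:  # out of attempts: propagate the error
                    raise
    return wrapped

async def execute(name, args):         # innermost layer: the execution strategy
    return {"tool": name, "ok": True, **args}

cache: dict = {}
# Compose outermost-first, mirroring the order above: Cache, then Retry.
pipeline = with_cache(with_retry(execute), cache)

result = asyncio.run(pipeline("weather", {"city": "SF"}))
print(result)  # {'tool': 'weather', 'ok': True, 'city': 'SF'}
```

The same composition idea applies to the real wrappers: each one only needs to know about the callable beneath it.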

Works with OpenAI, Anthropic, local models (Ollama/MLX/vLLM), and any framework (LangChain, LlamaIndex, custom).

## Executive TL;DR

* **Parse any format:** `XML` (Anthropic), `OpenAI tool_calls`, or raw `JSON`
* **Execute with production policies:** timeouts/retries/cache/rate-limits/circuit-breaker/idempotency
* **Run anywhere:** locally (fast), isolated (subprocess sandbox), or remote via MCP (HTTP/STDIO/SSE)

```python
import asyncio
from chuk_tool_processor import ToolProcessor, register_tool, initialize

@register_tool(name="weather")
class WeatherTool:
    async def execute(self, city: str) -> dict:
        return {"temp": 72, "condition": "sunny", "city": city}

async def main():
    await initialize()
    async with ToolProcessor(enable_caching=True, enable_retries=True) as p:
        # Works with OpenAI, Anthropic, or JSON formats
        result = await p.process('<tool name="weather" args=\'{"city": "SF"}\'/>')
        print(result[0].result)  # {'temp': 72, 'condition': 'sunny', 'city': 'SF'}

asyncio.run(main())
```

> **If you only remember three things:**
>
> 1. **Parse** `XML`, `OpenAI tool_calls`, or raw `JSON` automatically
> 2. **Execute** with timeouts/retries/cache/rate-limits/circuit-breaker
> 3. **Run** tools locally, isolated (subprocess), or remote via MCP

## When to Use This

Use **CHUK Tool Processor** when:
- Your LLM calls tools or APIs
- You need **retries, timeouts, caching, or rate limits**
- You need to **run untrusted tools safely**
- Your tools are **local or remote (MCP)**

Do **not** use this if:
- You want an agent framework
- You want conversation flow/memory orchestration

**This is the execution layer, not the agent.**

> **Not a framework.**
> If LangChain/LlamaIndex help decide *which* tool to call,
> CHUK Tool Processor makes sure the tool call **actually succeeds**.

## Table of Contents

- [The Problem](#the-problem)
- [Why chuk-tool-processor?](#why-chuk-tool-processor)
- [Compatibility Matrix](#compatibility-matrix)
- [Developer Experience Highlights](#developer-experience-highlights)
- [Quick Start](#quick-start)
- [Documentation Quick Reference](#documentation-quick-reference)
- [Choose Your Path](#choose-your-path)
- [Core Concepts](#core-concepts)
- [Getting Started](#getting-started)
- [Advanced Topics](#advanced-topics)
- [Configuration](#configuration)
- [Architecture Principles](#architecture-principles)
- [Examples](#examples)
- [FAQ](#faq)
- [Comparison with Other Tools](#comparison-with-other-tools)
- [Development & Publishing](#development--publishing)
- [Stability & Versioning](#stability--versioning)
- [Contributing & Support](#contributing--support)

## The Problem

LLMs generate tool calls. **The hard part is executing them reliably.**

CHUK Tool Processor **is that execution layer.**

## Why chuk-tool-processor?

**Composable execution layers:**

```
┌─────────────────────────────────┐
│     Your LLM Application        │
│  (handles prompts, responses)   │
└────────────┬────────────────────┘
             │ tool calls
             ▼
┌─────────────────────────────────┐
│  Caching Wrapper                │ ← Cache expensive results (idempotency keys)
├─────────────────────────────────┤
│  Rate Limiting Wrapper          │ ← Prevent API abuse
├─────────────────────────────────┤
│  Retry Wrapper                  │ ← Handle transient failures (exponential backoff)
├─────────────────────────────────┤
│  Circuit Breaker Wrapper        │ ← Prevent cascading failures (CLOSED/OPEN/HALF_OPEN)
├─────────────────────────────────┤
│  Execution Strategy             │ ← How to run tools
│  • InProcess (fast)             │
│  • Isolated (subprocess)        │
├─────────────────────────────────┤
│  Tool Registry                  │ ← Your registered tools
└─────────────────────────────────┘
```

Each layer is **optional** and **configurable**. Mix and match what you need.
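
The circuit breaker layer follows the classic CLOSED/OPEN/HALF_OPEN cycle named in the diagram. A minimal sketch of that state machine (illustrative only; the shipped `circuit_breaker.py` wrapper is more complete and async-aware):

```python
import time

# Minimal CLOSED -> OPEN -> HALF_OPEN breaker sketch; not the library's API.
class CircuitBreaker:
    def __init__(self, failure_threshold=3, recovery_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.recovery_timeout = recovery_timeout
        self.failures = 0
        self.state = "CLOSED"
        self.opened_at = 0.0

    def allow(self) -> bool:
        if self.state == "OPEN":
            if time.monotonic() - self.opened_at >= self.recovery_timeout:
                self.state = "HALF_OPEN"  # let one probe call through
                return True
            return False                  # fail fast: no call is made at all
        return True

    def record_success(self):
        self.failures = 0
        self.state = "CLOSED"

    def record_failure(self):
        self.failures += 1
        if self.state == "HALF_OPEN" or self.failures >= self.failure_threshold:
            self.state = "OPEN"
            self.opened_at = time.monotonic()

breaker = CircuitBreaker(failure_threshold=2, recovery_timeout=0.01)
breaker.record_failure(); breaker.record_failure()
print(breaker.state)    # OPEN: subsequent calls fail fast
time.sleep(0.02)
print(breaker.allow())  # True: breaker moves to HALF_OPEN for one probe
breaker.record_success()
print(breaker.state)    # CLOSED
```

The key property is that an OPEN breaker rejects calls without touching the failing backend, which is what stops one broken tool from dragging down the rest of a batch.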

### It's a Building Block, Not a Framework

Unlike full-fledged LLM frameworks (LangChain, LlamaIndex, etc.), CHUK Tool Processor:

- ✅ **Does one thing well**: Process tool calls reliably
- ✅ **Plugs into any LLM app**: Works with any framework or no framework
- ✅ **Composable by design**: Stack strategies and wrappers like middleware
- ✅ **No opinions about your LLM**: Bring your own OpenAI, Anthropic, or local model
- ❌ **Doesn't manage conversations**: That's your job
- ❌ **Doesn't do prompt engineering**: Use whatever prompting you want
- ❌ **Doesn't bundle an LLM client**: Use any client library you prefer

### It's Built for Production

The difference between research code and production code is handling the edges. CHUK Tool Processor includes:

- ✅ **Timeouts** — Every tool execution has proper timeout handling
- ✅ **Retries** — Automatic retry with exponential backoff and deadline awareness
- ✅ **Rate Limiting** — Global and per-tool rate limits with sliding windows → [CONFIGURATION.md](docs/CONFIGURATION.md)
- ✅ **Caching** — Intelligent result caching with TTL and idempotency key support
- ✅ **Circuit Breakers** — Prevent cascading failures with automatic fault detection
- ✅ **Idempotency** — SHA256-based deduplication of LLM retry quirks
- ✅ **Error Handling** — Machine-readable error codes with structured details → [ERRORS.md](docs/ERRORS.md)
- ✅ **Observability** — Structured logging, metrics, OpenTelemetry tracing → [OBSERVABILITY.md](docs/OBSERVABILITY.md)
- ✅ **Safety** — Subprocess isolation for untrusted code (zero crash blast radius)
- ✅ **Type Safety** — PEP 561 compliant with full mypy support
- ✅ **Resource Management** — Context managers for automatic cleanup
- ✅ **Tool Discovery** — Formal schema export (OpenAI, Anthropic, MCP formats)
- ✅ **Cancellation** — Cooperative cancellation with request-scoped deadlines
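
The "exponential backoff and deadline awareness" in the retry bullet can be pictured with a short sketch. The numbers here (base 0.5s, cap 8s) are illustrative assumptions, not the wrapper's documented defaults: each retry doubles the delay, and retrying stops early once the remaining request budget cannot cover the next wait.

```python
# Sketch of deadline-aware exponential backoff; illustrative numbers,
# not the library's actual defaults.
def backoff_delays(max_retries: int, base: float = 0.5, cap: float = 8.0) -> list[float]:
    """Delay before retry n: min(cap, base * 2**n)."""
    return [min(cap, base * 2 ** n) for n in range(max_retries)]

def attempts_within_deadline(deadline: float, max_retries: int, base: float = 0.5) -> int:
    """How many attempts fit if each retry waits its backoff delay first."""
    elapsed, attempts = 0.0, 1           # the first attempt costs no delay
    for delay in backoff_delays(max_retries, base):
        if elapsed + delay > deadline:
            break                        # deadline-aware: give up retrying early
        elapsed += delay
        attempts += 1
    return attempts

print(backoff_delays(4))                 # [0.5, 1.0, 2.0, 4.0]
print(attempts_within_deadline(2.0, 4))  # 3  (0.5s + 1.0s fit; another 2.0s would not)
```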

## Compatibility Matrix

Runs the same on macOS, Linux, and Windows — locally, server-side, and inside containers.

| Component | Supported Versions | Notes |
|-----------|--------------------|-------|
| **Python** | 3.11, 3.12, 3.13 | Python 3.11+ required |
| **Operating Systems** | macOS, Linux, Windows | All platforms fully supported |
| **LLM Providers** | OpenAI, Anthropic, Local models | Any LLM that outputs tool calls |
| **MCP Transports** | HTTP Streamable, STDIO, SSE | All MCP 1.0 transports |
| **MCP Servers** | Notion, SQLite, Atlassian, Echo, Custom | Any MCP-compliant server |

**Tested Configurations:**
- ✅ macOS 14+ (Apple Silicon & Intel)
- ✅ Ubuntu 20.04+ / Debian 11+
- ✅ Windows 10+ (native & WSL2)
- ✅ Python 3.11.0+, 3.12.0+, 3.13.0+
- ✅ OpenAI GPT-4, GPT-4 Turbo
- ✅ Anthropic Claude 3 (Opus, Sonnet, Haiku)
- ✅ Local models (Ollama, LM Studio)

## Developer Experience Highlights

**What makes CHUK Tool Processor easy to use:**

* **Auto-parsing**: XML (Claude), OpenAI `tool_calls`, direct JSON—all work automatically
* **One call**: `process()` handles multiple calls & formats in a single invocation
* **Auto-coercion**: Pydantic-powered argument cleanup (whitespace, type conversion, extra fields ignored)
* **Safe defaults**: timeouts, retries, and caching toggles built in
* **Observability in one line**: `setup_observability(...)` for traces + metrics
* **MCP in one call**: `setup_mcp_http_streamable|stdio|sse(...)` connects to remote tools instantly
* **Context managers**: `async with ToolProcessor() as p:` ensures automatic cleanup
* **Full type safety**: PEP 561 compliant—mypy, pyright, and IDEs get complete type information
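
The "auto-coercion" bullet describes ordinary Pydantic behavior. Here is a standalone sketch of the kind of cleanup it refers to, using plain Pydantic directly rather than the processor's internals; the model and field names are made up for illustration:

```python
from pydantic import BaseModel, ConfigDict, Field

# Plain-Pydantic sketch of argument cleanup: whitespace stripping, type
# conversion, and silently ignoring extra fields the LLM invented.
class WeatherArgs(BaseModel):
    model_config = ConfigDict(extra="ignore", str_strip_whitespace=True)
    city: str = Field(..., min_length=1)
    days: int = 1   # a string like "3" coerces to the int 3

raw = {"city": "  San Francisco ", "days": "3", "unexpected": True}
args = WeatherArgs.model_validate(raw)
print(args.city)  # San Francisco
print(args.days)  # 3
```

Validation errors from a model like this are exactly the kind of structured, machine-readable failures the error-handling docs describe.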

## Quick Start

### Installation

**Prerequisites:** Python 3.11+ • Works on macOS, Linux, Windows

```bash
# Using pip
pip install chuk-tool-processor

# Using uv (recommended)
uv pip install chuk-tool-processor
```

<details>
<summary><strong>Install from source or with extras</strong></summary>

```bash
# From source
git clone https://github.com/chrishayuk/chuk-tool-processor.git
cd chuk-tool-processor
uv pip install -e .

# With observability extras (OpenTelemetry + Prometheus)
pip install chuk-tool-processor[observability]

# With MCP extras
pip install chuk-tool-processor[mcp]

# All extras
pip install chuk-tool-processor[all]
```

</details>

<details>
<summary><strong>Type Checking Support (PEP 561 compliant)</strong></summary>

CHUK Tool Processor includes **full type checking support**:

```python
# mypy, pyright, and IDEs get full type information!
from chuk_tool_processor import ToolProcessor, ToolCall, ToolResult

async with ToolProcessor() as processor:
    # Full autocomplete and type checking
    results: list[ToolResult] = await processor.process(llm_output)
    tools: list[str] = await processor.list_tools()
```

**Features:**
- ✅ `py.typed` marker for PEP 561 compliance
- ✅ Comprehensive type hints on all public APIs
- ✅ Works with mypy, pyright, pylance
- ✅ Full IDE autocomplete support

**No special mypy configuration needed** - just import and use!

</details>

## 60-Second Quick Start

### From raw LLM output to safe execution in 3 lines

```python
from chuk_tool_processor import ToolProcessor, initialize

await initialize()
async with ToolProcessor() as p:
    results = await p.process('<tool name="calculator" args=\'{"operation":"multiply","a":15,"b":23}\'/>')
```

**Note:** This assumes you've registered a "calculator" tool. See the complete example below.

### Works with Both OpenAI and Anthropic (No Adapters Needed)

```python
from chuk_tool_processor import ToolProcessor, register_tool, initialize

@register_tool(name="search")
class SearchTool:
    async def execute(self, query: str) -> dict:
        return {"results": [f"Found: {query}"]}

await initialize()
async with ToolProcessor() as p:
    # OpenAI format
    openai_response = {"tool_calls": [{"type": "function", "function": {"name": "search", "arguments": '{"query": "Python"}'}}]}

    # Anthropic format
    anthropic_response = '<tool name="search" args=\'{"query": "Python"}\'/>'

    # Both work identically
    results_openai = await p.process(openai_response)
    results_anthropic = await p.process(anthropic_response)
```

**Absolutely minimal example** → See `examples/01_getting_started/hello_tool.py`:

```bash
python examples/01_getting_started/hello_tool.py
```

A single file that demonstrates:
- Registering a tool
- Parsing OpenAI & Anthropic formats
- Executing and getting results

Takes 60 seconds to understand, 3 minutes to master.

### 3-Minute Example

Copy-paste this into a file and run it:

```python
import asyncio
from chuk_tool_processor import ToolProcessor, register_tool, initialize

# Step 1: Define a tool
@register_tool(name="calculator")
class Calculator:
    async def execute(self, operation: str, a: float, b: float) -> dict:
        ops = {"add": a + b, "multiply": a * b, "subtract": a - b}
        if operation not in ops:
            raise ValueError(f"Unsupported operation: {operation}")
        return {"result": ops[operation]}

# Step 2: Process LLM output
async def main():
    await initialize()

    # Use a context manager for automatic cleanup
    async with ToolProcessor() as processor:
        # Your LLM returned this tool call
        llm_output = '<tool name="calculator" args=\'{"operation": "multiply", "a": 15, "b": 23}\'/>'

        # Process it
        results = await processor.process(llm_output)

        # Each result is a ToolResult with: tool, result, error, duration, cached
        if results[0].error:
            print(f"Error: {results[0].error}")
        else:
            print(results[0].result)  # {'result': 345}

    # Processor automatically cleaned up!

asyncio.run(main())
```

**That's it.** You now have production-ready tool execution with:
- ✅ Automatic timeouts, retries, and caching
- ✅ Clean resource management (context manager)
- ✅ Full type checking support

> **Why not just use OpenAI tool calls?**
> OpenAI's function calling is great for parsing, but you still need: parsing multiple formats (Anthropic XML, etc.), timeouts, retries, rate limits, caching, subprocess isolation, connecting to external MCP servers, and **per-tool** policy control with cross-provider parsing and MCP fan-out. CHUK Tool Processor **is** that missing middle layer.

## Quick Decision Tree (Commit This to Memory)

```
╭──────────────────────────────────────────╮
│ Do you trust the code you're executing?  │
│   ✅ Yes → InProcessStrategy             │
│   ⚠️ No  → IsolatedStrategy (sandboxed)  │
│                                          │
│ Where do your tools live?                │
│   📦 Local  → @register_tool             │
│   🌐 Remote → setup_mcp_http_streamable  │
╰──────────────────────────────────────────╯
```

**That's all you need to pick the right pattern.**

## Registry & Processor Lifecycle

Understanding the lifecycle helps you use CHUK Tool Processor correctly:

1. **`await initialize()`** — loads the global registry; call **once per process** at application startup
2. Create a **`ToolProcessor(...)`** (or use the one returned by `setup_mcp_*`)
3. Use **`async with ToolProcessor() as p:`** to ensure cleanup
4. **`setup_mcp_*`** returns `(processor, manager)` — reuse that `processor`
5. If you need a custom registry, pass it explicitly to the strategy
6. You rarely need `get_default_registry()` unless you're composing advanced setups

**⚠️ Important:** `initialize()` must run **once per process**, not once per request or processor instance. Running it multiple times will duplicate tools in the registry.

```python
# Standard pattern
await initialize()  # Step 1: Register tools

async with ToolProcessor() as p:  # Steps 2-3: Create + auto cleanup
    results = await p.process(llm_output)
# Step 4: Processor automatically cleaned up on exit
```

## Production Features by Example

### Idempotency & Deduplication

Automatically deduplicate LLM retry quirks using SHA256-based idempotency keys:

```python
from chuk_tool_processor import ToolProcessor, initialize

await initialize()
async with ToolProcessor(enable_caching=True, cache_ttl=300) as p:
    # The LLM retries the same call (common with streaming or errors)
    call1 = '<tool name="search" args=\'{"query": "Python"}\'/>'
    call2 = '<tool name="search" args=\'{"query": "Python"}\'/>'  # Identical

    results1 = await p.process(call1)  # Executes
    results2 = await p.process(call2)  # Cache hit! (idempotency key match)

    assert results1[0].cached == False
    assert results2[0].cached == True
```
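
The "idempotency key match" above comes down to hashing the call. A plausible sketch of how such a SHA256 key can be derived follows; the library's exact recipe may differ, but the property it needs is the same: identical calls hash to identical keys, so a retry becomes a cache hit.

```python
import hashlib
import json

# Sketch: derive a stable SHA256 idempotency key from tool name + arguments.
# Sorting keys and using compact separators makes the key independent of
# dict ordering and whitespace in the serialized payload.
def idempotency_key(tool: str, args: dict) -> str:
    payload = json.dumps({"tool": tool, "args": args},
                         sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

k1 = idempotency_key("search", {"query": "Python"})
k2 = idempotency_key("search", {"query": "Python"})  # the LLM's retry
k3 = idempotency_key("search", {"query": "Rust"})
print(k1 == k2)  # True: the second call can be served from cache
print(k1 == k3)  # False: different arguments, different key
```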

### Cancellation & Deadlines

Cooperative cancellation with request-scoped deadlines:

```python
import asyncio
from chuk_tool_processor import ToolProcessor, initialize

async def main():
    await initialize()
    async with ToolProcessor(default_timeout=60.0) as p:
        try:
            # Hard deadline for the whole batch (e.g., user request budget)
            async with asyncio.timeout(5.0):
                async for event in p.astream('<tool name="slow_report" args=\'{"n": 1000000}\'/>'):
                    print("chunk:", event)
        except TimeoutError:
            print("Request cancelled: deadline exceeded")
            # Processor automatically cancels the tool and cleans up

asyncio.run(main())
```

### Per-Tool Policy Overrides

Override timeouts, retries, and rate limits per tool:

```python
from chuk_tool_processor import ToolProcessor, initialize

await initialize()
async with ToolProcessor(
    default_timeout=30.0,
    enable_retries=True,
    max_retries=2,
    enable_rate_limiting=True,
    global_rate_limit=120,  # 120 requests/min across all tools
    tool_rate_limits={
        "expensive_api": (5, 60),   # 5 requests per 60 seconds
        "fast_local": (1000, 60),   # 1000 requests per 60 seconds
    }
) as p:
    # Tools run with their specific policies
    results = await p.process('''
        <tool name="expensive_api" args='{"q":"abc"}'/>
        <tool name="fast_local" args='{"data":"xyz"}'/>
    ''')
```
|
|
527
|
+
|
|
528
|
+
## Documentation Quick Reference
|
|
529
|
+
|
|
530
|
+
| Document | What It Covers |
|
|
531
|
+
|----------|----------------|
|
|
532
|
+
| 📘 [CONFIGURATION.md](docs/CONFIGURATION.md) | **All config knobs & defaults**: ToolProcessor options, timeouts, retry policy, rate limits, circuit breakers, caching, environment variables |
|
|
533
|
+
| 🚨 [ERRORS.md](docs/ERRORS.md) | **Error taxonomy**: All error codes, exception classes, error details structure, handling patterns, retryability guide |
|
|
534
|
+
| 📊 [OBSERVABILITY.md](docs/OBSERVABILITY.md) | **Metrics & tracing**: OpenTelemetry setup, Prometheus metrics, spans reference, PromQL queries |
|
|
535
|
+
| 🔌 [examples/01_getting_started/hello_tool.py](examples/01_getting_started/hello_tool.py) | **60-second starter**: Single-file, copy-paste-and-run example |
|
|
536
|
+
| 🎯 [examples/](examples/) | **20+ working examples**: MCP integration, OAuth flows, streaming, production patterns |
|
|
537
|
+
|
|
538
|
+
## Choose Your Path
|
|
539
|
+
|
|
540
|
+
**Use this when OpenAI/Claude tool calling is not enough** — because you need retries, caching, rate limits, subprocess isolation, or MCP integration.
|
|
541
|
+
|
|
542
|
+
| Your Goal | What You Need | Where to Look |
|
|
543
|
+
|-----------|---------------|---------------|
|
|
544
|
+
| ☕ **Just process LLM tool calls** | Basic tool registration + processor | [60-Second Quick Start](#60-second-quick-start) |
|
|
545
|
+
| 🔌 **Connect to external tools** | MCP integration (HTTP/STDIO/SSE) | [MCP Integration](#5-mcp-integration-external-tools) |
|
|
546
|
+
| 🛡️ **Production deployment** | Timeouts, retries, rate limits, caching | [CONFIGURATION.md](docs/CONFIGURATION.md) |
|
|
547
|
+
| 🔒 **Run untrusted code safely** | Isolated strategy (subprocess) | [Isolated Strategy](#using-isolated-strategy) |
|
|
548
|
+
| 📊 **Monitor and observe** | OpenTelemetry + Prometheus | [OBSERVABILITY.md](docs/OBSERVABILITY.md) |
|
|
549
|
+
| 🌊 **Stream incremental results** | StreamingTool pattern | [StreamingTool](#streamingtool-real-time-results) |
|
|
550
|
+
| 🚨 **Handle errors reliably** | Error codes & taxonomy | [ERRORS.md](docs/ERRORS.md) |
|
|
551
|
+
|
|
552
|
+
### Real-World Quick Start
|
|
553
|
+
|
|
554
|
+
Here are the most common patterns you'll use:
|
|
555
|
+
|
|
556
|
+
**Pattern 1: Local tools only**
|
|
557
|
+
```python
|
|
558
|
+
import asyncio
|
|
559
|
+
from chuk_tool_processor import ToolProcessor, register_tool, initialize
|
|
560
|
+
|
|
561
|
+
@register_tool(name="my_tool")
|
|
562
|
+
class MyTool:
|
|
563
|
+
async def execute(self, arg: str) -> dict:
|
|
564
|
+
return {"result": f"Processed: {arg}"}
|
|
565
|
+
|
|
566
|
+
async def main():
|
|
567
|
+
await initialize()
|
|
568
|
+
|
|
569
|
+
async with ToolProcessor() as processor:
|
|
570
|
+
llm_output = '<tool name="my_tool" args=\'{"arg": "hello"}\'/>'
|
|
571
|
+
results = await processor.process(llm_output)
|
|
572
|
+
print(results[0].result) # {'result': 'Processed: hello'}
|
|
573
|
+
|
|
574
|
+
asyncio.run(main())
|
|
575
|
+
```
|
|
576
|
+
|
|
577
|
+
<details>
|
|
578
|
+
<summary><strong>More patterns: MCP integration (local + remote tools)</strong></summary>
|
|
579
|
+
|
|
580
|
+
**Pattern 2: Mix local + remote MCP tools (Notion)**
|
|
581
|
+
```python
|
|
582
|
+
import asyncio
|
|
583
|
+
from chuk_tool_processor import register_tool, initialize, setup_mcp_http_streamable
|
|
584
|
+
|
|
585
|
+
@register_tool(name="local_calculator")
|
|
586
|
+
class Calculator:
|
|
587
|
+
async def execute(self, a: int, b: int) -> int:
|
|
588
|
+
return a + b
|
|
589
|
+
|
|
590
|
+
async def main():
|
|
591
|
+
# Register local tools first
|
|
592
|
+
await initialize()
|
|
593
|
+
|
|
594
|
+
# Then add Notion MCP tools (requires OAuth token)
|
|
595
|
+
processor, manager = await setup_mcp_http_streamable(
|
|
596
|
+
servers=[{
|
|
597
|
+
"name": "notion",
|
|
598
|
+
"url": "https://mcp.notion.com/mcp",
|
|
599
|
+
"headers": {"Authorization": f"Bearer {access_token}"}
|
|
600
|
+
}],
|
|
601
|
+
namespace="notion",
|
|
602
|
+
initialization_timeout=120.0
|
|
603
|
+
)
|
|
604
|
+
|
|
605
|
+
# Now you have both local and remote tools!
|
|
606
|
+
results = await processor.process('''
|
|
607
|
+
<tool name="local_calculator" args='{"a": 5, "b": 3}'/>
|
|
608
|
+
<tool name="notion.search_pages" args='{"query": "project docs"}'/>
|
|
609
|
+
''')
|
|
610
|
+
print(f"Local result: {results[0].result}")
|
|
611
|
+
print(f"Notion result: {results[1].result}")
|
|
612
|
+
|
|
613
|
+
# Clean up
|
|
614
|
+
await manager.close()
|
|
615
|
+
|
|
616
|
+
asyncio.run(main())
|
|
617
|
+
```
|
|
618
|
+
|
|
619
|
+
See `examples/04_mcp_integration/notion_oauth.py` for complete OAuth flow.
|
|
620
|
+
|
|
621
|
+
**Pattern 3: Local SQLite database via STDIO**
```python
import asyncio
import json
from chuk_tool_processor.mcp import setup_mcp_stdio

async def main():
    # Configure SQLite MCP server (runs locally)
    config = {
        "mcpServers": {
            "sqlite": {
                "command": "uvx",
                "args": ["mcp-server-sqlite", "--db-path", "./app.db"],
                "transport": "stdio"
            }
        }
    }

    with open("mcp_config.json", "w") as f:
        json.dump(config, f)

    processor, manager = await setup_mcp_stdio(
        config_file="mcp_config.json",
        servers=["sqlite"],
        namespace="db",
        initialization_timeout=120.0  # First run downloads the package
    )

    # Query your local database via MCP
    results = await processor.process(
        '<tool name="db.query" args=\'{"sql": "SELECT * FROM users LIMIT 10"}\'/>'
    )
    print(results[0].result)

asyncio.run(main())
```

See `examples/04_mcp_integration/stdio_sqlite.py` for a complete working example.

</details>

## Core Concepts

### 1. Tool Registry

The **registry** is where you register tools for execution. Tools can be:

- **Simple classes** with an `async execute()` method
- **ValidatedTool** subclasses with Pydantic validation
- **StreamingTool** for real-time incremental results
- **Functions** registered via `register_fn_tool()`

> **Note:** The registry is global; processors are scoped.

```python
from chuk_tool_processor import register_tool
from chuk_tool_processor.models.validated_tool import ValidatedTool
from pydantic import BaseModel, Field

@register_tool(name="weather")
class WeatherTool(ValidatedTool):
    class Arguments(BaseModel):
        location: str = Field(..., description="City name")
        units: str = Field("celsius", description="Temperature units")

    class Result(BaseModel):
        temperature: float
        conditions: str

    async def _execute(self, location: str, units: str) -> Result:
        # Your weather API logic here
        return self.Result(temperature=22.5, conditions="Sunny")
```

### 2. Execution Strategies

**Strategies** determine *how* tools run:

| Strategy | Use Case | Trade-offs |
|----------|----------|------------|
| **InProcessStrategy** | Fast, trusted tools | Speed ✅, Isolation ❌ |
| **IsolatedStrategy** | Untrusted or risky code | Isolation ✅, Speed ❌ |

```python
import asyncio
from chuk_tool_processor import ToolProcessor, IsolatedStrategy, get_default_registry

async def main():
    registry = await get_default_registry()
    processor = ToolProcessor(
        strategy=IsolatedStrategy(
            registry=registry,
            max_workers=4,
            default_timeout=30.0
        )
    )
    # Use processor...

asyncio.run(main())
```

**Note:** `IsolatedStrategy` is an alias of `SubprocessStrategy`, kept for backwards compatibility. Prefer `IsolatedStrategy` for clarity: it better communicates the security-boundary intent.

### 3. Execution Wrappers (Middleware)

**Wrappers** add production features as composable layers:

```python
processor = ToolProcessor(
    enable_caching=True,        # Cache expensive calls
    cache_ttl=600,              # 10 minutes
    enable_rate_limiting=True,  # Prevent abuse
    global_rate_limit=100,      # 100 req/min globally
    enable_retries=True,        # Auto-retry failures
    max_retries=3               # Up to 3 attempts
)
```

The processor stacks them automatically: **Cache → Rate Limit → Retry → Strategy → Tool**
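
The layering amounts to nested wrappers, where the outermost layer sees each call first. This standalone sketch illustrates the idea only; it is not chuk-tool-processor's actual wrapper classes:

```python
# Illustration of wrapper stacking (hypothetical, not the library's API):
# a call flows Cache -> Rate Limit -> Retry -> Strategy -> Tool.
def tool(args):
    return {"echo": args}

def strategy(next_layer):
    def run(args):
        return next_layer(args)  # would dispatch in-process or to a subprocess
    return run

def retry(next_layer, attempts=3):
    def run(args):
        for i in range(attempts):
            try:
                return next_layer(args)
            except Exception:
                if i == attempts - 1:
                    raise
    return run

def rate_limit(next_layer):
    seen = []
    def run(args):
        seen.append(args)  # a real limiter would consult a token bucket here
        return next_layer(args)
    return run

def cache(next_layer):
    store = {}
    def run(args):
        key = repr(sorted(args.items()))
        if key not in store:
            store[key] = next_layer(args)  # miss: fall through to inner layers
        return store[key]
    return run

# Outermost layer is consulted first, exactly as in the stacking order above
pipeline = cache(rate_limit(retry(strategy(tool))))
print(pipeline({"a": 1}))  # {'echo': {'a': 1}}
```

A cache hit short-circuits the whole stack, which is why caching sits outermost: rate limits and retries are only spent on genuine executions.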

### 4. Input Parsers (Plugins)

**Parsers** extract tool calls from various LLM output formats:

**XML Tags (Anthropic-style)**
```xml
<tool name="search" args='{"query": "Python"}'/>
```

**OpenAI `tool_calls` (JSON)**
```json
{
  "tool_calls": [
    {
      "type": "function",
      "function": {
        "name": "search",
        "arguments": "{\"query\": \"Python\"}"
      }
    }
  ]
}
```

**Direct JSON (array of calls)**
```json
[
  { "tool": "search", "arguments": { "query": "Python" } }
]
```

All formats work automatically; no configuration is needed.

**Input Format Compatibility:**

| Format | Example | Use Case |
|--------|---------|----------|
| **XML Tool Tag** | `<tool name="search" args='{"q":"Python"}'/>` | Anthropic Claude, XML-based LLMs |
| **OpenAI tool_calls** | JSON object (above) | OpenAI GPT-4 function calling |
| **Direct JSON** | `[{"tool": "search", "arguments": {"q": "Python"}}]` | Generic API integrations |
| **Single dict** | `{"tool": "search", "arguments": {"q": "Python"}}` | Programmatic calls |
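
To make the normalization concrete, here is a simplified sketch of what a parser layer does with these four formats. It is not the library's actual plugin code, just a minimal illustration:

```python
import json
import re

def parse_tool_calls(payload):
    """Normalize the formats above into [{'tool': ..., 'arguments': ...}].

    Illustrative sketch only; the real parser plugins are more robust
    (streaming input, malformed-JSON recovery, multiple tags per message).
    """
    if isinstance(payload, str):
        # XML tool tags: <tool name="..." args='...'/>
        calls = []
        for m in re.finditer(r"<tool\s+name=\"([^\"]+)\"\s+args='([^']*)'\s*/>", payload):
            calls.append({"tool": m.group(1), "arguments": json.loads(m.group(2))})
        return calls
    if isinstance(payload, dict) and "tool_calls" in payload:
        # OpenAI-style: arguments arrive as a JSON *string*, so decode them
        return [
            {"tool": c["function"]["name"],
             "arguments": json.loads(c["function"]["arguments"])}
            for c in payload["tool_calls"]
        ]
    if isinstance(payload, dict):
        return [payload]      # single dict, already in normalized shape
    return list(payload)      # direct JSON array of calls

print(parse_tool_calls('<tool name="search" args=\'{"q": "Python"}\'/>'))
# [{'tool': 'search', 'arguments': {'q': 'Python'}}]
```

Note the one real subtlety: OpenAI encodes `arguments` as a JSON string inside JSON, so it needs a second decode pass.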

### 5. MCP Integration (External Tools)

Connect to **remote tool servers** using the [Model Context Protocol](https://modelcontextprotocol.io). CHUK Tool Processor supports three transport mechanisms for different use cases:

#### HTTP Streamable (⭐ Recommended for Cloud Services)

**Use for:** Cloud SaaS services (OAuth, long-running streams, resilient reconnects)

Modern HTTP streaming transport for cloud-based MCP servers like Notion:

```python
from chuk_tool_processor.mcp import setup_mcp_http_streamable

# Connect to Notion MCP with OAuth
servers = [
    {
        "name": "notion",
        "url": "https://mcp.notion.com/mcp",
        "headers": {"Authorization": f"Bearer {access_token}"}
    }
]

processor, manager = await setup_mcp_http_streamable(
    servers=servers,
    namespace="notion",
    initialization_timeout=120.0,  # Some services need time to initialize
    enable_caching=True,
    enable_retries=True
)

# Use Notion tools through MCP
results = await processor.process(
    '<tool name="notion.search_pages" args=\'{"query": "meeting notes"}\'/>'
)
```

<details>
<summary><strong>Other MCP Transports (STDIO for local tools, SSE for legacy)</strong></summary>

#### STDIO (Best for Local/On-Device Tools)

**Use for:** Local/embedded tools and databases (SQLite, file systems, local services)

Runs local MCP servers as subprocesses, which is great for databases, file systems, and local tools:

```python
from chuk_tool_processor.mcp import setup_mcp_stdio
import json

# Configure SQLite MCP server
config = {
    "mcpServers": {
        "sqlite": {
            "command": "uvx",
            "args": ["mcp-server-sqlite", "--db-path", "/path/to/database.db"],
            "env": {"MCP_SERVER_NAME": "sqlite"},
            "transport": "stdio"
        }
    }
}

# Save config to file
with open("mcp_config.json", "w") as f:
    json.dump(config, f)

# Connect to local SQLite server
processor, manager = await setup_mcp_stdio(
    config_file="mcp_config.json",
    servers=["sqlite"],
    namespace="db",
    initialization_timeout=120.0  # First run downloads packages
)

# Query your local database via MCP
results = await processor.process(
    '<tool name="db.query" args=\'{"sql": "SELECT * FROM users LIMIT 10"}\'/>'
)
```

#### SSE (Legacy Support)

**Use for:** Legacy compatibility only. Prefer HTTP Streamable for new integrations.

For backward compatibility with older MCP servers using Server-Sent Events:

```python
from chuk_tool_processor.mcp import setup_mcp_sse

# Connect to Atlassian with OAuth via SSE
servers = [
    {
        "name": "atlassian",
        "url": "https://mcp.atlassian.com/v1/sse",
        "headers": {"Authorization": f"Bearer {access_token}"}
    }
]

processor, manager = await setup_mcp_sse(
    servers=servers,
    namespace="atlassian",
    initialization_timeout=120.0
)
```

</details>

**Transport Comparison:**

| Transport | Use Case | Real Examples |
|-----------|----------|---------------|
| **HTTP Streamable** | Cloud APIs, SaaS services | Notion (`mcp.notion.com`) |
| **STDIO** | Local tools, databases | SQLite (`mcp-server-sqlite`), Echo (`chuk-mcp-echo`) |
| **SSE** | Legacy cloud services | Atlassian (`mcp.atlassian.com`) |

**How MCP fits into the architecture:**

```
LLM Output
    ↓
Tool Processor
    ↓
┌──────────────┬────────────────────┐
│ Local Tools  │ Remote Tools (MCP) │
└──────────────┴────────────────────┘
```

**Relationship with [chuk-mcp](https://github.com/chrishayuk/chuk-mcp):**
- `chuk-mcp` is a low-level MCP protocol client (handles transports, protocol negotiation)
- `chuk-tool-processor` wraps `chuk-mcp` to integrate external tools into your execution pipeline
- You can use local tools, remote MCP tools, or both in the same processor

## Getting Started

### Creating Tools

CHUK Tool Processor supports multiple patterns for defining tools:

#### Simple Function-Based Tools
```python
from chuk_tool_processor import register_fn_tool
from datetime import datetime
from zoneinfo import ZoneInfo

def get_current_time(timezone: str = "UTC") -> str:
    """Get the current time in the specified timezone."""
    now = datetime.now(ZoneInfo(timezone))
    return now.strftime("%Y-%m-%d %H:%M:%S %Z")

# Register the function as a tool (sync, no await needed)
register_fn_tool(get_current_time, namespace="utilities")
```

#### ValidatedTool (Pydantic Type Safety)

For production tools, use Pydantic validation:

```python
@register_tool(name="weather")
class WeatherTool(ValidatedTool):
    class Arguments(BaseModel):
        location: str = Field(..., description="City name")
        units: str = Field("celsius", description="Temperature units")

    class Result(BaseModel):
        temperature: float
        conditions: str

    async def _execute(self, location: str, units: str) -> Result:
        return self.Result(temperature=22.5, conditions="Sunny")
```

#### StreamingTool (Real-time Results)

For long-running operations that produce incremental results:

```python
from chuk_tool_processor.models import StreamingTool

@register_tool(name="file_processor")
class FileProcessor(StreamingTool):
    class Arguments(BaseModel):
        file_path: str

    class Result(BaseModel):
        line: int
        content: str

    async def _stream_execute(self, file_path: str):
        with open(file_path) as f:
            for i, line in enumerate(f, 1):
                yield self.Result(line=i, content=line.strip())
```

**Consuming streaming results:**

```python
import asyncio
from chuk_tool_processor import ToolProcessor, initialize

async def main():
    await initialize()
    processor = ToolProcessor()

    # Stream can be cancelled by breaking or raising an exception
    try:
        async for event in processor.astream('<tool name="file_processor" args=\'{"file_path":"README.md"}\'/>'):
            # 'event' is a streamed chunk (either your Result model instance or a dict)
            line = event["line"] if isinstance(event, dict) else getattr(event, "line", None)
            content = event["content"] if isinstance(event, dict) else getattr(event, "content", None)
            print(f"Line {line}: {content}")

            # Example: cancel after 100 lines
            if line and line > 100:
                break  # Cleanup happens automatically
    except asyncio.CancelledError:
        # Stream cleanup is automatic even on cancellation
        pass

asyncio.run(main())
```

### Using the Processor

#### Basic Usage

Call `await initialize()` once at startup to load your registry. Use context managers for automatic cleanup:

```python
import asyncio
from chuk_tool_processor import ToolProcessor, initialize

async def main():
    await initialize()

    # Context manager automatically handles cleanup
    async with ToolProcessor() as processor:
        # Discover available tools
        tools = await processor.list_tools()
        print(f"Available tools: {tools}")

        # Process LLM output
        llm_output = '<tool name="calculator" args=\'{"operation":"add","a":2,"b":3}\'/>'
        results = await processor.process(llm_output)

        for result in results:
            if result.error:
                print(f"Error: {result.error}")
            else:
                print(f"Success: {result.result}")

    # Processor automatically cleaned up here!

asyncio.run(main())
```

#### Production Configuration

```python
from chuk_tool_processor import ToolProcessor, initialize
import asyncio

async def main():
    await initialize()

    # Use context manager with production config
    async with ToolProcessor(
        # Execution settings
        default_timeout=30.0,
        max_concurrency=20,

        # Production features
        enable_caching=True,
        cache_ttl=600,
        enable_rate_limiting=True,
        global_rate_limit=100,
        enable_retries=True,
        max_retries=3
    ) as processor:
        # Use processor...
        results = await processor.process(llm_output)

    # Automatic cleanup on exit

asyncio.run(main())
```

### Advanced Production Features

Beyond basic configuration, CHUK Tool Processor includes several advanced features for production environments:

#### Circuit Breaker Pattern

Prevent cascading failures by automatically opening circuits for failing tools:

```python
from chuk_tool_processor import ToolProcessor

processor = ToolProcessor(
    enable_circuit_breaker=True,
    circuit_breaker_threshold=5,   # Open after 5 failures
    circuit_breaker_timeout=60.0,  # Try recovery after 60s
)

# Circuit states: CLOSED → OPEN → HALF_OPEN → CLOSED
# - CLOSED: Normal operation
# - OPEN: Blocking requests (too many failures)
# - HALF_OPEN: Testing recovery with limited requests
```

**How it works:**
1. Tool fails repeatedly (hits threshold)
2. Circuit opens → requests blocked immediately
3. After timeout, circuit enters HALF_OPEN
4. If test requests succeed → circuit closes
5. If test requests fail → back to OPEN

**Benefits:**
- Prevents wasting resources on failing services
- Fast-fail for better UX
- Automatic recovery detection
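
The five steps above form a small state machine, which can be sketched in a few lines. This is an illustration of the pattern, not the library's actual `circuit_breaker` implementation:

```python
import time

class CircuitBreaker:
    """Minimal sketch of the CLOSED -> OPEN -> HALF_OPEN cycle."""

    def __init__(self, threshold=5, reset_timeout=60.0):
        self.threshold = threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "CLOSED"
        self.opened_at = 0.0

    def allow(self) -> bool:
        if self.state == "OPEN":
            if time.monotonic() - self.opened_at >= self.reset_timeout:
                self.state = "HALF_OPEN"  # let a test request through
                return True
            return False                  # fast-fail while open
        return True

    def record_success(self):
        self.failures = 0
        self.state = "CLOSED"             # recovery confirmed

    def record_failure(self):
        self.failures += 1
        if self.state == "HALF_OPEN" or self.failures >= self.threshold:
            self.state = "OPEN"           # trip (or re-trip) the breaker
            self.opened_at = time.monotonic()

breaker = CircuitBreaker(threshold=2, reset_timeout=0.1)
breaker.record_failure()
breaker.record_failure()               # hits threshold -> OPEN
print(breaker.state, breaker.allow())  # OPEN False
time.sleep(0.15)                       # wait past the reset timeout
print(breaker.allow(), breaker.state)  # True HALF_OPEN
```

The key property is that `allow()` is cheap while OPEN: callers fail fast instead of burning a timeout on a service that is known to be down.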

#### Idempotency Keys

Automatically deduplicate LLM tool calls using SHA256-based keys:

```python
from chuk_tool_processor.models.tool_call import ToolCall

# Idempotency keys are auto-generated
call1 = ToolCall(tool="search", arguments={"query": "Python"})
call2 = ToolCall(tool="search", arguments={"query": "Python"})

# Same arguments = same idempotency key
assert call1.idempotency_key == call2.idempotency_key

# Used automatically by caching layer
processor = ToolProcessor(enable_caching=True)
results1 = await processor.process([call1])  # Executes
results2 = await processor.process([call2])  # Cache hit!
```

**Benefits:**
- Prevents duplicate executions from LLM retries
- Deterministic cache keys
- No manual key management needed

**Cache scope:** In-memory and per-process by default. The cache backend is pluggable; see [CONFIGURATION.md](docs/CONFIGURATION.md) for custom cache backends.
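
One plausible way to derive such a key is to hash the tool name plus a canonical JSON encoding of the arguments, so that argument order never matters. This is an illustrative sketch; the library's exact key format may differ:

```python
import hashlib
import json

def idempotency_key(tool: str, arguments: dict) -> str:
    """Deterministic SHA256 key over tool name + canonical argument JSON.

    Illustrative sketch; chuk-tool-processor's exact key format may differ.
    """
    # sort_keys makes the encoding canonical: {"a":1,"b":2} == {"b":2,"a":1}
    canonical = json.dumps(arguments, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(f"{tool}:{canonical}".encode()).hexdigest()

# Key order in the arguments dict does not affect the key
a = idempotency_key("search", {"query": "Python", "limit": 5})
b = idempotency_key("search", {"limit": 5, "query": "Python"})
print(a == b)  # True
```

Canonicalization is the important part: without `sort_keys`, two semantically identical calls could hash differently and defeat deduplication.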

#### Tool Schema Export

Export tool definitions to multiple formats for LLM prompting:

```python
from chuk_tool_processor.models.tool_spec import ToolSpec, ToolCapability
from chuk_tool_processor.models.validated_tool import ValidatedTool

@register_tool(name="weather")
class WeatherTool(ValidatedTool):
    """Get current weather for a location."""

    class Arguments(BaseModel):
        location: str = Field(..., description="City name")

    class Result(BaseModel):
        temperature: float
        conditions: str

# Generate tool spec
spec = ToolSpec.from_validated_tool(WeatherTool)

# Export to different formats
openai_format = spec.to_openai()        # For OpenAI function calling
anthropic_format = spec.to_anthropic()  # For Claude tools
mcp_format = spec.to_mcp()              # For MCP servers

# Example OpenAI format:
# {
#     "type": "function",
#     "function": {
#         "name": "weather",
#         "description": "Get current weather for a location.",
#         "parameters": {...}  # JSON Schema
#     }
# }
```

**Use cases:**
- Generate tool definitions for LLM system prompts
- Documentation generation
- API contract validation
- Cross-platform tool sharing
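
The OpenAI export shown in the comment above is essentially a JSON Schema wrapped in a function envelope. As a standalone sketch of that shape (not the library's `to_openai()` implementation):

```python
def to_openai(name: str, description: str, parameters_schema: dict) -> dict:
    """Wrap a JSON Schema in OpenAI's function-calling envelope.

    Sketch of the shape ToolSpec.to_openai() produces; details may differ.
    """
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": parameters_schema,  # plain JSON Schema for the args
        },
    }

spec = to_openai(
    "weather",
    "Get current weather for a location.",
    {
        "type": "object",
        "properties": {"location": {"type": "string", "description": "City name"}},
        "required": ["location"],
    },
)
print(spec["function"]["name"])  # weather
```

Because the payload is plain JSON, the same schema can be re-wrapped for Anthropic or MCP without touching the parameter definitions.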

#### Machine-Readable Error Codes

Structured error handling with error codes for programmatic responses.

**Error Contract:** Every error includes a machine-readable code, human-readable message, and structured details:

```python
from chuk_tool_processor.core.exceptions import (
    ErrorCode,
    ToolNotFoundError,
    ToolTimeoutError,
    ToolCircuitOpenError,
)

try:
    results = await processor.process(llm_output)
except ToolNotFoundError as e:
    if e.code == ErrorCode.TOOL_NOT_FOUND:
        # Suggest available tools to LLM
        available = e.details.get("available_tools", [])
        print(f"Try one of: {available}")
except ToolTimeoutError as e:
    if e.code == ErrorCode.TOOL_TIMEOUT:
        # Inform LLM to use faster alternative
        timeout = e.details["timeout"]
        print(f"Tool timed out after {timeout}s")
except ToolCircuitOpenError as e:
    if e.code == ErrorCode.TOOL_CIRCUIT_OPEN:
        # Tell LLM this service is temporarily down
        reset_time = e.details.get("reset_timeout")
        print(f"Service unavailable, retry in {reset_time}s")

# All errors include .to_dict() for logging
error_dict = e.to_dict()
# {
#     "error": "ToolCircuitOpenError",
#     "code": "TOOL_CIRCUIT_OPEN",
#     "message": "Tool 'api_tool' circuit breaker is open...",
#     "details": {"tool_name": "api_tool", "failure_count": 5, ...}
# }
```

**Available error codes:**
- `TOOL_NOT_FOUND` - Tool doesn't exist in registry
- `TOOL_EXECUTION_FAILED` - Tool execution error
- `TOOL_TIMEOUT` - Tool exceeded timeout
- `TOOL_CIRCUIT_OPEN` - Circuit breaker is open
- `TOOL_RATE_LIMITED` - Rate limit exceeded
- `TOOL_VALIDATION_ERROR` - Argument validation failed
- `MCP_CONNECTION_FAILED` - MCP server unreachable
- Plus 11 more for comprehensive error handling
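
The contract itself (code + message + details + `to_dict()`) is easy to model. This sketch mirrors the shape of the errors above; it is not the library's actual exception hierarchy:

```python
from enum import Enum

class ErrorCode(str, Enum):
    # A few representative codes; the real enum has many more
    TOOL_NOT_FOUND = "TOOL_NOT_FOUND"
    TOOL_TIMEOUT = "TOOL_TIMEOUT"
    TOOL_CIRCUIT_OPEN = "TOOL_CIRCUIT_OPEN"

class ToolProcessorError(Exception):
    """Sketch of the error contract: machine code + message + details."""

    def __init__(self, code, message, details=None):
        super().__init__(message)
        self.code = code
        self.details = details or {}

    def to_dict(self):
        # Serializable form, suitable for structured logging
        return {
            "error": type(self).__name__,
            "code": self.code.value,
            "message": str(self),
            "details": self.details,
        }

err = ToolProcessorError(
    ErrorCode.TOOL_TIMEOUT,
    "Tool 'slow_tool' exceeded its timeout",
    {"tool_name": "slow_tool", "timeout": 30.0},
)
print(err.to_dict()["code"])  # TOOL_TIMEOUT
```

Deriving `ErrorCode` from `str` keeps the codes JSON-serializable and comparable to plain strings, which is handy when errors cross process or network boundaries.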

#### LLM-Friendly Argument Coercion

Automatically coerce LLM outputs to correct types:

```python
from chuk_tool_processor.models.validated_tool import ValidatedTool

class SearchTool(ValidatedTool):
    class Arguments(BaseModel):
        query: str
        limit: int = 10
        category: str = "all"

    # Pydantic config for LLM outputs:
    # - str_strip_whitespace=True → Remove accidental whitespace
    # - extra="ignore" → Ignore unknown fields
    # - use_enum_values=True → Convert enums to values
    # - coerce_numbers_to_str=False → Keep type strictness

# LLM outputs often have quirks:
llm_output = {
    "query": "  Python tutorials  ",  # Extra whitespace
    "limit": "5",                     # String instead of int
    "unknown_field": "ignored"        # Extra field
}

# ValidatedTool automatically coerces and validates
tool = SearchTool()
result = await tool.execute(**llm_output)
# ✅ Works! Whitespace stripped, "5" → 5, extra field ignored
```
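
Under the hood these are ordinary Pydantic behaviors. Stripped of Pydantic, the same coercions look roughly like this (illustrative only, hard-coding the `SearchTool` fields above):

```python
def coerce_arguments(raw: dict) -> dict:
    """Mimic the coercions above without Pydantic (illustrative only):
    strip whitespace, coerce numeric strings, drop unknown fields,
    fill defaults for missing optional fields."""
    known = {"query": str, "limit": int, "category": str}
    defaults = {"limit": 10, "category": "all"}
    out = dict(defaults)
    for field, typ in known.items():
        if field in raw:                  # unknown fields are simply ignored
            value = raw[field]
            if isinstance(value, str):
                value = value.strip()     # remove accidental whitespace
            out[field] = typ(value)       # "5" -> 5 for int fields
    return out

args = coerce_arguments({
    "query": "  Python tutorials  ",
    "limit": "5",
    "unknown_field": "ignored",
})
print(args)  # {'limit': 5, 'category': 'all', 'query': 'Python tutorials'}
```

The point of pushing this into the validation layer is that every tool gets the same forgiving-but-typed treatment of LLM output, with no per-tool cleanup code.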

## Advanced Topics

### Using Isolated Strategy

Use `IsolatedStrategy` when running untrusted, third-party, or potentially unsafe code that shouldn't share the same process as your main app:

```python
import asyncio
from chuk_tool_processor import ToolProcessor, IsolatedStrategy, get_default_registry

async def main():
    registry = await get_default_registry()
    processor = ToolProcessor(
        strategy=IsolatedStrategy(
            registry=registry,
            max_workers=4,
            default_timeout=30.0
        )
    )
    # Use processor...

asyncio.run(main())
```

> **Security & Isolation — Threat Model**
>
> Untrusted tool code runs in subprocesses; faults and crashes don't bring down your app. **Zero crash blast radius.** For hard CPU/RAM/network limits, run the processor inside a container with `--cpus`, `--memory`, and egress filtering. Secrets are never injected by default; pass them explicitly via tool arguments or scoped environment variables.

### Real-World MCP Examples

#### Example 1: Notion Integration with OAuth

Complete OAuth flow connecting to Notion's MCP server:

```python
from chuk_tool_processor.mcp import setup_mcp_http_streamable

# After completing OAuth flow (see examples/04_mcp_integration/notion_oauth.py for full flow)
processor, manager = await setup_mcp_http_streamable(
    servers=[{
        "name": "notion",
        "url": "https://mcp.notion.com/mcp",
        "headers": {"Authorization": f"Bearer {access_token}"}
    }],
    namespace="notion",
    initialization_timeout=120.0
)

# Get available Notion tools
tools = manager.get_all_tools()
print(f"Available tools: {[t['name'] for t in tools]}")

# Use Notion tools in your LLM workflow
results = await processor.process(
    '<tool name="notion.search_pages" args=\'{"query": "Q4 planning"}\'/>'
)
```

<details>
<summary><strong>Click to expand more MCP examples (SQLite, Echo Server)</strong></summary>

#### Example 2: Local SQLite Database Access

Run the SQLite MCP server locally for database operations:

```python
from chuk_tool_processor.mcp import setup_mcp_stdio
import json

# Configure SQLite server
config = {
    "mcpServers": {
        "sqlite": {
            "command": "uvx",
            "args": ["mcp-server-sqlite", "--db-path", "./data/app.db"],
            "transport": "stdio"
        }
    }
}

with open("mcp_config.json", "w") as f:
    json.dump(config, f)

# Connect to local database
processor, manager = await setup_mcp_stdio(
    config_file="mcp_config.json",
    servers=["sqlite"],
    namespace="db",
    initialization_timeout=120.0  # First run downloads mcp-server-sqlite
)

# Query your database via LLM
results = await processor.process(
    '<tool name="db.query" args=\'{"sql": "SELECT COUNT(*) FROM users"}\'/>'
)
```

#### Example 3: Simple STDIO Echo Server

Minimal example for testing STDIO transport:

```python
from chuk_tool_processor.mcp import setup_mcp_stdio
import json

# Configure echo server (great for testing)
config = {
    "mcpServers": {
        "echo": {
            "command": "uvx",
            "args": ["chuk-mcp-echo", "stdio"],
            "transport": "stdio"
        }
    }
}

with open("echo_config.json", "w") as f:
    json.dump(config, f)

processor, manager = await setup_mcp_stdio(
    config_file="echo_config.json",
    servers=["echo"],
    namespace="echo",
    initialization_timeout=60.0
)

# Test echo functionality
results = await processor.process(
    '<tool name="echo.echo" args=\'{"message": "Hello MCP!"}\'/>'
)
```

</details>

See `examples/04_mcp_integration/notion_oauth.py`, `examples/04_mcp_integration/stdio_sqlite.py`, and `examples/04_mcp_integration/stdio_echo.py` for complete working implementations.

#### OAuth Token Refresh

<details>
<summary><strong>Click to expand OAuth token refresh guide</strong></summary>

For MCP servers that use OAuth authentication, CHUK Tool Processor supports automatic token refresh when access tokens expire. This prevents your tools from failing due to expired tokens during long-running sessions.

**How it works:**
1. A tool call fails with an OAuth-related error (e.g., "invalid_token", "expired token", "unauthorized")
2. The processor automatically calls your refresh callback
3. It updates the authentication headers with the new token
4. It retries the tool call with fresh credentials
|
|
1409
|
+
|
|
1410
|
+
**Setup with HTTP Streamable:**
|
|
1411
|
+
|
|
1412
|
+
```python
|
|
1413
|
+
from chuk_tool_processor.mcp import setup_mcp_http_streamable
|
|
1414
|
+
|
|
1415
|
+
async def refresh_oauth_token():
|
|
1416
|
+
"""Called automatically when tokens expire."""
|
|
1417
|
+
# Your token refresh logic here
|
|
1418
|
+
# Return dict with new Authorization header
|
|
1419
|
+
new_token = await your_refresh_logic()
|
|
1420
|
+
return {"Authorization": f"Bearer {new_token}"}
|
|
1421
|
+
|
|
1422
|
+
processor, manager = await setup_mcp_http_streamable(
|
|
1423
|
+
servers=[{
|
|
1424
|
+
"name": "notion",
|
|
1425
|
+
"url": "https://mcp.notion.com/mcp",
|
|
1426
|
+
"headers": {"Authorization": f"Bearer {initial_access_token}"}
|
|
1427
|
+
}],
|
|
1428
|
+
namespace="notion",
|
|
1429
|
+
oauth_refresh_callback=refresh_oauth_token # Enable auto-refresh
|
|
1430
|
+
)
|
|
1431
|
+
```
|
|
1432
|
+
|
|
1433
|
+
**Setup with SSE:**
|
|
1434
|
+
|
|
1435
|
+
```python
|
|
1436
|
+
from chuk_tool_processor.mcp import setup_mcp_sse
|
|
1437
|
+
|
|
1438
|
+
async def refresh_oauth_token():
|
|
1439
|
+
"""Refresh expired OAuth token."""
|
|
1440
|
+
# Exchange refresh token for new access token
|
|
1441
|
+
new_access_token = await exchange_refresh_token(refresh_token)
|
|
1442
|
+
return {"Authorization": f"Bearer {new_access_token}"}
|
|
1443
|
+
|
|
1444
|
+
processor, manager = await setup_mcp_sse(
|
|
1445
|
+
servers=[{
|
|
1446
|
+
"name": "atlassian",
|
|
1447
|
+
"url": "https://mcp.atlassian.com/v1/sse",
|
|
1448
|
+
"headers": {"Authorization": f"Bearer {initial_token}"}
|
|
1449
|
+
}],
|
|
1450
|
+
namespace="atlassian",
|
|
1451
|
+
oauth_refresh_callback=refresh_oauth_token
|
|
1452
|
+
)
|
|
1453
|
+
```
|
|
1454
|
+
|
|
1455
|
+
**OAuth errors detected automatically:**
|
|
1456
|
+
- `invalid_token`
|
|
1457
|
+
- `expired token`
|
|
1458
|
+
- `OAuth validation failed`
|
|
1459
|
+
- `unauthorized`
|
|
1460
|
+
- `token expired`
|
|
1461
|
+
- `authentication failed`
|
|
1462
|
+
- `invalid access token`
|
|
1463
|
+
|
|
1464
|
+
**Important notes:**
|
|
1465
|
+
- The refresh callback must return a dict with an `Authorization` key
|
|
1466
|
+
- If refresh fails or returns invalid headers, the original error is returned
|
|
1467
|
+
- Token refresh is attempted only once per tool call (no infinite retry loops)
|
|
1468
|
+
- After successful refresh, the updated headers are used for all subsequent calls
|
|
1469
|
+
|
|
1470
|
+
See `examples/04_mcp_integration/notion_oauth.py` for a complete OAuth 2.1 implementation with PKCE and automatic token refresh.
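The error matching above can be sketched as a case-insensitive substring check over the error message. This is an illustrative sketch of the idea, not the library's actual detection code:

```python
# Markers drawn from the "OAuth errors detected automatically" list above.
OAUTH_ERROR_MARKERS = [
    "invalid_token",
    "expired token",
    "oauth validation failed",
    "unauthorized",
    "token expired",
    "authentication failed",
    "invalid access token",
]

def looks_like_oauth_error(message: str) -> bool:
    """Return True if an error message resembles an OAuth failure."""
    lowered = message.lower()
    return any(marker in lowered for marker in OAUTH_ERROR_MARKERS)
```

When a check like this matches, the refresh callback fires once; any other error is passed through unchanged.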

</details>

### Observability

#### Structured Logging

Enable JSON logging for production observability:

```python
import asyncio
from chuk_tool_processor.logging import setup_logging, get_logger

async def main():
    await setup_logging(
        level="INFO",
        structured=True,  # JSON output (structured=False for human-readable)
        log_file="tool_processor.log"
    )
    logger = get_logger("my_app")
    logger.info("logging ready")

asyncio.run(main())
```

When `structured=True`, logs are output as JSON. When `structured=False`, they're human-readable text.

Example JSON log output:

```json
{
  "timestamp": "2025-01-15T10:30:45.123Z",
  "level": "INFO",
  "tool": "calculator",
  "status": "success",
  "duration_ms": 4.2,
  "cached": false,
  "attempts": 1
}
```
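One payoff of structured logs is that they are trivially machine-readable. A minimal sketch (assuming one JSON object per line, shaped like the example above) that computes the average duration of successful tool executions:

```python
import json

def average_duration_ms(log_lines: list[str]) -> float:
    """Average `duration_ms` across successful tool executions."""
    durations = [
        rec["duration_ms"]
        for rec in map(json.loads, log_lines)
        if rec.get("status") == "success"
    ]
    return sum(durations) / len(durations) if durations else 0.0

lines = [
    '{"tool": "calculator", "status": "success", "duration_ms": 4.2}',
    '{"tool": "search", "status": "error", "duration_ms": 120.0}',
]
print(average_duration_ms(lines))  # → 4.2
```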

#### Automatic Metrics

Metrics are automatically collected for:
- ✅ Tool execution (success/failure rates, duration)
- ✅ Cache performance (hit/miss rates)
- ✅ Parser accuracy (which parsers succeeded)
- ✅ Retry attempts (how many retries per tool)

Access metrics programmatically:

```python
import asyncio
from chuk_tool_processor.logging import metrics

async def main():
    # Metrics are logged automatically, but you can also record them yourself
    await metrics.log_tool_execution(
        tool="custom_tool",
        success=True,
        duration=1.5,
        cached=False,
        attempts=1
    )

asyncio.run(main())
```
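Conceptually these are simple per-tool aggregates. A toy stand-in (not the library's API) showing the kind of bookkeeping behind success/failure rates and durations:

```python
from collections import defaultdict

class ToolStats:
    """Tiny in-memory aggregate, illustrating what the automatic metrics track."""

    def __init__(self):
        self.calls = defaultdict(int)
        self.failures = defaultdict(int)
        self.total_duration = defaultdict(float)

    def record(self, tool: str, success: bool, duration: float) -> None:
        self.calls[tool] += 1
        self.total_duration[tool] += duration
        if not success:
            self.failures[tool] += 1

    def error_rate(self, tool: str) -> float:
        return self.failures[tool] / self.calls[tool] if self.calls[tool] else 0.0

stats = ToolStats()
stats.record("calculator", True, 0.004)
stats.record("calculator", False, 0.010)
print(stats.error_rate("calculator"))  # → 0.5
```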

#### OpenTelemetry & Prometheus (Drop-in Observability)

<details>
<summary><strong>Click to expand complete observability guide</strong></summary>

**3-Line Setup:**

```python
from chuk_tool_processor.observability import setup_observability

setup_observability(
    service_name="my-tool-service",
    enable_tracing=True,   # → OpenTelemetry traces
    enable_metrics=True,   # → Prometheus metrics at :9090/metrics
    metrics_port=9090
)
# That's it! Every tool execution is now automatically traced and metered.
```

**What you get automatically:**
- ✅ Distributed traces (Jaeger, Zipkin, any OTLP collector)
- ✅ Prometheus metrics (error rate, latency P50/P95/P99, cache hit rate)
- ✅ Circuit breaker state monitoring
- ✅ Retry attempt tracking
- ✅ Zero code changes to your tools

**Why Telemetry Matters**: In production, you need to know *what* your tools are doing, *how long* they take, *when* they fail, and *why*. CHUK Tool Processor provides **enterprise-grade telemetry** that operations teams expect, with zero manual instrumentation.

**What You Get (Automatically)**

✅ **Distributed Traces** - Understand exactly what happened in each tool call
- See the complete execution timeline for every tool
- Track retries, cache hits, and circuit breaker state changes
- Correlate failures across your system
- Export to Jaeger, Zipkin, or any OTLP-compatible backend

✅ **Production Metrics** - Monitor health and performance in real time
- Track error rates and latency percentiles (P50/P95/P99)
- Monitor cache hit rates and retry attempts
- Alert on circuit breaker opens and rate limit hits
- Export to Prometheus, Grafana, or any metrics backend

✅ **Zero Configuration** - Works out of the box
- No manual instrumentation needed
- No code changes to existing tools
- Gracefully degrades if the packages are not installed
- Standard OTEL and Prometheus formats

**Installation**

```bash
# Install observability dependencies
pip install chuk-tool-processor[observability]

# Or manually
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp prometheus-client

# Or with uv (recommended)
uv pip install chuk-tool-processor --group observability
```

> **⚠️ SRE Note**: Observability packages are **optional**. If not installed, all observability calls are no-ops: your tools run normally without tracing/metrics. Zero crashes, zero warnings. Safe to deploy without observability dependencies.

**Quick Start: See Your Tools in Action**

```python
import asyncio
from chuk_tool_processor.observability import setup_observability
from chuk_tool_processor import ToolProcessor, initialize, register_tool

@register_tool(name="weather_api")
class WeatherTool:
    async def execute(self, location: str) -> dict:
        # Simulating API call
        return {"temperature": 72, "conditions": "sunny", "location": location}

async def main():
    # 1. Enable observability (one line!)
    setup_observability(
        service_name="weather-service",
        enable_tracing=True,
        enable_metrics=True,
        metrics_port=9090
    )

    # 2. Create processor with production features
    await initialize()
    processor = ToolProcessor(
        enable_caching=True,          # Cache expensive API calls
        enable_retries=True,          # Auto-retry on failures
        enable_circuit_breaker=True,  # Prevent cascading failures
        enable_rate_limiting=True,    # Prevent API abuse
    )

    # 3. Execute tools - automatically traced and metered
    results = await processor.process(
        '<tool name="weather_api" args=\'{"location": "San Francisco"}\'/>'
    )

    print(f"Result: {results[0].result}")
    print(f"Duration: {results[0].duration}s")
    print(f"Cached: {results[0].cached}")

asyncio.run(main())
```

**View Your Data**

```bash
# Start Jaeger for trace visualization
docker run -d -p 4317:4317 -p 16686:16686 jaegertracing/all-in-one:latest

# Start your application
python your_app.py

# View distributed traces
open http://localhost:16686

# View Prometheus metrics
curl http://localhost:9090/metrics | grep tool_
```

**What Gets Traced (Automatic Spans)**

Every execution layer creates standardized OpenTelemetry spans:

| Span Name | When Created | Key Attributes |
|-----------|--------------|----------------|
| `tool.execute` | Every tool execution | `tool.name`, `tool.namespace`, `tool.duration_ms`, `tool.cached`, `tool.error`, `tool.success` |
| `tool.cache.lookup` | Cache lookup | `cache.hit` (true/false), `cache.operation=lookup` |
| `tool.cache.set` | Cache write | `cache.ttl`, `cache.operation=set` |
| `tool.retry.attempt` | Each retry | `retry.attempt`, `retry.max_attempts`, `retry.success` |
| `tool.circuit_breaker.check` | Circuit state check | `circuit.state` (CLOSED/OPEN/HALF_OPEN) |
| `tool.rate_limit.check` | Rate limit check | `rate_limit.allowed` (true/false) |

**Example trace hierarchy:**
```
tool.execute (weather_api)
├── tool.cache.lookup (miss)
├── tool.retry.attempt (0)
│   └── tool.execute (actual API call)
├── tool.retry.attempt (1) [if first failed]
└── tool.cache.set (store result)
```
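The nesting above can be mimicked with a toy context manager. This is a stand-in for real OpenTelemetry spans (which `setup_observability` manages for you), just to show how child spans close before their parent:

```python
import time
from contextlib import contextmanager

@contextmanager
def span(name: str, log: list):
    """Toy stand-in for an OpenTelemetry span: records name and duration on exit."""
    start = time.perf_counter()
    try:
        yield
    finally:
        log.append((name, time.perf_counter() - start))

log = []
with span("tool.execute", log):
    with span("tool.cache.lookup", log):
        pass  # cache miss
    with span("tool.retry.attempt", log):
        time.sleep(0.01)  # the actual call

print([name for name, _ in log])
# → ['tool.cache.lookup', 'tool.retry.attempt', 'tool.execute']
```

Child spans are recorded first because they exit first, which is exactly the parent/child structure a trace viewer like Jaeger reconstructs.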

**What Gets Metered (Automatic Metrics)**

Standard Prometheus metrics exposed at `/metrics`:

| Metric | Type | Labels | Use For |
|--------|------|--------|---------|
| `tool_executions_total` | Counter | `tool`, `namespace`, `status` | Error rate, request volume |
| `tool_execution_duration_seconds` | Histogram | `tool`, `namespace` | P50/P95/P99 latency |
| `tool_cache_operations_total` | Counter | `tool`, `operation`, `result` | Cache hit rate |
| `tool_retry_attempts_total` | Counter | `tool`, `attempt`, `success` | Retry frequency |
| `tool_circuit_breaker_state` | Gauge | `tool` | Circuit health (0=CLOSED, 1=OPEN, 2=HALF_OPEN) |
| `tool_circuit_breaker_failures_total` | Counter | `tool` | Failure count |
| `tool_rate_limit_checks_total` | Counter | `tool`, `allowed` | Rate limit hits |
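Counters only ever increase, so rates come from comparing two scrapes. The arithmetic behind an error-rate query, in plain Python with hypothetical sample values:

```python
# Hypothetical counter samples for one tool, scraped 5 minutes apart.
prev = {("calculator", "success"): 940, ("calculator", "error"): 10}
curr = {("calculator", "success"): 1120, ("calculator", "error"): 22}

def rate(key, window_s: float = 300.0) -> float:
    """Per-second increase of a counter over the scrape window."""
    return (curr[key] - prev[key]) / window_s

errors = rate(("calculator", "error"))
total = errors + rate(("calculator", "success"))
print(f"error rate: {errors / total:.1%}")  # prints the fraction of calls that failed
```

This is the same division PromQL performs when you divide one `rate(...)` by another.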

**Useful PromQL Queries**

```promql
# Error rate per tool (last 5 minutes)
rate(tool_executions_total{status="error"}[5m])
  / rate(tool_executions_total[5m])

# P95 latency
histogram_quantile(0.95, rate(tool_execution_duration_seconds_bucket[5m]))

# Cache hit rate
rate(tool_cache_operations_total{result="hit"}[5m])
  / rate(tool_cache_operations_total{operation="lookup"}[5m])

# Tools currently circuit broken
tool_circuit_breaker_state == 1

# Retry rate (how often tools need retries)
rate(tool_retry_attempts_total{attempt!="0"}[5m])
  / rate(tool_executions_total[5m])
```

**Configuration**

Configure via environment variables:

```bash
# OTLP endpoint (where traces are sent)
export OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317

# Service name (shown in traces)
export OTEL_SERVICE_NAME=production-api

# Sampling (reduce overhead in high-traffic scenarios)
export OTEL_TRACES_SAMPLER=traceidratio
export OTEL_TRACES_SAMPLER_ARG=0.1  # Sample 10% of traces
```

Or in code:

```python
status = setup_observability(
    service_name="my-service",
    enable_tracing=True,
    enable_metrics=True,
    metrics_port=9090,
    metrics_host="0.0.0.0"  # Allow external Prometheus scraping
)

# Check status
if status["tracing_enabled"]:
    print("Traces exporting to OTLP endpoint")
if status["metrics_server_started"]:
    print("Metrics available at http://localhost:9090/metrics")
```

**Production Integration**

**With Grafana + Prometheus:**
```yaml
# prometheus.yml
scrape_configs:
  - job_name: 'chuk-tool-processor'
    scrape_interval: 15s
    static_configs:
      - targets: ['app:9090']
```

**With OpenTelemetry Collector:**
```yaml
# otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  jaeger:
    endpoint: jaeger:14250
  prometheus:
    endpoint: 0.0.0.0:8889

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [jaeger]
```

**With Cloud Providers:**
```bash
# AWS X-Ray
export OTEL_TRACES_SAMPLER=xray

# Google Cloud Trace
export OTEL_EXPORTER_OTLP_ENDPOINT=https://cloudtrace.googleapis.com/v1/projects/PROJECT_ID/traces

# Datadog
export OTEL_EXPORTER_OTLP_ENDPOINT=http://datadog-agent:4317
```

**Why This Matters**

❌ **Without telemetry:**
- "Why is this tool slow?" → No idea
- "Is caching helping?" → Guessing
- "Did that retry work?" → Check logs manually
- "Is the circuit breaker working?" → Hope so
- "Which tool is failing?" → Debug blindly

✅ **With telemetry:**
- See the exact execution timeline in Jaeger
- Monitor cache hit rate in Grafana
- Alert when the retry rate spikes
- Dashboards show circuit breaker states
- Metrics pinpoint the failing tool immediately

**Learn More**

📖 **Complete Guide**: See [`OBSERVABILITY.md`](OBSERVABILITY.md) for:
- Complete span and metric specifications
- Architecture and implementation details
- Integration guides (Jaeger, Grafana, OTEL Collector)
- Testing observability features
- Environment variable configuration

🎯 **Working Example**: See `examples/02_production_features/observability_demo.py` for a complete demonstration with retries, caching, and circuit breakers.

**Benefits**

✅ **Drop-in** - One function call, zero code changes
✅ **Automatic** - All execution layers instrumented
✅ **Standard** - OTEL + Prometheus (works with existing tools)
✅ **Production-ready** - Ops teams get exactly what they expect
✅ **Optional** - Gracefully degrades if packages are not installed
✅ **Zero-overhead** - No performance impact when disabled

</details>

### Error Handling

```python
results = await processor.process(llm_output)

for result in results:
    if result.error:
        print(f"Tool '{result.tool}' failed: {result.error}")
        print(f"Duration: {result.duration}s")
    else:
        print(f"Tool '{result.tool}' succeeded: {result.result}")
```

### Testing Tools

```python
import pytest
from chuk_tool_processor import ToolProcessor, initialize

@pytest.mark.asyncio
async def test_calculator():
    await initialize()
    processor = ToolProcessor()

    results = await processor.process(
        '<tool name="calculator" args=\'{"operation": "add", "a": 5, "b": 3}\'/>'
    )

    assert results[0].result["result"] == 8
```

**Fake tool pattern for testing:**

```python
import pytest
from chuk_tool_processor import ToolProcessor, register_tool, initialize

@register_tool(name="fake_tool")
class FakeTool:
    """No-op tool for testing processor behavior."""
    call_count = 0

    async def execute(self, **kwargs) -> dict:
        FakeTool.call_count += 1
        return {"called": True, "args": kwargs}

@pytest.mark.asyncio
async def test_processor_with_fake_tool():
    await initialize()
    processor = ToolProcessor()

    # Reset counter
    FakeTool.call_count = 0

    # Execute fake tool
    results = await processor.process(
        '<tool name="fake_tool" args=\'{"test_arg": "value"}\'/>'
    )

    # Assert behavior
    assert FakeTool.call_count == 1
    assert results[0].result["called"] is True
    assert results[0].result["args"]["test_arg"] == "value"
```

## Configuration

### Timeout Configuration

CHUK Tool Processor uses a unified timeout configuration system that applies to all MCP transports (HTTP Streamable, SSE, STDIO) and the StreamManager. Instead of managing dozens of individual timeout values, there are just **4 logical timeout categories**:

```python
from chuk_tool_processor.mcp.transport import TimeoutConfig

# Create custom timeout configuration
# (Defaults are: connect=30, operation=30, quick=5, shutdown=2)
timeout_config = TimeoutConfig(
    connect=30.0,    # Connection establishment, initialization, session discovery
    operation=30.0,  # Normal operations (tool calls, listing tools/resources/prompts)
    quick=5.0,       # Fast health checks and pings
    shutdown=2.0     # Cleanup and shutdown operations
)
```
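To illustrate how the four categories map onto kinds of operations, here is a stand-in dataclass with a hypothetical lookup helper. It is not the library's `TimeoutConfig` (a Pydantic model); it only mirrors the default values documented above:

```python
from dataclasses import dataclass

@dataclass
class TimeoutConfigSketch:
    """Stand-in mirroring the documented defaults."""
    connect: float = 30.0
    operation: float = 30.0
    quick: float = 5.0
    shutdown: float = 2.0

    def for_operation(self, kind: str) -> float:
        """Hypothetical helper: pick the timeout category for an operation kind."""
        mapping = {
            "connect": self.connect,    # connection setup, initialization
            "tool_call": self.operation,  # normal tool operations
            "ping": self.quick,         # health checks
            "close": self.shutdown,     # cleanup
        }
        return mapping[kind]
```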

**Using timeout configuration with StreamManager:**

```python
from chuk_tool_processor.mcp.stream_manager import StreamManager
from chuk_tool_processor.mcp.transport import TimeoutConfig

# Create StreamManager with custom timeouts
timeout_config = TimeoutConfig(
    connect=60.0,    # Longer for slow initialization
    operation=45.0,  # Longer for heavy operations
    quick=3.0,       # Faster health checks
    shutdown=5.0     # More time for cleanup
)

manager = StreamManager(timeout_config=timeout_config)
```

**Timeout categories explained:**

| Category | Default | Used For | Examples |
|----------|---------|----------|----------|
| `connect` | 30.0s | Connection setup, initialization, discovery | HTTP connection, SSE session discovery, STDIO subprocess launch |
| `operation` | 30.0s | Normal tool operations | Tool calls, listing tools/resources/prompts, `get_tools()` |
| `quick` | 5.0s | Fast health/status checks | Ping operations, health checks |
| `shutdown` | 2.0s | Cleanup and teardown | Transport close, connection cleanup |

**Why this matters:**
- ✅ **Simple**: 4 timeout values instead of 20+
- ✅ **Consistent**: Same timeout behavior across all transports
- ✅ **Configurable**: Adjust timeouts based on your environment (slow networks, large datasets, etc.)
- ✅ **Type-safe**: Pydantic validation ensures correct values

**Example: Adjusting for slow environments**

```python
from chuk_tool_processor.mcp import setup_mcp_stdio
from chuk_tool_processor.mcp.transport import TimeoutConfig

# For slow networks or resource-constrained environments
slow_timeouts = TimeoutConfig(
    connect=120.0,   # Allow more time for package downloads
    operation=60.0,  # Allow more time for heavy operations
    quick=10.0,      # Be patient with health checks
    shutdown=10.0    # Allow thorough cleanup
)

processor, manager = await setup_mcp_stdio(
    config_file="mcp_config.json",
    servers=["sqlite"],
    namespace="db",
    initialization_timeout=120.0
)

# Set custom timeouts on the manager
manager.timeout_config = slow_timeouts
```

### Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `CHUK_TOOL_REGISTRY_PROVIDER` | `memory` | Registry backend |
| `CHUK_DEFAULT_TIMEOUT` | `30.0` | Default timeout (seconds) |
| `CHUK_LOG_LEVEL` | `INFO` | Logging level |
| `CHUK_STRUCTURED_LOGGING` | `true` | Enable JSON logging |
| `MCP_BEARER_TOKEN` | - | Bearer token for MCP SSE |
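The library reads these internally; if you want to mirror the same settings in your own code, a minimal sketch of the usual parsing pattern (the helper name is ours, not the library's):

```python
import os

def env_float(name: str, default: float) -> float:
    """Read a float setting from the environment, falling back to a default."""
    raw = os.environ.get(name)
    return float(raw) if raw is not None else default

timeout = env_float("CHUK_DEFAULT_TIMEOUT", 30.0)
structured = os.environ.get("CHUK_STRUCTURED_LOGGING", "true").lower() == "true"
```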

### ToolProcessor Options

```python
processor = ToolProcessor(
    default_timeout=30.0,        # Timeout per tool
    max_concurrency=10,          # Max concurrent executions
    enable_caching=True,         # Result caching
    cache_ttl=300,               # Cache TTL (seconds)
    enable_rate_limiting=False,  # Rate limiting
    global_rate_limit=None,      # Global cap (requests per minute)
    enable_retries=True,         # Auto-retry failures
    max_retries=3,               # Max retry attempts
    # Optional per-tool rate limits: {"tool.name": (requests, per_seconds)}
    tool_rate_limits=None
)
```

### Performance & Tuning

| Parameter | Default | When to Adjust |
|-----------|---------|----------------|
| `default_timeout` | `30.0` | Increase for slow tools (e.g., AI APIs) |
| `max_concurrency` | `10` | Increase for I/O-bound tools, decrease for CPU-bound |
| `enable_caching` | `True` | Keep on for deterministic tools |
| `cache_ttl` | `300` | Longer for stable data, shorter for real-time |
| `enable_rate_limiting` | `False` | Enable when hitting API rate limits |
| `global_rate_limit` | `None` | Set a global requests/minute cap across all tools |
| `enable_retries` | `True` | Disable for non-idempotent operations |
| `max_retries` | `3` | Increase for flaky external APIs |
| `tool_rate_limits` | `None` | Dict mapping tool name → (max_requests, window_seconds). Overrides `global_rate_limit` per tool |

**Per-tool rate limiting example:**

```python
processor = ToolProcessor(
    enable_rate_limiting=True,
    global_rate_limit=100,  # 100 requests/minute across all tools
    tool_rate_limits={
        "notion.search_pages": (10, 60),  # 10 requests per 60 seconds
        "expensive_api": (5, 60),         # 5 requests per minute
        "local_tool": (1000, 60),         # 1000 requests per minute (local is fast)
    }
)
```
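The `(max_requests, window_seconds)` semantics can be illustrated with a tiny sliding-window limiter. This is a sketch of the concept, not the wrapper the processor actually uses:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `max_requests` calls per `window_seconds`."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = deque()

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have fallen out of the window
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_requests:
            self.timestamps.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(max_requests=2, window_seconds=60)
print([limiter.allow(now=t) for t in (0.0, 1.0, 2.0, 61.0)])
# → [True, True, False, True]
```

The third call is rejected because two calls already landed in the 60-second window; by t=61 both have expired, so the fourth call passes.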

### Security Model

CHUK Tool Processor provides multiple layers of safety:

| Concern | Protection | Configuration |
|---------|------------|---------------|
| **Timeouts** | Every tool has a timeout | `default_timeout=30.0` |
| **Process Isolation** | Run tools in separate processes | `strategy=IsolatedStrategy()` |
| **Rate Limiting** | Prevent abuse and API overuse | `enable_rate_limiting=True` |
| **Input Validation** | Pydantic validation on arguments | Use `ValidatedTool` |
| **Error Containment** | Failures don't crash the processor | Built-in exception handling |
| **Retry Limits** | Prevent infinite retry loops | `max_retries=3` |

**Important Security Notes:**
- **Environment Variables**: The subprocess strategy inherits the parent process environment by default. For stricter isolation, use container-level controls (Docker, cgroups).
- **Network Access**: Tools inherit network access from the host. For network isolation, use OS-level sandboxing (containers, network namespaces, firewalls).
- **Resource Limits**: For hard CPU/memory caps, use OS-level controls (cgroups on Linux, Job Objects on Windows, or Docker resource limits).
- **Secrets**: Never injected automatically. Pass secrets explicitly via tool arguments or environment variables, and prefer scoped env vars for subprocess tools to minimize exposure.
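A scoped environment for a subprocess tool can be built by whitelisting variables instead of inheriting everything. A minimal sketch (the helper and the allowlist are illustrative, not part of the library):

```python
import os
import subprocess
import sys

def scoped_env(allow: list) -> dict:
    """Copy only whitelisted variables from the parent environment."""
    return {k: v for k, v in os.environ.items() if k in allow}

# Only PATH and the one secret the tool actually needs are passed through.
env = scoped_env(["PATH", "API_KEY"])
# subprocess.run([sys.executable, "tool.py"], env=env)  # "tool.py" is hypothetical
```

Everything not in the allowlist (cloud credentials, unrelated tokens) stays invisible to the child process.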

#### OS-Level Hardening

For production deployments, add these hardening measures:

| Concern | Docker/Container Solution | Direct Example |
|---------|--------------------------|----------------|
| **CPU/RAM caps** | `--cpus`, `--memory` flags | `docker run --cpus="1.5" --memory="512m" myapp` |
| **Network egress** | Deny-by-default with firewall rules | `--network=none` or a custom network with egress filtering |
| **Filesystem** | Read-only root + writable scratch | `--read-only --tmpfs /tmp:rw,size=100m` |

**Example: Run the processor in a locked-down container**

```bash
# Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt --no-cache-dir
COPY . .
USER nobody  # Run as non-root
CMD ["python", "app.py"]

# Run with resource limits and network restrictions
docker run \
  --cpus="2" \
  --memory="1g" \
  --memory-swap="1g" \
  --read-only \
  --tmpfs /tmp:rw,size=200m,mode=1777 \
  --network=custom-net \
  --cap-drop=ALL \
  myapp:latest
```

**Network egress controls (deny-by-default)**

```bash
# Create a restricted network with no internet access (for local-only tools)
docker network create --internal restricted-net

# Or use iptables for per-tool CIDR allowlists
iptables -A OUTPUT -d 10.0.0.0/8 -j ACCEPT      # Allow private ranges
iptables -A OUTPUT -d 172.16.0.0/12 -j ACCEPT
iptables -A OUTPUT -d 192.168.0.0/16 -j ACCEPT
iptables -A OUTPUT -j DROP                      # Deny everything else
```

Example security-focused setup for untrusted code:

```python
import asyncio
from chuk_tool_processor import ToolProcessor, IsolatedStrategy, get_default_registry

async def create_secure_processor():
    # Maximum isolation for untrusted code:
    # runs each tool in a separate process
    registry = await get_default_registry()

    processor = ToolProcessor(
        strategy=IsolatedStrategy(
            registry=registry,
            max_workers=4,
            default_timeout=10.0
        ),
        default_timeout=10.0,
        enable_rate_limiting=True,
        global_rate_limit=50,  # 50 requests/minute
        max_retries=2
    )
    return processor

# For even stricter isolation:
# - Run the entire processor inside a Docker container with resource limits
# - Use network policies to restrict outbound connections
# - Use read-only filesystems where possible
```

## Design Goals & Non-Goals

**What CHUK Tool Processor does:**
- ✅ Parse tool calls from any LLM format (XML, OpenAI, JSON)
- ✅ Execute tools with production policies (timeouts, retries, rate limits, caching)
- ✅ Isolate untrusted code in subprocesses
- ✅ Connect to remote tool servers via MCP (HTTP/STDIO/SSE)
- ✅ Provide composable execution layers (strategies + wrappers)
- ✅ Export tool schemas for LLM prompting

**What CHUK Tool Processor explicitly does NOT do:**
- ❌ Manage conversations or chat history
- ❌ Provide prompt engineering or prompt templates
- ❌ Bundle an LLM client (bring your own OpenAI/Anthropic/local)
- ❌ Implement agent frameworks or chains
- ❌ Make decisions about which tools to call

**Why this matters:** CHUK Tool Processor stays focused on reliable tool execution. It's a building block, not a framework. This makes it composable with any LLM application architecture.

## Architecture Principles

1. **Composability**: Stack strategies and wrappers like middleware
2. **Async-First**: Built for `async/await` from the ground up
3. **Production-Ready**: Timeouts, retries, caching, rate limiting, all built in
4. **Pluggable**: Parsers, strategies, transports: swap components as needed
5. **Observable**: Structured logging and metrics collection throughout

## Examples

Check out the [`examples/`](examples/) directory for complete working examples:

### Getting Started
- **60-second hello**: `examples/01_getting_started/hello_tool.py` - Absolute minimal example (copy-paste-run)
- **Quick start**: `examples/01_getting_started/quickstart_demo.py` - Basic tool registration and execution
- **Execution strategies**: `examples/01_getting_started/execution_strategies_demo.py` - InProcess vs Subprocess
- **Production wrappers**: `examples/02_production_features/wrappers_demo.py` - Caching, retries, rate limiting
- **Streaming tools**: `examples/03_streaming/streaming_demo.py` - Real-time incremental results
- **Streaming tool calls**: `examples/03_streaming/streaming_tool_calls_demo.py` - Handle partial tool calls from streaming LLMs
- **Schema helper**: `examples/05_schema_and_types/schema_helper_demo.py` - Auto-generate schemas from typed tools (Pydantic → OpenAI/Anthropic/MCP)
- **Observability**: `examples/02_production_features/observability_demo.py` - OpenTelemetry + Prometheus integration

### MCP Integration (Real-World)
- **Notion + OAuth**: `examples/04_mcp_integration/notion_oauth.py` - Complete OAuth 2.1 flow with HTTP Streamable
  - Shows: Authorization Server discovery, client registration, PKCE flow, token exchange
- **SQLite Local**: `examples/04_mcp_integration/stdio_sqlite.py` - Local database access via STDIO
  - Shows: Command/args passing, environment variables, file paths, initialization timeouts
- **Echo Server**: `examples/04_mcp_integration/stdio_echo.py` - Minimal STDIO transport example
  - Shows: Simplest possible MCP integration for testing
- **Atlassian + OAuth**: `examples/04_mcp_integration/atlassian_sse.py` - OAuth with SSE transport (legacy)

### Advanced MCP
- **Plugin system**: `examples/06_plugins/plugins_builtins_demo.py`, `examples/06_plugins/plugins_custom_parser_demo.py`
## FAQ

**Q: What happens if a tool takes too long?**
A: The tool is cancelled after `default_timeout` seconds and returns an error result. The processor continues with other tools.
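The timeout-and-continue behavior can be pictured with plain `asyncio`; this is a stdlib sketch of the pattern, not the library's internals:

```python
import asyncio

async def slow_tool() -> str:
    await asyncio.sleep(10)  # simulates a tool that takes too long
    return "never reached"

async def fast_tool() -> str:
    return "ok"

async def run_with_timeout(coro, timeout: float) -> dict:
    # On timeout the task is cancelled and an error result is returned,
    # so one slow tool never blocks the rest of the batch.
    try:
        return {"result": await asyncio.wait_for(coro, timeout=timeout)}
    except asyncio.TimeoutError:
        return {"error": f"timed out after {timeout}s"}

async def main() -> list[dict]:
    return await asyncio.gather(
        run_with_timeout(slow_tool(), timeout=0.1),
        run_with_timeout(fast_tool(), timeout=0.1),
    )

results = asyncio.run(main())
```

The slow tool yields an error dict while the fast one completes normally, mirroring how the processor keeps going after a timeout.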

**Q: Can I mix local and remote (MCP) tools?**
A: Yes! Register local tools first, then use `setup_mcp_*` to add remote tools. They all work in the same processor.

**Q: How do I handle malformed LLM outputs?**
A: The processor is resilient: invalid tool calls are logged and return error results without crashing.
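The general shape of that resilience is tolerant parsing: convert each raw payload into either a call or an error record, never an exception. A stdlib sketch (not the library's parser):

```python
import json

def parse_tool_call(raw: str) -> dict:
    """Turn a raw LLM tool-call payload into a call or an error record."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError as exc:
        # Malformed JSON becomes an error result instead of a crash.
        return {"error": f"unparseable tool call: {exc}"}
    if not isinstance(call, dict) or "tool" not in call:
        return {"error": "tool call missing 'tool' field"}
    return {"tool": call["tool"], "arguments": call.get("arguments", {})}

good = parse_tool_call('{"tool": "search", "arguments": {"q": "weather"}}')
bad = parse_tool_call('{"tool": "search", "arguments": ')  # truncated output
```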

**Q: What about API rate limits?**
A: Use `enable_rate_limiting=True` and set `tool_rate_limits` per tool or `global_rate_limit` for all tools.
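Under the hood, a rate limiter is typically a token bucket. Here is a minimal, deterministic sketch of the idea (with an injected clock for testability; this is not the library's implementation):

```python
class TokenBucket:
    """Allow at most `rate` calls per `per` seconds."""

    def __init__(self, rate: int, per: float, clock):
        self.capacity = rate
        self.tokens = float(rate)
        self.fill_rate = rate / per
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A fake clock makes the example deterministic.
t = [0.0]
bucket = TokenBucket(rate=2, per=1.0, clock=lambda: t[0])
burst = [bucket.allow() for _ in range(3)]  # third call exceeds the budget
t[0] = 1.0                                  # one second later the bucket refills
later = bucket.allow()
```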

**Q: Can tools return files or binary data?**
A: Yes. Tools can return any JSON-serializable data including base64-encoded files, URLs, or structured data.
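For binary payloads, the common pattern is to base64-encode the bytes into a JSON-serializable envelope (a general sketch, not a library-specific format):

```python
import base64
import json

def to_json_payload(filename: str, data: bytes) -> dict:
    """Wrap raw bytes in a JSON-serializable envelope."""
    return {
        "filename": filename,
        "encoding": "base64",
        "content": base64.b64encode(data).decode("ascii"),
    }

def from_json_payload(payload: dict) -> bytes:
    return base64.b64decode(payload["content"])

payload = to_json_payload("chart.png", b"\x89PNG\r\n...")
wire = json.dumps(payload)  # safe to return from a tool
restored = from_json_payload(json.loads(wire))
```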

**Q: How do I test my tools?**
A: Use pytest with `@pytest.mark.asyncio`. See [Testing Tools](#testing-tools) for examples.
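A tool's execute coroutine is just an async function, so a test boils down to awaiting it and asserting on the result. The sketch below uses a hypothetical `AddTool` and drives the coroutine with plain `asyncio`; under pytest you would decorate the test with `@pytest.mark.asyncio` and let the plugin do the driving:

```python
import asyncio

class AddTool:
    """A hypothetical tool, defined here only for the test sketch."""

    async def execute(self, a: int, b: int) -> int:
        return a + b

async def test_add_tool() -> None:
    tool = AddTool()
    result = await tool.execute(a=2, b=3)
    assert result == 5

# pytest-asyncio would run the coroutine for you; here we do it directly.
asyncio.run(test_add_tool())
```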

**Q: Does this work with streaming LLM responses?**
A: Yes. As tool calls appear in the stream, extract and process them. The processor handles partial/incremental tool call lists.
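Streaming APIs typically emit tool-call arguments as string fragments, so the consumer accumulates fragments per call until the JSON is complete. A minimal, provider-agnostic sketch (the field names are illustrative, not any specific SDK's):

```python
import json

def accumulate(deltas: list[dict]) -> list[dict]:
    """Merge streamed tool-call fragments into complete calls, keyed by index."""
    calls: dict[int, dict] = {}
    for delta in deltas:
        call = calls.setdefault(delta["index"], {"name": "", "arguments": ""})
        call["name"] += delta.get("name", "")
        call["arguments"] += delta.get("arguments", "")
    # Parse arguments only once each call's JSON is complete.
    return [
        {"name": c["name"], "arguments": json.loads(c["arguments"])}
        for c in calls.values()
    ]

chunks = [
    {"index": 0, "name": "get_weather"},
    {"index": 0, "arguments": '{"city": '},
    {"index": 0, "arguments": '"London"}'},
]
calls = accumulate(chunks)
```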

**Q: What's the difference between InProcess and Isolated strategies?**
A: InProcess is faster (same process), Isolated is safer (separate subprocess). Use InProcess for trusted code and Isolated for untrusted code.
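The safety difference comes down to process boundaries: a crash inside a subprocess cannot take down the host. A stdlib illustration of that principle (not the `IsolatedStrategy` implementation):

```python
import subprocess
import sys

# Run hostile code in a child interpreter; a hard exit stays contained.
proc = subprocess.run(
    [sys.executable, "-c", "import os; os._exit(42)"],
    capture_output=True,
)
assert proc.returncode == 42  # the child died, but this process carries on
```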

## Comparison with Other Tools

| Feature | chuk-tool-processor | LangChain Tools | OpenAI Tools | MCP SDK |
|---------|-------------------|-----------------|--------------|---------|
| **Async-native** | ✅ | ⚠️ Partial | ✅ | ✅ |
| **Process isolation** | ✅ IsolatedStrategy | ❌ | ❌ | ⚠️ |
| **Built-in retries** | ✅ | ❌ † | ❌ | ❌ |
| **Rate limiting** | ✅ | ❌ † | ⚠️ ‡ | ❌ |
| **Caching** | ✅ | ⚠️ † | ❌ ‡ | ❌ |
| **Idempotency & de-dup** | ✅ SHA256 keys | ❌ | ❌ | ❌ |
| **Per-tool policies** | ✅ (timeouts/retries/limits) | ⚠️ | ❌ | ❌ |
| **Multiple parsers** | ✅ (XML, OpenAI, JSON) | ⚠️ | ✅ | ✅ |
| **Streaming tools** | ✅ | ⚠️ | ⚠️ | ✅ |
| **MCP integration** | ✅ All transports | ❌ | ❌ | ✅ (protocol only) |
| **Zero-config start** | ✅ | ❌ | ✅ | ⚠️ |
| **Production-ready** | ✅ Timeouts, metrics | ⚠️ | ⚠️ | ⚠️ |

**Notes:**
- † LangChain offers caching and rate-limiting through separate libraries (`langchain-cache`, external rate limiters), but they're not core features.
- ‡ OpenAI Tools can be combined with external rate limiters and caches, but tool execution itself doesn't include these features.

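The idempotency row refers to deduplicating repeated calls by a deterministic key; the usual recipe hashes the tool name plus canonicalized arguments. A general sketch of that idea (not the library's exact key format):

```python
import hashlib
import json

def idempotency_key(tool: str, arguments: dict) -> str:
    """Deterministic SHA-256 key: same tool + same args gives the same key."""
    canonical = json.dumps({"tool": tool, "args": arguments}, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

k1 = idempotency_key("search", {"q": "weather", "lang": "en"})
k2 = idempotency_key("search", {"lang": "en", "q": "weather"})  # key order differs
assert k1 == k2  # canonicalization makes duplicate calls detectable
```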

**When to use chuk-tool-processor:**
- You need production-ready tool execution (timeouts, retries, caching)
- You want to connect to MCP servers (local or remote)
- You need to run untrusted code safely (subprocess isolation)
- You're building a custom LLM application (not using a framework)

**When to use alternatives:**
- **LangChain**: You want a full-featured LLM framework with chains, agents, and memory
- **OpenAI Tools**: You only use OpenAI and don't need advanced execution features
- **MCP SDK**: You're building an MCP server, not a client

## Related Projects

- **[chuk-mcp](https://github.com/chrishayuk/chuk-mcp)**: Low-level Model Context Protocol client
  - Powers the MCP transport layer in chuk-tool-processor
  - Use directly if you need protocol-level control
  - Use chuk-tool-processor if you want high-level tool execution

## Development & Publishing

### For Contributors

Development setup:

```bash
# Clone repository
git clone https://github.com/chrishayuk/chuk-tool-processor.git
cd chuk-tool-processor

# Install development dependencies
uv sync --dev

# Run tests
make test

# Run all quality checks
make check
```

### For Maintainers: Publishing Releases

The project uses **fully automated CI/CD** for releases. Publishing is as simple as:

```bash
# 1. Bump version
make bump-patch  # or bump-minor, bump-major

# 2. Commit version change
git add pyproject.toml
git commit -m "version X.Y.Z"
git push

# 3. Create release (automated)
make publish
```

This will:
- Create and push a git tag
- Trigger GitHub Actions to create a release with auto-generated changelog
- Run tests across all platforms and Python versions
- Build and publish to PyPI automatically

For detailed release documentation, see:
- **[RELEASING.md](RELEASING.md)** - Complete release process guide
- **[docs/CI-CD.md](docs/CI-CD.md)** - Full CI/CD pipeline documentation

## Stability & Versioning

CHUK Tool Processor follows **[Semantic Versioning 2.0.0](https://semver.org/)** for predictable upgrades:

* **Breaking changes** = **major** version bump (e.g., 1.x → 2.0)
* **New features** (backward-compatible) = **minor** version bump (e.g., 1.2 → 1.3)
* **Bug fixes** (backward-compatible) = **patch** version bump (e.g., 1.2.3 → 1.2.4)

**Public API surface**: Everything exported via the package root (`from chuk_tool_processor import ...`) is considered public API and follows semver guarantees.

**Deprecation policy**: Deprecated APIs will:
1. Log a warning for **one minor release**
2. Be removed in the **next major release**
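In Python, that policy is typically implemented with `DeprecationWarning`; a generic sketch of the pattern (the `old_helper`/`new_helper` names are hypothetical, not a chuk-tool-processor API):

```python
import warnings

def new_helper() -> str:
    return "ok"

def old_helper() -> str:
    """Deprecated: warns for one minor release, removed in the next major."""
    warnings.warn(
        "old_helper() is deprecated; use new_helper() instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return new_helper()

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_helper()  # still works, but emits a DeprecationWarning
```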

**Upgrading safely**:
* Patch and minor updates are **safe to deploy** without code changes
* Major updates may require migration; see the release notes
* Pin to `chuk-tool-processor~=1.2` for minor updates only, or `chuk-tool-processor==1.2.3` for exact versions

## Contributing & Support

- **GitHub**: [chrishayuk/chuk-tool-processor](https://github.com/chrishayuk/chuk-tool-processor)
- **Issues**: [Report bugs and request features](https://github.com/chrishayuk/chuk-tool-processor/issues)
- **Discussions**: [Community discussions](https://github.com/chrishayuk/chuk-tool-processor/discussions)
- **License**: MIT

---

**Remember**: CHUK Tool Processor is the missing link between LLM outputs and reliable tool execution. It's not trying to be everything; it's trying to be the best at one thing: processing tool calls in production.

Built with ❤️ by the CHUK AI team for the LLM tool integration community.