chuk-tool-processor 0.6.19__py3-none-any.whl → 0.6.21__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.


@@ -1,698 +0,0 @@
1
- Metadata-Version: 2.4
2
- Name: chuk-tool-processor
3
- Version: 0.6.19
4
- Summary: Async-native framework for registering, discovering, and executing tools referenced in LLM responses
5
- Author-email: CHUK Team <chrishayuk@somejunkmailbox.com>
6
- Maintainer-email: CHUK Team <chrishayuk@somejunkmailbox.com>
7
- License: MIT
8
- Keywords: llm,tools,async,ai,openai,mcp,model-context-protocol,tool-calling,function-calling
9
- Classifier: Development Status :: 4 - Beta
10
- Classifier: Intended Audience :: Developers
11
- Classifier: License :: OSI Approved :: MIT License
12
- Classifier: Operating System :: OS Independent
13
- Classifier: Programming Language :: Python :: 3
14
- Classifier: Programming Language :: Python :: 3.11
15
- Classifier: Programming Language :: Python :: 3.12
16
- Classifier: Programming Language :: Python :: 3.13
17
- Classifier: Topic :: Software Development :: Libraries :: Python Modules
18
- Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
19
- Classifier: Framework :: AsyncIO
20
- Classifier: Typing :: Typed
21
- Requires-Python: >=3.11
22
- Description-Content-Type: text/markdown
23
- Requires-Dist: chuk-mcp>=0.5.4
24
- Requires-Dist: dotenv>=0.9.9
25
- Requires-Dist: psutil>=7.0.0
26
- Requires-Dist: pydantic>=2.11.3
27
- Requires-Dist: uuid>=1.30
28
-
29
- # CHUK Tool Processor - Architectural Analysis
30
-
31
- ## Overview
32
- The CHUK Tool Processor is an async-native framework for registering, discovering, and executing tools referenced in LLM responses. Built from the ground up for production use, it pairs comprehensive error handling, monitoring, and scalability features with a modular architecture: multiple transport mechanisms, pluggable execution strategies, and rich supporting tooling.
33
-
34
- ## Quick Start Example
35
- ```python
36
- import asyncio
- from chuk_tool_processor import ToolProcessor, register_tool, initialize
-
- # 1. Create a tool
- @register_tool(name="calculator", description="Perform basic math operations")
- class Calculator:
-     async def execute(self, operation: str, a: float, b: float) -> dict:
-         operations = {
-             "add": a + b,
-             "subtract": a - b,
-             "multiply": a * b,
-             "divide": a / b if b != 0 else None
-         }
-
-         if operation not in operations:
-             raise ValueError(f"Unknown operation: {operation}")
-
-         result = operations[operation]
-         if result is None:
-             raise ValueError("Cannot divide by zero")
-
-         return {"operation": operation, "operands": [a, b], "result": result}
-
- async def main():
-     # 2. Initialize the system
-     await initialize()
-
-     # 3. Process LLM output containing tool calls
-     processor = ToolProcessor()
-     results = await processor.process('''
-     <tool name="calculator" args='{"operation": "multiply", "a": 15, "b": 23}'/>
-     ''')
-
-     # 4. Handle results
-     for result in results:
-         if result.error:
-             print(f"❌ Tool '{result.tool}' failed: {result.error}")
-         else:
-             print(f"✅ Tool '{result.tool}' result: {result.result}")
-
- asyncio.run(main())
77
- ```
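The `<tool .../>` tag in the example above is plain text until something extracts it. As an illustration of what the processor's parser stage does, here is a minimal, hypothetical extractor for just this XML-tag format — the real library uses pluggable parser plugins, and `extract_tool_calls` is not part of its API:

```python
import json
import re

# Matches <tool name="..." args='...'/> where args is a JSON object in
# single quotes, as in the Quick Start example above.
TOOL_TAG = re.compile(r"<tool\s+name=\"(?P<name>[^\"]+)\"\s+args='(?P<args>[^']*)'\s*/>")

def extract_tool_calls(text: str) -> list[dict]:
    """Return [{'tool': ..., 'arguments': {...}}, ...] for each tag found."""
    calls = []
    for match in TOOL_TAG.finditer(text):
        calls.append({
            "tool": match.group("name"),
            "arguments": json.loads(match.group("args")),
        })
    return calls

calls = extract_tool_calls(
    '<tool name="calculator" args=\'{"operation": "multiply", "a": 15, "b": 23}\'/>'
)
```

Because the arguments payload is delimited by single quotes, the JSON inside can use double quotes freely; the real parser is more forgiving than this sketch.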
78
-
79
- ## Key Features & Benefits
80
-
81
- - **🔄 Async-Native**: Built for `async/await` from the ground up for optimal performance
82
- - **🛡️ Production Ready**: Comprehensive error handling, timeouts, retries, and monitoring
83
- - **📦 Multiple Execution Strategies**: In-process for speed, subprocess for isolation
84
- - **🚀 High Performance**: Built-in caching, rate limiting, and concurrency control
85
- - **📊 Observability**: Structured logging, metrics collection, and request tracing
86
- - **🔗 MCP Integration**: Full Model Context Protocol support (STDIO, SSE, HTTP Streamable)
87
- - **📡 Streaming Support**: Real-time incremental results for long-running operations
88
- - **🔧 Extensible Architecture**: Plugin system for custom parsers and execution strategies
89
- - **🎯 Multiple Input Formats**: XML tags, OpenAI tool_calls, JSON, function_call formats
90
- - **⚡ Zero-Config Start**: Works out of the box with sensible defaults
91
-
92
- ## Core Architecture
93
-
94
- ### Installation & Setup
95
-
96
- ```bash
97
- # From source (recommended for development)
98
- git clone https://github.com/chrishayuk/chuk-tool-processor.git
99
- cd chuk-tool-processor
100
- pip install -e .
101
-
102
- # Or install from PyPI
103
- pip install chuk-tool-processor
104
- ```
105
-
106
- ### Environment Configuration
107
- ```bash
108
- # Optional: Registry provider (default: memory)
109
- export CHUK_TOOL_REGISTRY_PROVIDER=memory
110
-
111
- # Optional: Default timeout (default: 30.0)
112
- export CHUK_DEFAULT_TIMEOUT=30.0
113
-
114
- # Optional: Enable structured JSON logging
115
- export CHUK_STRUCTURED_LOGGING=true
116
-
117
- # MCP Integration (if using external MCP servers)
118
- export MCP_BEARER_TOKEN=your_bearer_token_here
119
- ```
120
- ### 1. Registry System
121
- - **Interface-driven**: `ToolRegistryInterface` protocol defines the contract
122
- - **Async-native**: All registry operations are async
123
- - **Namespace support**: Tools are organized into namespaces (default: "default")
124
- - **Metadata tracking**: Rich metadata with `ToolMetadata` model
125
- - **Provider pattern**: `ToolRegistryProvider` for singleton management
126
-
127
- ```python
128
- # Example registry usage
129
- registry = await ToolRegistryProvider.get_registry()
130
- await registry.register_tool(MyTool(), name="my_tool", namespace="custom")
131
- tool = await registry.get_tool("my_tool", "custom")
132
- ```
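For intuition, the registry contract above can be pictured as a namespaced dictionary behind a lock. This is a simplified stand-in, not the actual `ToolRegistryInterface` implementation:

```python
import asyncio

class InMemoryRegistry:
    """Hypothetical minimal registry: a dict of namespaces, each mapping
    tool names to tool objects, with async operations guarded by a lock."""

    def __init__(self):
        self._tools: dict[str, dict[str, object]] = {}
        self._lock = asyncio.Lock()

    async def register_tool(self, tool, *, name: str, namespace: str = "default"):
        async with self._lock:
            self._tools.setdefault(namespace, {})[name] = tool

    async def get_tool(self, name: str, namespace: str = "default"):
        # Fall back to the default namespace when the tool is not found.
        ns = self._tools.get(namespace, {})
        return ns.get(name) or self._tools.get("default", {}).get(name)

async def demo():
    registry = InMemoryRegistry()
    await registry.register_tool(object(), name="my_tool", namespace="custom")
    return await registry.get_tool("my_tool", "custom")

tool = asyncio.run(demo())
```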
133
-
134
- ### 2. Tool Development Patterns
135
-
136
- #### Simple Function-Based Tools
137
- ```python
138
- from chuk_tool_processor.registry.auto_register import register_fn_tool
139
-
140
- async def get_current_time(timezone: str = "UTC") -> str:
-     """Get the current time in the specified timezone."""
-     from datetime import datetime
-     import pytz
-
-     tz = pytz.timezone(timezone)
-     current_time = datetime.now(tz)
-     return current_time.strftime("%Y-%m-%d %H:%M:%S %Z")
-
- # Register the function as a tool
- await register_fn_tool(get_current_time, namespace="utilities")
151
- ```
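Function registration can derive everything it needs from the signature and docstring via `inspect`. A sketch of that idea — the real `register_fn_tool` may extract different fields, and `describe_fn` here is purely illustrative:

```python
import inspect

def describe_fn(fn) -> dict:
    """Turn a function's signature and docstring into tool-style metadata."""
    sig = inspect.signature(fn)
    params = {}
    for name, p in sig.parameters.items():
        params[name] = {
            # Record the annotation and default when present.
            "annotation": p.annotation if p.annotation is not inspect.Parameter.empty else None,
            "default": p.default if p.default is not inspect.Parameter.empty else None,
        }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": params,
    }

def get_current_time(timezone: str = "UTC") -> str:
    """Get the current time in the specified timezone."""
    ...

meta = describe_fn(get_current_time)
```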
152
-
153
- #### ValidatedTool (Declarative with Pydantic)
154
- ```python
155
- @register_tool(name="weather", namespace="api")
- class WeatherTool(ValidatedTool):
-     class Arguments(BaseModel):
-         location: str = Field(..., description="City name or coordinates")
-         units: str = Field("metric", description="Temperature units")
-
-     class Result(BaseModel):
-         location: str
-         temperature: float
-         conditions: str
-
-     async def _execute(self, location: str, units: str) -> Result:
-         # Implementation here
-         return self.Result(location=location, temperature=22.5, conditions="Sunny")
169
- ```
170
-
171
- #### StreamingTool (Real-time Results)
172
- ```python
173
- @register_tool(name="file_processor")
- class FileProcessorTool(StreamingTool):
-     class Arguments(BaseModel):
-         file_path: str
-         operation: str = "count_lines"
-
-     class Result(BaseModel):
-         line_number: int
-         content: str
-
-     async def _stream_execute(self, file_path: str, operation: str):
-         """Stream results as each line is processed."""
-         for i in range(1, 100):
-             await asyncio.sleep(0.01)  # Simulate processing
-             yield self.Result(line_number=i, content=f"Processed line {i}")
188
- ```
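A caller consumes a streaming tool with `async for`, receiving each result as soon as it is yielded rather than waiting for the whole run. A self-contained sketch of that consumption pattern (a toy generator stands in for the tool's `_stream_execute`):

```python
import asyncio

async def stream_execute(n: int):
    """Toy stand-in for a StreamingTool's _stream_execute: results are
    yielded one at a time instead of returned as a single value."""
    for i in range(1, n + 1):
        await asyncio.sleep(0)  # yield control, as real I/O would
        yield {"line_number": i, "content": f"Processed line {i}"}

async def consume():
    received = []
    async for chunk in stream_execute(3):
        received.append(chunk)  # each increment is available immediately
    return received

chunks = asyncio.run(consume())
```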
189
-
190
- ### 3. Processing LLM Responses
191
-
192
- The processor automatically detects and parses multiple input formats:
193
-
194
- ```python
195
- processor = ToolProcessor()
196
-
197
- # 1. XML Tool Tags (most common)
198
- xml_response = """
199
- <tool name="search" args='{"query": "Python programming", "limit": 5}'/>
200
- <tool name="get_current_time" args='{"timezone": "UTC"}'/>
201
- """
202
-
203
- # 2. OpenAI Chat Completions Format
204
- openai_response = {
-     "tool_calls": [
-         {
-             "id": "call_123",
-             "type": "function",
-             "function": {
-                 "name": "search",
-                 "arguments": '{"query": "Python programming", "limit": 5}'
-             }
-         }
-     ]
- }
216
-
217
- # 3. Direct ToolCall objects
218
- tool_calls = [
219
- {"tool": "search", "arguments": {"query": "Python programming", "limit": 5}},
220
- {"tool": "get_current_time", "arguments": {"timezone": "UTC"}}
221
- ]
222
-
223
- # Process any format
224
- results1 = await processor.process(xml_response)
225
- results2 = await processor.process(openai_response)
226
- results3 = await processor.process(tool_calls)
227
- ```
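Whatever the input format, the processor ends up with one canonical list of tool calls. A hypothetical normalizer shows how the dict-shaped inputs above reduce to that list (parsing the XML string form is a separate parser-plugin concern and is omitted here; `normalize` is not a library function):

```python
import json

def normalize(payload) -> list[dict]:
    """Reduce OpenAI-style dicts or direct ToolCall dicts to
    [{'tool': ..., 'arguments': {...}}]. Assumes well-formed input."""
    if isinstance(payload, list):  # direct ToolCall dicts
        return [{"tool": c["tool"], "arguments": c["arguments"]} for c in payload]
    if isinstance(payload, dict):  # OpenAI tool_calls format
        return [
            {"tool": tc["function"]["name"],
             # OpenAI encodes arguments as a JSON string, so decode it here.
             "arguments": json.loads(tc["function"]["arguments"])}
            for tc in payload.get("tool_calls", [])
        ]
    raise TypeError("XML strings are handled by a parser plugin, not shown here")

calls = normalize({
    "tool_calls": [{
        "id": "call_123", "type": "function",
        "function": {"name": "search",
                     "arguments": '{"query": "Python programming", "limit": 5}'},
    }]
})
```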
228
-
229
- ### 4. Execution Strategies
230
-
231
- #### InProcessStrategy (Default - Fast & Efficient)
232
- - **Concurrent execution**: Uses asyncio for parallelism within the same process
233
- - **Semaphore-based limiting**: Optional max_concurrency control
234
- - **True streaming support**: Direct access to `stream_execute` methods
235
- - **Enhanced tool resolution**: Namespace fallback logic with fuzzy matching
236
- - **Proper timeout handling**: Always applies concrete timeouts
237
-
238
- #### SubprocessStrategy (Isolation & Safety)
239
- - **Process isolation**: Each tool runs in separate OS process for safety
240
- - **Serialization support**: Handles complex objects and Pydantic models properly
241
- - **Worker pool management**: Concurrent futures with automatic cleanup
242
- - **Enhanced error handling**: Broken pool recovery and restart
243
- - **Timeout coordination**: Safety timeouts prevent worker hangs
244
-
245
- ```python
246
- # Configure execution strategy
247
- from chuk_tool_processor.execution.strategies.subprocess_strategy import SubprocessStrategy
248
-
249
- processor = ToolProcessor(
250
- strategy=SubprocessStrategy(
251
- registry=await get_default_registry(),
252
- max_workers=4,
253
- default_timeout=30.0
254
- )
255
- )
256
- ```
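The heart of the in-process approach — bounded concurrency plus a concrete per-call timeout — can be sketched in a few lines. This is an illustration of the mechanism, not the library's actual strategy code:

```python
import asyncio

async def run_all(coros, max_concurrency: int = 4, timeout: float = 5.0):
    """Run calls concurrently, bounded by a semaphore, with a concrete
    timeout applied to every call; failures come back as exceptions."""
    sem = asyncio.Semaphore(max_concurrency)

    async def guarded(coro):
        async with sem:  # at most max_concurrency calls in flight
            return await asyncio.wait_for(coro, timeout=timeout)

    # return_exceptions=True keeps one failure from cancelling the rest.
    return await asyncio.gather(*(guarded(c) for c in coros), return_exceptions=True)

async def tool(x):
    await asyncio.sleep(0)
    return x * 2

results = asyncio.run(run_all([tool(i) for i in range(5)], max_concurrency=2))
```

`asyncio.gather` preserves input order, so results line up with the submitted calls even though execution interleaves.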
257
-
258
- ### 5. Production Features & Wrappers
259
-
260
- #### Caching for Performance
261
- ```python
262
- from chuk_tool_processor.execution.wrappers.caching import cacheable
263
-
264
- @cacheable(ttl=600)  # Cache for 10 minutes
- @register_tool(name="expensive_api")
- class ExpensiveApiTool(ValidatedTool):
-     # Tool implementation
-     pass
269
-
270
- # Or configure at processor level
271
- processor = ToolProcessor(
272
- enable_caching=True,
273
- cache_ttl=300 # 5 minutes default
274
- )
275
- ```
276
-
277
- #### Rate Limiting
278
- ```python
279
- from chuk_tool_processor.execution.wrappers.rate_limiting import rate_limited
280
-
281
- @rate_limited(limit=20, period=60.0)  # 20 calls per minute
- @register_tool(name="api_tool")
- class ApiTool(ValidatedTool):
-     # Tool implementation
-     pass
286
-
287
- # Or processor-level configuration
288
- processor = ToolProcessor(
289
- enable_rate_limiting=True,
290
- global_rate_limit=100, # 100 requests per minute globally
291
- tool_rate_limits={
292
- "expensive_api": (10, 60), # 10 per minute for specific tool
293
- }
294
- )
295
- ```
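The sliding-window idea behind per-tool limits is simple: remember the timestamps of recent calls and refuse once `limit` of them fall inside `period`. A deterministic sketch (the clock is passed in explicitly; the library's wrapper manages time itself):

```python
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` calls per `period` seconds, sliding-window style."""

    def __init__(self, limit: int, period: float):
        self.limit = limit
        self.period = period
        self.calls: deque[float] = deque()

    def allow(self, now: float) -> bool:
        # Drop timestamps that have slid out of the window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(limit=2, period=60.0)
decisions = [limiter.allow(t) for t in (0.0, 1.0, 2.0, 61.0)]
```

The third call is refused because two calls already sit inside the 60-second window; by t=61 both have expired and the call is allowed again.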
296
-
297
- #### Automatic Retries
298
- ```python
299
- from chuk_tool_processor.execution.wrappers.retry import retryable
300
-
301
- @retryable(max_retries=3, base_delay=1.0)
- @register_tool(name="unreliable_api")
- class UnreliableApiTool(ValidatedTool):
-     # Tool implementation
-     pass
306
-
307
- # Processor-level retry configuration
308
- processor = ToolProcessor(
309
- enable_retries=True,
310
- max_retries=3
311
- )
312
- ```
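The retry contract is: re-invoke on failure with exponential backoff, and re-raise after the final attempt. A sketch of that loop (delays are scaled to near zero here so the example runs instantly; the real wrapper also distinguishes retryable from fatal errors):

```python
import asyncio

async def with_retries(fn, max_retries: int = 3, base_delay: float = 1.0):
    """Call fn(), retrying on any exception with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return await fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of attempts: surface the last error
            # base_delay * 2**attempt, scaled down so the demo is instant
            await asyncio.sleep(base_delay * (2 ** attempt) * 1e-9)

attempts = 0

async def flaky():
    global attempts
    attempts += 1
    if attempts < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = asyncio.run(with_retries(flaky, max_retries=3))
```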
313
-
314
- ### 6. MCP (Model Context Protocol) Integration
315
-
316
- Connect to external tool servers using multiple transport protocols:
317
-
318
- #### Quick MCP Setup with SSE (Server-Sent Events)
319
- ```python
320
- from chuk_tool_processor.mcp import setup_mcp_sse
321
-
322
- # Configure external MCP servers
323
- servers = [
324
- {
325
- "name": "weather-service",
326
- "url": "https://weather-mcp.example.com",
327
- "api_key": "your_weather_api_key"
328
- },
329
- {
330
- "name": "database-service",
331
- "url": "https://db-mcp.example.com",
332
- "api_key": "your_db_api_key"
333
- }
334
- ]
335
-
336
- # Initialize with full production configuration
337
- processor, stream_manager = await setup_mcp_sse(
338
- servers=servers,
339
- namespace="mcp", # Tools available as mcp.tool_name
340
- default_timeout=30.0,
341
- enable_caching=True,
342
- enable_retries=True
343
- )
344
-
345
- # Use external tools through MCP
346
- results = await processor.process('''
347
- <tool name="mcp.weather" args='{"location": "London"}'/>
348
- <tool name="mcp.database_query" args='{"sql": "SELECT COUNT(*) FROM users"}'/>
349
- ''')
350
- ```
351
-
352
- #### STDIO Transport (Process-based)
353
- ```python
354
- from chuk_tool_processor.mcp import setup_mcp_stdio
355
-
356
- # Create MCP config for local processes
357
- mcp_config = {
358
- "weather": {
359
- "command": "python",
360
- "args": ["-m", "weather_mcp_server"],
361
- "env": {"API_KEY": "your_weather_key"}
362
- }
363
- }
364
-
365
- processor, stream_manager = await setup_mcp_stdio(
366
- config_file="mcp_config.json",
367
- servers=["weather"],
368
- namespace="tools"
369
- )
370
- ```
371
-
372
- #### Supported Transports
373
- - **STDIO**: Process-based communication for local MCP servers
374
- - **SSE**: Server-Sent Events for cloud-based MCP services
375
- - **HTTP Streamable**: Modern HTTP-based transport (spec 2025-03-26)
376
-
377
- ### 7. Monitoring & Observability
378
-
379
- #### Structured Logging
380
- ```python
381
- from chuk_tool_processor.logging import setup_logging, get_logger, log_context_span
382
-
383
- # Setup structured logging
384
- await setup_logging(
385
- level=logging.INFO,
386
- structured=True, # JSON output for production
387
- log_file="tool_processor.log"
388
- )
389
-
390
- # Use contextual logging
391
- logger = get_logger("my_app")
392
-
393
- async def process_user_request(user_id: str, request: str):
-     async with log_context_span("user_request", {"user_id": user_id}):
-         logger.info("Processing user request", extra={
-             "request_length": len(request),
-             "user_id": user_id
-         })
-
-         results = await processor.process(request)
-
-         logger.info("Request processed successfully", extra={
-             "num_tools": len(results),
-             "success_rate": sum(1 for r in results if not r.error) / max(len(results), 1)
-         })
406
- ```
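Structured logging means each record is a machine-parseable object rather than a formatted string. A sketch of JSON output using only the standard library — the library's `setup_logging` handles this for you, and the field set here is illustrative:

```python
import io
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each record as one JSON object per line."""
    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # Fields passed via extra={...} become record attributes.
            "user_id": getattr(record, "user_id", None),
        }
        return json.dumps(payload)

buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("my_app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Processing user request", extra={"user_id": "u-42"})
entry = json.loads(buffer.getvalue())
```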
407
-
408
- #### Automatic Metrics Collection
409
- ```python
410
- # Metrics are automatically collected for:
411
- # - Tool execution success/failure rates
412
- # - Execution durations and performance
413
- # - Cache hit/miss rates and efficiency
414
- # - Parser performance and accuracy
415
- # - Registry operations and health
416
-
417
- # Access programmatic metrics
418
- from chuk_tool_processor.logging import metrics
419
-
420
- # Custom metrics
421
- await metrics.log_tool_execution(
422
- tool="custom_metric",
423
- success=True,
424
- duration=1.5,
425
- cached=False,
426
- attempts=1
427
- )
428
- ```
429
-
430
- ### 8. Error Handling & Best Practices
431
-
432
- #### Robust Error Handling
433
- ```python
434
- async def robust_tool_processing(llm_response: str):
-     """Example of production-ready error handling."""
-     processor = ToolProcessor(
-         default_timeout=30.0,
-         enable_retries=True,
-         max_retries=3
-     )
-
-     try:
-         results = await processor.process(llm_response, timeout=60.0)
-
-         successful_results = []
-         failed_results = []
-
-         for result in results:
-             if result.error:
-                 failed_results.append(result)
-                 logger.error(f"Tool {result.tool} failed: {result.error}", extra={
-                     "tool": result.tool,
-                     "duration": result.duration,
-                     "attempts": getattr(result, "attempts", 1)
-                 })
-             else:
-                 successful_results.append(result)
-                 logger.info(f"Tool {result.tool} succeeded", extra={
-                     "tool": result.tool,
-                     "duration": result.duration,
-                     "cached": getattr(result, "cached", False)
-                 })
-
-         return {
-             "successful": successful_results,
-             "failed": failed_results,
-             "success_rate": len(successful_results) / len(results) if results else 0
-         }
-
-     except Exception:
-         logger.exception("Failed to process LLM response")
-         raise
473
- ```
474
-
475
- #### Testing Your Tools
476
- ```python
477
- import pytest
478
- from chuk_tool_processor import ToolProcessor, initialize
479
-
480
- @pytest.mark.asyncio
- async def test_calculator_tool():
-     await initialize()
-     processor = ToolProcessor()
-
-     results = await processor.process(
-         '<tool name="calculator" args=\'{"operation": "add", "a": 5, "b": 3}\'/>'
-     )
-
-     assert len(results) == 1
-     result = results[0]
-     assert result.error is None
-     assert result.result["result"] == 8
493
- ```
494
-
495
- ## Advanced Configuration
496
-
497
- ### Production-Ready Setup
498
- ```python
499
- from chuk_tool_processor import ToolProcessor
500
- from chuk_tool_processor.execution.strategies.subprocess_strategy import SubprocessStrategy
501
-
502
- async def create_production_processor():
-     """Configure processor for high-throughput production use."""
-
-     processor = ToolProcessor(
-         # Execution settings
-         default_timeout=30.0,
-         max_concurrency=20,          # Allow 20 concurrent executions
-
-         # Use subprocess strategy for isolation
-         strategy=SubprocessStrategy(
-             registry=await get_default_registry(),
-             max_workers=8,           # 8 worker processes
-             default_timeout=30.0
-         ),
-
-         # Performance optimizations
-         enable_caching=True,
-         cache_ttl=900,               # 15-minute cache
-
-         # Rate limiting to prevent abuse
-         enable_rate_limiting=True,
-         global_rate_limit=500,       # 500 requests per minute globally
-         tool_rate_limits={
-             "expensive_api": (10, 60),   # 10 per minute
-             "file_processor": (5, 60),   # 5 per minute
-         },
-
-         # Reliability features
-         enable_retries=True,
-         max_retries=3,
-
-         # Input parsing
-         parser_plugins=["xml_tool", "openai_tool", "json_tool"]
-     )
-
-     await processor.initialize()
-     return processor
539
- ```
540
-
541
- ### Performance Optimization
542
- ```python
543
- # Concurrent batch processing
544
- async def process_batch(requests: list[str]):
-     """Process multiple LLM responses concurrently."""
-     processor = await create_production_processor()
-
-     tasks = [processor.process(request) for request in requests]
-     all_results = await asyncio.gather(*tasks, return_exceptions=True)
-
-     successful = []
-     failed = []
-
-     for i, result in enumerate(all_results):
-         if isinstance(result, Exception):
-             failed.append({"request_index": i, "error": str(result)})
-         else:
-             successful.append({"request_index": i, "results": result})
-
-     return {"successful": successful, "failed": failed}
-
- # Memory management for long-running applications
- async def maintenance_task(processor: ToolProcessor):
-     """Periodic maintenance for production deployments."""
-     while True:
-         await asyncio.sleep(3600)  # Every hour
-
-         # Clear old cache entries
-         if hasattr(processor.executor, 'cache'):
-             await processor.executor.cache.clear()
-             logger.info("Cache cleared for memory management")
572
- ```
573
-
574
- ## Key Design Patterns
575
-
576
- 1. **Async-First Design**: All core operations use async/await with proper timeout handling, graceful cancellation support, and comprehensive resource cleanup via context managers.
577
-
578
- 2. **Strategy Pattern**: Pluggable execution strategies (InProcess vs Subprocess), composable wrapper chains, and interface-driven design for maximum flexibility.
579
-
580
- 3. **Registry Pattern**: Centralized tool management with namespace isolation, rich metadata tracking, and lazy initialization for optimal resource usage.
581
-
582
- 4. **Plugin Architecture**: Discoverable parsers for different input formats, transport abstractions for MCP integration, and extensible validation systems.
583
-
584
- 5. **Producer-Consumer**: Queue-based streaming architecture for real-time results, with proper backpressure handling and timeout coordination.
585
-
586
- 6. **Decorator Pattern**: Composable execution wrappers (caching, retries, rate limiting) that can be stacked and configured independently.
587
-
588
- ## Configuration Reference
589
-
590
- ### Environment Variables
591
- | Variable | Default | Description |
592
- |----------|---------|-------------|
593
- | `CHUK_TOOL_REGISTRY_PROVIDER` | `memory` | Registry backend (memory, redis, etc.) |
594
- | `CHUK_DEFAULT_TIMEOUT` | `30.0` | Default tool execution timeout (seconds) |
595
- | `CHUK_LOG_LEVEL` | `INFO` | Logging level (DEBUG, INFO, WARNING, ERROR) |
596
- | `CHUK_STRUCTURED_LOGGING` | `true` | Enable JSON structured logging |
597
- | `CHUK_MAX_CONCURRENCY` | `10` | Default max concurrent executions |
598
- | `MCP_BEARER_TOKEN` | - | Bearer token for MCP SSE authentication |
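Reading these variables with their documented defaults can be sketched as follows; `load_config` is a hypothetical helper (the library reads its own configuration), and the environment mapping is passed explicitly so the example is deterministic:

```python
def load_config(env: dict[str, str]) -> dict:
    """Resolve the documented environment variables, applying defaults."""
    return {
        "registry_provider": env.get("CHUK_TOOL_REGISTRY_PROVIDER", "memory"),
        "default_timeout": float(env.get("CHUK_DEFAULT_TIMEOUT", "30.0")),
        "log_level": env.get("CHUK_LOG_LEVEL", "INFO"),
        "structured_logging": env.get("CHUK_STRUCTURED_LOGGING", "true").lower() == "true",
        "max_concurrency": int(env.get("CHUK_MAX_CONCURRENCY", "10")),
        "mcp_bearer_token": env.get("MCP_BEARER_TOKEN"),  # no default
    }

# Only one variable set; everything else falls back to its default.
config = load_config({"CHUK_DEFAULT_TIMEOUT": "10"})
```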
599
-
600
- ### ToolProcessor Options
601
- ```python
602
- processor = ToolProcessor(
603
- # Core execution
604
- default_timeout=30.0, # Default timeout per tool
605
- max_concurrency=10, # Max concurrent executions
606
-
607
- # Strategy selection
608
- strategy=InProcessStrategy(...), # Fast, shared memory
609
- # strategy=SubprocessStrategy(...), # Isolated, safer for untrusted code
610
-
611
- # Performance features
612
- enable_caching=True, # Result caching
613
- cache_ttl=300, # Cache TTL in seconds
614
- enable_rate_limiting=False, # Rate limiting
615
- enable_retries=True, # Automatic retries
616
- max_retries=3, # Max retry attempts
617
-
618
- # Input processing
619
- parser_plugins=["xml_tool", "openai_tool", "json_tool"]
620
- )
621
- ```
622
-
623
- ## Why Choose CHUK Tool Processor?
624
-
625
- ### Built for Production
626
- - **Battle-tested**: Comprehensive error handling, timeout management, and resource cleanup
627
- - **Scalable**: Support for high-throughput concurrent execution with configurable limits
628
- - **Observable**: Built-in structured logging, metrics collection, and request tracing
629
- - **Reliable**: Automatic retries, circuit breakers, and graceful degradation
630
-
631
- ### Developer Experience
632
- - **Zero-config start**: Works out of the box with sensible defaults
633
- - **Type-safe**: Full Pydantic integration for argument and result validation
634
- - **Multiple paradigms**: Support for functions, classes, and streaming tools
635
- - **Flexible inputs**: Handles XML tags, OpenAI format, JSON, and direct objects
636
-
637
- ### Enterprise Ready
638
- - **Process isolation**: Subprocess strategy for running untrusted code safely
639
- - **Rate limiting**: Global and per-tool rate limiting with sliding window algorithm
640
- - **Caching layer**: Intelligent caching with TTL and invalidation strategies
641
- - **MCP integration**: Connect to external tool servers using industry standards
642
-
643
- ### Performance Optimized
644
- - **Async-native**: Built from ground up for `async/await` with proper concurrency
645
- - **Streaming support**: Real-time incremental results for long-running operations
646
- - **Resource efficient**: Lazy initialization, connection pooling, and memory management
647
- - **Configurable strategies**: Choose between speed (in-process) and safety (subprocess)
648
-
649
- ## Getting Started
650
-
651
- ### 1. Installation
652
- ```bash
653
- # From source (recommended for development)
654
- git clone https://github.com/chrishayuk/chuk-tool-processor.git
655
- cd chuk-tool-processor
656
- pip install -e .
657
-
658
- # Or install from PyPI
659
- pip install chuk-tool-processor
660
- ```
661
-
662
- ### 2. Quick Example
663
- ```python
664
- import asyncio
- from chuk_tool_processor import ToolProcessor, register_tool, initialize
-
- @register_tool(name="hello")
- class HelloTool:
-     async def execute(self, name: str) -> str:
-         return f"Hello, {name}!"
-
- async def main():
-     await initialize()
-     processor = ToolProcessor()
-
-     results = await processor.process(
-         '<tool name="hello" args=\'{"name": "World"}\'/>'
-     )
-
-     print(results[0].result)  # Output: Hello, World!
-
- asyncio.run(main())
683
- ```
684
-
685
- ### 3. Next Steps
686
- - Review the [Architecture Guide](docs/architecture.md) for deeper understanding
687
- - Check out [Tool Development Guide](docs/tools.md) for advanced patterns
688
- - Explore [MCP Integration](docs/mcp.md) for external tool servers
689
- - See [Production Deployment](docs/deployment.md) for scaling considerations
690
-
691
- ## Contributing & Support
692
-
693
- - **GitHub**: [chrishayuk/chuk-tool-processor](https://github.com/chrishayuk/chuk-tool-processor)
694
- - **Issues**: [Report bugs and request features](https://github.com/chrishayuk/chuk-tool-processor/issues)
695
- - **Discussions**: [Community discussions](https://github.com/chrishayuk/chuk-tool-processor/discussions)
696
- - **License**: MIT - see [LICENSE](LICENSE) file for details
697
-
698
- Built with ❤️ by the CHUK AI team for the LLM tool integration community.