dacp 0.3.0__py3-none-any.whl → 0.3.2__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,805 @@
1
+ Metadata-Version: 2.4
2
+ Name: dacp
3
+ Version: 0.3.2
4
+ Summary: Declarative Agent Communication Protocol - A protocol for managing LLM/agent communications and tool function calls
5
+ Author-email: Andrew Whitehouse <andrew.whitehouse@example.com>
6
+ License: MIT
7
+ Project-URL: Homepage, https://github.com/andrewwhitehouse/dacp
8
+ Project-URL: Repository, https://github.com/andrewwhitehouse/dacp
9
+ Project-URL: Documentation, https://github.com/andrewwhitehouse/dacp#readme
10
+ Project-URL: Issues, https://github.com/andrewwhitehouse/dacp/issues
11
+ Keywords: llm,agent,communication,protocol,ai,ml
12
+ Classifier: Development Status :: 4 - Beta
13
+ Classifier: Intended Audience :: Developers
14
+ Classifier: License :: OSI Approved :: MIT License
15
+ Classifier: Operating System :: OS Independent
16
+ Classifier: Programming Language :: Python :: 3
17
+ Classifier: Programming Language :: Python :: 3.8
18
+ Classifier: Programming Language :: Python :: 3.9
19
+ Classifier: Programming Language :: Python :: 3.10
20
+ Classifier: Programming Language :: Python :: 3.11
21
+ Classifier: Programming Language :: Python :: 3.12
22
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
23
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
24
+ Requires-Python: >=3.8
25
+ Description-Content-Type: text/markdown
26
+ License-File: LICENSE
27
+ Requires-Dist: requests>=2.25.0
28
+ Requires-Dist: pyyaml>=5.4.0
29
+ Provides-Extra: openai
30
+ Requires-Dist: openai>=1.0.0; extra == "openai"
31
+ Provides-Extra: anthropic
32
+ Requires-Dist: anthropic>=0.18.0; extra == "anthropic"
33
+ Provides-Extra: local
34
+ Requires-Dist: requests>=2.25.0; extra == "local"
35
+ Provides-Extra: all
36
+ Requires-Dist: openai>=1.0.0; extra == "all"
37
+ Requires-Dist: anthropic>=0.18.0; extra == "all"
38
+ Provides-Extra: dev
39
+ Requires-Dist: pytest>=7.0.0; extra == "dev"
40
+ Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
41
+ Requires-Dist: black>=22.0.0; extra == "dev"
42
+ Requires-Dist: flake8>=4.0.0; extra == "dev"
43
+ Requires-Dist: mypy>=1.0.0; extra == "dev"
44
+ Requires-Dist: types-requests>=2.25.0; extra == "dev"
45
+ Requires-Dist: types-PyYAML>=6.0.0; extra == "dev"
46
+ Dynamic: license-file
47
+
48
+ # DACP - Declarative Agent Communication Protocol
49
+
50
+ A Python library for managing LLM/agent communications and tool function calls, following the Open Agent Specification (OAS).
51
+
52
+ ## Installation
53
+
54
+ ```bash
55
+ pip install dacp
56
+ ```
57
+
58
+ ## Quick Start
59
+
60
+ ```python
61
+ import dacp
62
+
63
+ # Create an orchestrator to manage agents
64
+ orchestrator = dacp.Orchestrator()
65
+
66
+ # Create and register an agent
67
+ class MyAgent:
68
+     def handle_message(self, message):
69
+         return {"response": f"Hello {message.get('name', 'World')}!"}
70
+
71
+ agent = MyAgent()
72
+ orchestrator.register_agent("my-agent", agent)
73
+
74
+ # Send a message to the agent
75
+ response = orchestrator.send_message("my-agent", {"name": "Alice"})
76
+ print(response) # {"response": "Hello Alice!"}
77
+
78
+ # Use built-in tools
79
+ result = dacp.file_writer("./output/greeting.txt", "Hello, World!")
80
+ print(result["message"]) # "Successfully wrote 13 characters to ./output/greeting.txt"
81
+
82
+ # Use intelligence providers (multiple LLM backends supported)
83
+ intelligence_config = {
84
+ "engine": "anthropic",
85
+ "model": "claude-3-haiku-20240307",
86
+ "api_key": "your-api-key" # or set ANTHROPIC_API_KEY env var
87
+ }
88
+ response = dacp.invoke_intelligence("What is the weather like today?", intelligence_config)
89
+
90
+ # Or use the legacy call_llm function for OpenAI
91
+ response = dacp.call_llm("What is the weather like today?")
92
+ ```
93
+
94
+ ## Features
95
+
96
+ - **Agent Orchestration**: Central management of multiple agents with message routing
97
+ - **Tool Registry**: Register and manage custom tools for LLM agents
98
+ - **Built-in Tools**: Includes a `file_writer` tool that automatically creates parent directories
99
+ - **LLM Integration**: Built-in support for OpenAI, Anthropic, Azure OpenAI, and local LLM providers (e.g. Ollama) via `invoke_intelligence`
100
+ - **Protocol Parsing**: Parse and validate agent responses
101
+ - **Tool Execution**: Safe execution of registered tools
102
+ - **Conversation History**: Track and query agent interactions
103
+ - **OAS Compliance**: Follows Open Agent Specification standards
104
+
105
+ ## API Reference
106
+
107
+ ### Orchestrator
108
+
109
+ - `Orchestrator()`: Create a new orchestrator instance
110
+ - `register_agent(agent_id: str, agent) -> None`: Register an agent
111
+ - `unregister_agent(agent_id: str) -> bool`: Remove an agent
112
+ - `send_message(agent_id: str, message: Dict) -> Dict`: Send message to specific agent
113
+ - `broadcast_message(message: Dict, exclude_agents: List[str] = None) -> Dict`: Send message to all agents
114
+ - `get_conversation_history(agent_id: str = None) -> List[Dict]`: Get conversation history
115
+ - `clear_history() -> None`: Clear conversation history
116
+ - `get_session_info() -> Dict`: Get current session information
117
+
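+ The snippet below is a short, illustrative sketch of the broadcast and history helpers listed above; the `EchoAgent` class is a stand-in, and only the signatures shown in this reference are assumed.
+
+ ```python
+ import dacp
+
+ class EchoAgent:
+     def handle_message(self, message):
+         return {"echo": message}
+
+ orchestrator = dacp.Orchestrator()
+ orchestrator.register_agent("echo-1", EchoAgent())
+ orchestrator.register_agent("echo-2", EchoAgent())
+
+ # Send the same message to every registered agent except "echo-2"
+ replies = orchestrator.broadcast_message({"task": "ping"}, exclude_agents=["echo-2"])
+ print(replies)  # a Dict of responses, per the signature above
+
+ # Inspect what has been exchanged so far, then reset the log
+ print(orchestrator.get_conversation_history("echo-1"))
+ print(orchestrator.get_session_info())
+ orchestrator.clear_history()
+ ```
+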
118
+ ### Tools
119
+
120
+ - `register_tool(tool_id: str, func)`: Register a new tool
121
+ - `run_tool(tool_id: str, args: Dict) -> dict`: Execute a registered tool
122
+ - `TOOL_REGISTRY`: Access the current tool registry
123
+ - `file_writer(path: str, content: str) -> dict`: Write content to file, creating directories automatically
124
+
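+ A brief sketch of calling the registry directly, assuming the built-in `file_writer` is pre-registered in `TOOL_REGISTRY` (as the automatic tool execution described below implies); registering a custom tool is shown under "Advanced Independent Usage".
+
+ ```python
+ import dacp
+
+ # Execute the built-in file_writer by its tool id with an args dict
+ result = dacp.run_tool("file_writer", {
+     "path": "./output/from_registry.txt",
+     "content": "Written via the tool registry",
+ })
+ print(result["success"])
+
+ # Inspect which tools are currently registered
+ print(dacp.TOOL_REGISTRY)
+ ```
+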
125
+ ### Intelligence (Multi-Provider LLM Support)
126
+
127
+ - `invoke_intelligence(prompt: str, config: dict) -> str`: Call any supported LLM provider
128
+ - `validate_config(config: dict) -> bool`: Validate intelligence configuration
129
+ - `get_supported_engines() -> list`: Get list of supported engines
130
+
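+ An illustrative sketch combining these helpers; the exact values returned by `get_supported_engines()` are an assumption based on the providers documented in "Intelligence Configuration" below.
+
+ ```python
+ import dacp
+
+ config = {
+     "engine": "openai",
+     "model": "gpt-4",
+     "api_key": "your-openai-key",  # or set OPENAI_API_KEY
+ }
+
+ print(dacp.get_supported_engines())  # expected to include engines such as "openai" and "anthropic"
+
+ if dacp.validate_config(config):
+     reply = dacp.invoke_intelligence("Summarise DACP in one sentence.", config)
+     print(reply)
+ ```
+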
131
+ ### LLM (Legacy)
132
+
133
+ - `call_llm(prompt: str, model: str = "gpt-4") -> str`: Call OpenAI (legacy function)
134
+
135
+ ### Logging
136
+
137
+ - `enable_info_logging(log_file: str = None) -> None`: Enable info-level logging with emoji format
138
+ - `enable_debug_logging(log_file: str = None) -> None`: Enable debug logging with detailed format
139
+ - `enable_quiet_logging() -> None`: Enable only error and critical logging
140
+ - `setup_dacp_logging(level, format_style, include_timestamp, log_file) -> None`: Custom logging setup
141
+ - `set_dacp_log_level(level: str) -> None`: Change log level dynamically
142
+ - `disable_dacp_logging() -> None`: Disable all DACP logging
143
+ - `enable_dacp_logging() -> None`: Re-enable DACP logging
144
+
145
+ ### Protocol
146
+
147
+ - `parse_agent_response(response: str | dict) -> dict`: Parse agent response
148
+ - `is_tool_request(msg: dict) -> bool`: Check if message is a tool request
149
+ - `get_tool_request(msg: dict) -> tuple[str, dict]`: Extract tool request details
150
+ - `wrap_tool_result(name: str, result: dict) -> dict`: Wrap tool result for agent
151
+ - `is_final_response(msg: dict) -> bool`: Check if message is a final response
152
+ - `get_final_response(msg: dict) -> dict`: Extract final response
153
+
154
+ ## Agent Development
155
+
156
+ ### Creating an Agent
157
+
158
+ Agents must implement a `handle_message` method:
159
+
160
+ ```python
161
+ import dacp
162
+
163
+ class GreetingAgent:
164
+     def handle_message(self, message):
165
+         name = message.get("name", "World")
166
+         task = message.get("task")
167
+
168
+         if task == "greet":
169
+             return {"response": f"Hello, {name}!"}
170
+         elif task == "farewell":
171
+             return {"response": f"Goodbye, {name}!"}
172
+         else:
173
+             return {"error": f"Unknown task: {task}"}
174
+
175
+ # Register the agent
176
+ orchestrator = dacp.Orchestrator()
177
+ agent = GreetingAgent()
178
+ orchestrator.register_agent("greeter", agent)
179
+
180
+ # Use the agent
181
+ response = orchestrator.send_message("greeter", {
182
+ "task": "greet",
183
+ "name": "Alice"
184
+ })
185
+ print(response) # {"response": "Hello, Alice!"}
186
+ ```
187
+
188
+ ### Agent Base Class
189
+
190
+ You can also inherit from the `Agent` base class:
191
+
192
+ ```python
193
+ import dacp
194
+
195
+ class MyAgent(dacp.Agent):
196
+     def handle_message(self, message):
197
+         return {"processed": message}
198
+ ```
199
+
200
+ ### Tool Requests from Agents
201
+
202
+ Agents can request tool execution by returning properly formatted responses:
203
+
204
+ ```python
205
+ class ToolUsingAgent:
206
+     def handle_message(self, message):
207
+         if message.get("task") == "write_file":
208
+             return {
209
+                 "tool_request": {
210
+                     "name": "file_writer",
211
+                     "args": {
212
+                         "path": "./output/agent_file.txt",
213
+                         "content": "Hello from agent!"
214
+                     }
215
+                 }
216
+             }
217
+         return {"response": "Task completed"}
218
+
219
+ # The orchestrator will automatically execute the tool and return results
220
+ orchestrator = dacp.Orchestrator()
221
+ agent = ToolUsingAgent()
222
+ orchestrator.register_agent("file-agent", agent)
223
+
224
+ response = orchestrator.send_message("file-agent", {"task": "write_file"})
225
+ # Tool will be executed automatically
226
+ ```
227
+
228
+ ## Intelligence Configuration
229
+
230
+ DACP supports multiple LLM providers through the `invoke_intelligence` function. Configure different providers using a configuration dictionary:
231
+
232
+ ### OpenAI
233
+
234
+ ```python
235
+ import dacp
236
+
237
+ openai_config = {
238
+ "engine": "openai",
239
+ "model": "gpt-4", # or "gpt-3.5-turbo", "gpt-4-turbo", etc.
240
+ "api_key": "your-openai-key", # or set OPENAI_API_KEY env var
241
+ "endpoint": "https://api.openai.com/v1", # optional, uses default
242
+ "temperature": 0.7, # optional, default 0.7
243
+ "max_tokens": 150 # optional, default 150
244
+ }
245
+
246
+ response = dacp.invoke_intelligence("Explain quantum computing", openai_config)
247
+ ```
248
+
249
+ ### Anthropic (Claude)
250
+
251
+ ```python
252
+ anthropic_config = {
253
+ "engine": "anthropic",
254
+ "model": "claude-3-haiku-20240307", # or other Claude models
255
+ "api_key": "your-anthropic-key", # or set ANTHROPIC_API_KEY env var
256
+ "endpoint": "https://api.anthropic.com", # optional, uses default
257
+ "temperature": 0.7,
258
+ "max_tokens": 150
259
+ }
260
+
261
+ response = dacp.invoke_intelligence("Write a poem about AI", anthropic_config)
262
+ ```
263
+
264
+ ### Azure OpenAI
265
+
266
+ ```python
267
+ azure_config = {
268
+ "engine": "azure",
269
+ "model": "gpt-4", # Your deployed model name
270
+ "api_key": "your-azure-key", # or set AZURE_OPENAI_API_KEY env var
271
+ "endpoint": "https://your-resource.openai.azure.com", # or set AZURE_OPENAI_ENDPOINT env var
272
+ "api_version": "2024-02-01" # optional, default provided
273
+ }
274
+
275
+ response = dacp.invoke_intelligence("Analyze this data", azure_config)
276
+ ```
277
+
278
+ ### Local LLMs (Ollama, etc.)
279
+
280
+ ```python
281
+ # For Ollama (default local setup)
282
+ local_config = {
283
+ "engine": "local",
284
+ "model": "llama2", # or any model available in Ollama
285
+ "endpoint": "http://localhost:11434/api/generate", # Ollama default
286
+ "temperature": 0.7,
287
+ "max_tokens": 150
288
+ }
289
+
290
+ # For custom local APIs
291
+ custom_local_config = {
292
+ "engine": "local",
293
+ "model": "custom-model",
294
+ "endpoint": "http://localhost:8080/generate", # Your API endpoint
295
+ "temperature": 0.7,
296
+ "max_tokens": 150
297
+ }
298
+
299
+ response = dacp.invoke_intelligence("Tell me a story", local_config)
300
+ ```
301
+
302
+ ### Configuration from OAS YAML
303
+
304
+ You can load configuration from OAS (Open Agent Specification) YAML files:
305
+
306
+ ```python
307
+ import yaml
308
+ import dacp
309
+
310
+ # Load config from YAML file
311
+ with open('agent_config.yaml', 'r') as f:
312
+     config = yaml.safe_load(f)
313
+
314
+ intelligence_config = config.get('intelligence', {})
315
+ response = dacp.invoke_intelligence("Hello, AI!", intelligence_config)
316
+ ```
317
+
318
+ ### Installation for Different Providers
319
+
320
+ Install optional dependencies for the providers you need:
321
+
322
+ ```bash
323
+ # For OpenAI
324
+ pip install dacp[openai]
325
+
326
+ # For Anthropic
327
+ pip install dacp[anthropic]
328
+
329
+ # For all providers
330
+ pip install dacp[all]
331
+
332
+ # For local providers (requests is already included in base install)
333
+ pip install dacp[local]
334
+ ```
335
+
336
+ ## Built-in Tools
337
+
338
+ ### file_writer
339
+
340
+ The `file_writer` tool automatically creates parent directories and writes content to files:
341
+
342
+ ```python
343
+ import dacp
344
+
345
+ # This will create the ./output/ directory if it doesn't exist
346
+ result = dacp.file_writer("./output/file.txt", "Hello, World!")
347
+
348
+ if result["success"]:
349
+ print(f"File written: {result['path']}")
350
+ print(f"Message: {result['message']}")
351
+ else:
352
+ print(f"Error: {result['error']}")
353
+ ```
354
+
355
+ **Features:**
356
+ - ✅ Automatically creates parent directories
357
+ - ✅ Handles Unicode content properly
358
+ - ✅ Returns detailed success/error information
359
+ - ✅ Safe error handling
360
+
361
+ ## Logging
362
+
363
+ DACP includes comprehensive logging to help you monitor agent operations, tool executions, and intelligence calls.
364
+
365
+ ### Quick Setup
366
+
367
+ ```python
368
+ import dacp
369
+
370
+ # Enable info-level logging with emoji format (recommended for production)
371
+ dacp.enable_info_logging()
372
+
373
+ # Enable debug logging for development (shows detailed information)
374
+ dacp.enable_debug_logging()
375
+
376
+ # Enable quiet logging (errors only)
377
+ dacp.enable_quiet_logging()
378
+ ```
379
+
380
+ ### Custom Configuration
381
+
382
+ ```python
383
+ # Full control over logging configuration
384
+ dacp.setup_dacp_logging(
385
+ level="INFO", # DEBUG, INFO, WARNING, ERROR, CRITICAL
386
+ format_style="emoji", # "simple", "detailed", "emoji"
387
+ include_timestamp=True, # Include timestamps
388
+ log_file="dacp.log" # Optional: also log to file
389
+ )
390
+
391
+ # Change log level dynamically
392
+ dacp.set_dacp_log_level("DEBUG")
393
+
394
+ # Disable/enable logging
395
+ dacp.disable_dacp_logging()
396
+ dacp.enable_dacp_logging()
397
+ ```
398
+
399
+ ### What Gets Logged
400
+
401
+ With logging enabled, you'll see:
402
+
403
+ - **🎭 Agent Registration**: When agents are registered/unregistered
404
+ - **📨 Message Routing**: Messages sent to agents and broadcast operations
405
+ - **🔧 Tool Execution**: Tool calls, execution time, and results
406
+ - **🧠 Intelligence Calls**: LLM provider calls, configuration, and performance
407
+ - **❌ Errors**: Detailed error information with context
408
+ - **📊 Performance**: Execution times for operations
409
+
410
+ ### Log Format Examples
411
+
412
+ **Emoji Format** (clean, production-friendly):
413
+ ```
414
+ 2025-07-02 09:54:58 - 🎭 Orchestrator initialized with session ID: session_1751414098
415
+ 2025-07-02 09:54:58 - ✅ Agent 'demo-agent' registered successfully (type: MyAgent)
416
+ 2025-07-02 09:54:58 - 📨 Sending message to agent 'demo-agent'
417
+ 2025-07-02 09:54:58 - 🔧 Agent 'demo-agent' requested tool execution
418
+ 2025-07-02 09:54:58 - 🛠️ Executing tool: 'file_writer' with args: {...}
419
+ 2025-07-02 09:54:58 - ✅ Tool 'file_writer' executed successfully in 0.001s
420
+ ```
421
+
422
+ **Detailed Format** (development/debugging):
423
+ ```
424
+ 2025-07-02 09:54:58 - dacp.orchestrator:89 - INFO - 📨 Sending message to agent 'demo-agent'
425
+ 2025-07-02 09:54:58 - dacp.orchestrator:90 - DEBUG - 📋 Message content: {'task': 'greet'}
426
+ 2025-07-02 09:54:58 - dacp.tools:26 - DEBUG - 🛠️ Executing tool 'file_writer' with args: {...}
427
+ ```
428
+
429
+ ### Example Usage
430
+
431
+ ```python
432
+ import dacp
433
+
434
+ # Enable logging
435
+ dacp.enable_info_logging()
436
+
437
+ # Create and use components - logging happens automatically
438
+ orchestrator = dacp.Orchestrator()
439
+ agent = MyAgent()
440
+ orchestrator.register_agent("my-agent", agent)
441
+
442
+ # This will log the message sending, tool execution, etc.
443
+ response = orchestrator.send_message("my-agent", {"task": "process"})
444
+ ```
445
+
446
+ ## Usage Patterns: Open Agent Spec vs Independent Client Usage
447
+
448
+ DACP supports two primary usage patterns: integration with Open Agent Specification (OAS) projects and independent client usage. Both provide full access to DACP's capabilities but with different integration approaches.
449
+
450
+ ### Open Agent Specification (OAS) Integration
451
+
452
+ **For OAS developers:** DACP integrates seamlessly with generated agents through YAML configuration and automatic setup.
453
+
454
+ #### YAML Configuration Pattern
455
+
456
+ ```yaml
457
+ # agent_config.yaml (Open Agent Specification)
458
+ apiVersion: "v1"
459
+ kind: "Agent"
460
+ metadata:
461
+ name: "data-analysis-agent"
462
+ type: "smart_analysis"
463
+
464
+ # DACP automatically configures logging
465
+ logging:
466
+ enabled: true
467
+ level: "INFO"
468
+ format_style: "emoji"
469
+ log_file: "./logs/agent.log"
470
+ env_overrides:
471
+ level: "DACP_LOG_LEVEL"
472
+
473
+ # Multi-provider intelligence configuration
474
+ intelligence:
475
+ engine: "anthropic"
476
+ model: "claude-3-haiku-20240618"
477
+   # API key from environment: ANTHROPIC_API_KEY
478
+
479
+ # Define agent capabilities
480
+ capabilities:
481
+   - name: "analyze_data"
482
+     description: "Analyze datasets and generate insights"
483
+   - name: "generate_report"
484
+     description: "Generate analysis reports"
485
+ ```
486
+
487
+ #### Generated Agent Code (OAS Pattern)
488
+
489
+ ```python
490
+ # Generated by OAS with DACP integration
491
+ import dacp
492
+ import yaml
493
+
494
+ class DataAnalysisAgent(dacp.Agent):
495
+ def __init__(self, config_path="agent_config.yaml"):
496
+ # DACP auto-configures logging from YAML
497
+ with open(config_path, 'r') as f:
498
+ self.config = yaml.safe_load(f)
499
+
500
+ # Automatic logging setup
501
+ self.setup_logging()
502
+
503
+ # Load intelligence configuration
504
+ self.intelligence_config = self.config.get('intelligence', {})
505
+
506
+ def setup_logging(self):
507
+ """Auto-configure DACP logging from YAML config."""
508
+ logging_config = self.config.get('logging', {})
509
+ if logging_config.get('enabled', False):
510
+ dacp.setup_dacp_logging(
511
+ level=logging_config.get('level', 'INFO'),
512
+ format_style=logging_config.get('format_style', 'emoji'),
513
+ log_file=logging_config.get('log_file')
514
+ )
515
+
516
+ def handle_message(self, message):
517
+ """Handle capabilities defined in YAML."""
518
+ task = message.get("task")
519
+
520
+ if task == "analyze_data":
521
+ return self.analyze_data(message)
522
+ elif task == "generate_report":
523
+ return self.generate_report(message)
524
+ else:
525
+ return {"error": f"Unknown task: {task}"}
526
+
527
+ def analyze_data(self, message):
528
+ """Analyze data using configured intelligence provider."""
529
+ data = message.get("data", "No data provided")
530
+
531
+ try:
532
+ result = dacp.invoke_intelligence(
533
+ f"Analyze this data and provide insights: {data}",
534
+ self.intelligence_config
535
+ )
536
+ return {"response": result}
537
+ except Exception as e:
538
+ return {"error": f"Analysis failed: {e}"}
539
+
540
+ def generate_report(self, message):
541
+ """Generate reports using DACP's file_writer tool."""
542
+ subject = message.get("subject", "report")
543
+ data = message.get("data", "No data")
544
+
545
+ return {
546
+ "tool_request": {
547
+ "name": "file_writer",
548
+ "args": {
549
+ "path": f"./reports/{subject}.txt",
550
+ "content": f"# Analysis Report: {subject}\n\nData: {data}\n"
551
+ }
552
+ }
553
+ }
554
+
555
+ # Auto-generated main function
556
+ def main():
557
+ # Zero-configuration setup
558
+ orchestrator = dacp.Orchestrator()
559
+ agent = DataAnalysisAgent()
560
+ orchestrator.register_agent("data-analysis-agent", agent)
561
+
562
+ print("🚀 OAS Agent running with DACP integration!")
563
+ # Agent ready for messages via orchestrator
564
+
565
+ if __name__ == "__main__":
566
+ main()
567
+ ```
568
+
569
+ #### OAS Benefits
570
+
571
+ - ✅ **Zero Configuration**: Logging and intelligence work out of the box
572
+ - ✅ **YAML-Driven**: All configuration in standard OAS YAML format
573
+ - ✅ **Auto-Generated**: Complete agents generated from specifications
574
+ - ✅ **Environment Overrides**: Runtime configuration via environment variables
575
+ - ✅ **Standardized**: Consistent interface across all OAS agents
576
+
577
+ ### Independent Client Usage
578
+
579
+ **For independent developers:** Use DACP directly as a flexible agent router and orchestration platform.
580
+
581
+ #### Direct Integration Pattern
582
+
583
+ ```python
584
+ import dacp
585
+ import os
+ import time
586
+
587
+ class MyCustomAgent(dacp.Agent):
588
+ """Independent client's custom agent."""
589
+
590
+ def __init__(self):
591
+ # Manual setup - full control
592
+ self.setup_intelligence()
593
+ self.setup_logging()
594
+
595
+ def setup_intelligence(self):
596
+ """Configure intelligence providers manually."""
597
+ self.intelligence_configs = {
598
+ "research": {
599
+ "engine": "openai",
600
+ "model": "gpt-4",
601
+ "api_key": os.getenv("OPENAI_API_KEY")
602
+ },
603
+ "analysis": {
604
+ "engine": "anthropic",
605
+ "model": "claude-3-sonnet-20240229",
606
+ "api_key": os.getenv("ANTHROPIC_API_KEY")
607
+ },
608
+ "local": {
609
+ "engine": "local",
610
+ "model": "llama2",
611
+ "endpoint": "http://localhost:11434/api/generate"
612
+ }
613
+ }
614
+
615
+ def setup_logging(self):
616
+ """Configure logging manually."""
617
+ dacp.enable_info_logging(log_file="./logs/custom_agent.log")
618
+
619
+ def handle_message(self, message):
620
+ """Custom business logic."""
621
+ task = message.get("task")
622
+
623
+ if task == "research_topic":
624
+ return self.research_with_multiple_llms(message)
625
+ elif task == "process_data":
626
+ return self.multi_step_processing(message)
627
+ elif task == "custom_workflow":
628
+ return self.handle_custom_workflow(message)
629
+ else:
630
+ return {"error": f"Unknown task: {task}"}
631
+
632
+ def research_with_multiple_llms(self, message):
633
+ """Use multiple LLM providers for comprehensive research."""
634
+ topic = message.get("topic", "AI Research")
635
+
636
+ # Use different LLMs for different aspects
637
+ research_prompt = f"Research the topic: {topic}"
638
+ analysis_prompt = f"Analyze research findings for: {topic}"
639
+
640
+ try:
641
+ # Research with GPT-4
642
+ research = dacp.invoke_intelligence(
643
+ research_prompt,
644
+ self.intelligence_configs["research"]
645
+ )
646
+
647
+ # Analysis with Claude
648
+ analysis = dacp.invoke_intelligence(
649
+ f"Analyze: {research}",
650
+ self.intelligence_configs["analysis"]
651
+ )
652
+
653
+ return {
654
+ "research": research,
655
+ "analysis": analysis,
656
+ "status": "completed"
657
+ }
658
+ except Exception as e:
659
+ return {"error": f"Research failed: {e}"}
660
+
661
+ def multi_step_processing(self, message):
662
+ """Multi-step workflow with tool chaining."""
663
+ data = message.get("data", "sample data")
664
+
665
+ # Step 1: Process and save data
666
+ return {
667
+ "tool_request": {
668
+ "name": "file_writer",
669
+ "args": {
670
+ "path": "./processing/input_data.txt",
671
+ "content": f"Raw data: {data}\nProcessed at: {dacp.time.time()}"
672
+                 }
673
+             }
674
+         }
675
+         # In real implementation, would continue workflow in subsequent messages
676
+
677
+ # Independent client setup
678
+ def main():
679
+     # Manual orchestrator setup
680
+     orchestrator = dacp.Orchestrator()
681
+
682
+     # Register multiple custom agents
683
+     research_agent = MyCustomAgent()
684
+     data_agent = MyCustomAgent()
685
+     workflow_agent = MyCustomAgent()
686
+
687
+     orchestrator.register_agent("researcher", research_agent)
688
+     orchestrator.register_agent("processor", data_agent)
689
+     orchestrator.register_agent("workflow", workflow_agent)
690
+
691
+     # Direct control over routing
692
+     print("🚀 Independent client agents running!")
693
+
694
+     # Example: Route complex task across multiple agents
695
+     research_result = orchestrator.send_message("researcher", {
696
+         "task": "research_topic",
697
+         "topic": "Multi-Agent Systems"
698
+     })
699
+
700
+     processing_result = orchestrator.send_message("processor", {
701
+         "task": "process_data",
702
+         "data": research_result
703
+     })
704
+
705
+     # Broadcast updates to all agents
706
+     orchestrator.broadcast_message({
707
+         "task": "status_update",
708
+         "message": "Workflow completed"
709
+     })
710
+
711
+ if __name__ == "__main__":
712
+     main()
713
+ ```
714
+
715
+ #### Advanced Independent Usage
716
+
717
+ ```python
718
+ # Register custom tools for specialized business logic
719
+ def custom_data_processor(args):
720
+ """Client's proprietary data processing tool."""
721
+ data = args.get("data", [])
722
+ algorithm = args.get("algorithm", "default")
723
+
724
+ # Custom processing logic
725
+ processed = [item * 2 for item in data if isinstance(item, (int, float))]
726
+
727
+ return {
728
+ "success": True,
729
+ "processed_data": processed,
730
+ "algorithm_used": algorithm,
731
+ "count": len(processed)
732
+ }
733
+
734
+ # Register with DACP
735
+ dacp.register_tool("custom_processor", custom_data_processor)
736
+
737
+ # Use in agents
738
+ class SpecializedAgent(dacp.Agent):
739
+     def handle_message(self, message):
740
+         if message.get("task") == "process_with_custom_tool":
741
+             return {
742
+                 "tool_request": {
743
+                     "name": "custom_processor",
744
+                     "args": {
745
+                         "data": message.get("data", []),
746
+                         "algorithm": "proprietary_v2"
747
+                     }
748
+                 }
749
+             }
750
+ ```
751
+
752
+ #### Independent Client Benefits
753
+
754
+ - ✅ **Full Control**: Manual configuration of all components
755
+ - ✅ **Flexible Architecture**: Design your own agent interactions
756
+ - ✅ **Custom Tools**: Register proprietary business logic tools
757
+ - ✅ **Multi-Provider**: Use different LLMs for different tasks
758
+ - ✅ **Direct API Access**: Call DACP functions directly when needed
759
+ - ✅ **Complex Workflows**: Build sophisticated multi-agent orchestrations
760
+
761
+ ### Choosing Your Pattern
762
+
763
+ | Feature | OAS Integration | Independent Client |
764
+ |---------|----------------|-------------------|
765
+ | **Setup Complexity** | Minimal (auto-generated) | Manual (full control) |
766
+ | **Configuration** | YAML-driven | Programmatic |
767
+ | **Agent Generation** | Automatic from spec | Manual implementation |
768
+ | **Customization** | Template-based | Unlimited flexibility |
769
+ | **Best For** | Rapid prototyping, standard agents | Complex workflows, custom logic |
770
+ | **Learning Curve** | Low | Medium |
771
+
772
+ ### Getting Started
773
+
774
+ **For OAS Integration:**
775
+ 1. Add DACP logging section to your YAML spec
776
+ 2. Generate agents with DACP base class
777
+ 3. Agents work with zero additional configuration
778
+
779
+ **For Independent Usage:**
780
+ 1. `pip install dacp`
781
+ 2. Create agents inheriting from `dacp.Agent`
782
+ 3. Register with `dacp.Orchestrator()`
783
+ 4. Build your custom workflows
784
+
785
+ Both patterns provide full access to DACP's capabilities: multi-provider LLM routing, tool execution, comprehensive logging, conversation history, and multi-agent orchestration.
786
+
787
+ ## Development
788
+
789
+ ```bash
790
+ # Install development dependencies
791
+ pip install -e .[dev]
792
+
793
+ # Run tests
794
+ pytest
795
+
796
+ # Format code
797
+ black .
798
+
799
+ # Lint code
800
+ flake8
801
+ ```
802
+
803
+ ## License
804
+
805
+ MIT License