dao-ai 0.0.35__py3-none-any.whl → 0.1.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (58)
  1. dao_ai/__init__.py +29 -0
  2. dao_ai/cli.py +195 -30
  3. dao_ai/config.py +797 -242
  4. dao_ai/genie/__init__.py +38 -0
  5. dao_ai/genie/cache/__init__.py +43 -0
  6. dao_ai/genie/cache/base.py +72 -0
  7. dao_ai/genie/cache/core.py +75 -0
  8. dao_ai/genie/cache/lru.py +329 -0
  9. dao_ai/genie/cache/semantic.py +919 -0
  10. dao_ai/genie/core.py +35 -0
  11. dao_ai/graph.py +27 -253
  12. dao_ai/hooks/__init__.py +9 -6
  13. dao_ai/hooks/core.py +22 -190
  14. dao_ai/memory/__init__.py +10 -0
  15. dao_ai/memory/core.py +23 -5
  16. dao_ai/memory/databricks.py +389 -0
  17. dao_ai/memory/postgres.py +2 -2
  18. dao_ai/messages.py +6 -4
  19. dao_ai/middleware/__init__.py +125 -0
  20. dao_ai/middleware/assertions.py +778 -0
  21. dao_ai/middleware/base.py +50 -0
  22. dao_ai/middleware/core.py +61 -0
  23. dao_ai/middleware/guardrails.py +415 -0
  24. dao_ai/middleware/human_in_the_loop.py +228 -0
  25. dao_ai/middleware/message_validation.py +554 -0
  26. dao_ai/middleware/summarization.py +192 -0
  27. dao_ai/models.py +1177 -108
  28. dao_ai/nodes.py +118 -161
  29. dao_ai/optimization.py +664 -0
  30. dao_ai/orchestration/__init__.py +52 -0
  31. dao_ai/orchestration/core.py +287 -0
  32. dao_ai/orchestration/supervisor.py +264 -0
  33. dao_ai/orchestration/swarm.py +226 -0
  34. dao_ai/prompts.py +126 -29
  35. dao_ai/providers/databricks.py +126 -381
  36. dao_ai/state.py +139 -21
  37. dao_ai/tools/__init__.py +11 -5
  38. dao_ai/tools/core.py +57 -4
  39. dao_ai/tools/email.py +280 -0
  40. dao_ai/tools/genie.py +108 -35
  41. dao_ai/tools/mcp.py +4 -3
  42. dao_ai/tools/memory.py +50 -0
  43. dao_ai/tools/python.py +4 -12
  44. dao_ai/tools/search.py +14 -0
  45. dao_ai/tools/slack.py +1 -1
  46. dao_ai/tools/unity_catalog.py +8 -6
  47. dao_ai/tools/vector_search.py +16 -9
  48. dao_ai/utils.py +72 -8
  49. dao_ai-0.1.0.dist-info/METADATA +1878 -0
  50. dao_ai-0.1.0.dist-info/RECORD +62 -0
  51. dao_ai/chat_models.py +0 -204
  52. dao_ai/guardrails.py +0 -112
  53. dao_ai/tools/human_in_the_loop.py +0 -100
  54. dao_ai-0.0.35.dist-info/METADATA +0 -1169
  55. dao_ai-0.0.35.dist-info/RECORD +0 -41
  56. {dao_ai-0.0.35.dist-info → dao_ai-0.1.0.dist-info}/WHEEL +0 -0
  57. {dao_ai-0.0.35.dist-info → dao_ai-0.1.0.dist-info}/entry_points.txt +0 -0
  58. {dao_ai-0.0.35.dist-info → dao_ai-0.1.0.dist-info}/licenses/LICENSE +0 -0
@@ -1,1169 +0,0 @@
Metadata-Version: 2.4
Name: dao-ai
Version: 0.0.35
Summary: DAO AI: A modular, multi-agent orchestration framework for complex AI workflows. Supports agent handoff, tool integration, and dynamic configuration via YAML.
Project-URL: Homepage, https://github.com/natefleming/dao-ai
Project-URL: Documentation, https://natefleming.github.io/dao-ai
Project-URL: Repository, https://github.com/natefleming/dao-ai
Project-URL: Issues, https://github.com/natefleming/dao-ai/issues
Project-URL: Changelog, https://github.com/natefleming/dao-ai/blob/main/CHANGELOG.md
Author-email: Nate Fleming <nate.fleming@databricks.com>, Nate Fleming <nate.fleming@gmail.com>
Maintainer-email: Nate Fleming <nate.fleming@databricks.com>
License: MIT
License-File: LICENSE
Keywords: agents,ai,databricks,langchain,langgraph,llm,multi-agent,orchestration,vector-search,workflow
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Distributed Computing
Requires-Python: >=3.11
Requires-Dist: databricks-agents>=1.8.2
Requires-Dist: databricks-langchain>=0.11.0
Requires-Dist: databricks-mcp>=0.3.0
Requires-Dist: databricks-sdk[openai]>=0.67.0
Requires-Dist: ddgs>=9.9.3
Requires-Dist: flashrank>=0.2.8
Requires-Dist: gepa>=0.0.17
Requires-Dist: grandalf>=0.8
Requires-Dist: langchain-mcp-adapters>=0.2.1
Requires-Dist: langchain-tavily>=0.2.11
Requires-Dist: langchain>=1.1.3
Requires-Dist: langgraph-checkpoint-postgres>=3.0.2
Requires-Dist: langgraph-supervisor>=0.0.31
Requires-Dist: langgraph-swarm>=0.1.0
Requires-Dist: langgraph>=1.0.4
Requires-Dist: langmem>=0.0.30
Requires-Dist: loguru>=0.7.3
Requires-Dist: mcp>=1.23.3
Requires-Dist: mlflow>=3.7.0
Requires-Dist: nest-asyncio>=1.6.0
Requires-Dist: openevals>=0.0.19
Requires-Dist: openpyxl>=3.1.5
Requires-Dist: psycopg[binary,pool]>=3.3.2
Requires-Dist: pydantic>=2.12.0
Requires-Dist: python-dotenv>=1.1.0
Requires-Dist: pyyaml>=6.0.2
Requires-Dist: rich>=14.0.0
Requires-Dist: scipy<=1.15
Requires-Dist: sqlparse>=0.5.3
Requires-Dist: tomli>=2.3.0
Requires-Dist: unitycatalog-ai[databricks]>=0.3.2
Provides-Extra: databricks
Requires-Dist: databricks-connect>=15.0.0; extra == 'databricks'
Requires-Dist: databricks-vectorsearch>=0.63; extra == 'databricks'
Requires-Dist: pyspark>=3.5.0; extra == 'databricks'
Provides-Extra: dev
Requires-Dist: mypy>=1.0.0; extra == 'dev'
Requires-Dist: pre-commit>=3.0.0; extra == 'dev'
Requires-Dist: pytest>=8.3.5; extra == 'dev'
Requires-Dist: ruff>=0.11.11; extra == 'dev'
Provides-Extra: docs
Requires-Dist: mkdocs-material>=9.0.0; extra == 'docs'
Requires-Dist: mkdocs>=1.5.0; extra == 'docs'
Requires-Dist: mkdocstrings[python]>=0.24.0; extra == 'docs'
Provides-Extra: test
Requires-Dist: pytest-cov>=4.0.0; extra == 'test'
Requires-Dist: pytest-mock>=3.10.0; extra == 'test'
Requires-Dist: pytest>=8.3.5; extra == 'test'
Description-Content-Type: text/markdown

# Declarative Agent Orchestration (DAO) Framework

A modular, multi-agent orchestration framework for building sophisticated AI workflows on Databricks. While this implementation provides a complete retail AI reference architecture, the framework is designed to support any domain or use case requiring agent coordination, tool integration, and dynamic configuration.

## Overview

This project implements a LangGraph-based multi-agent orchestration framework that can:

- **Route queries** to specialized agents based on content and context
- **Coordinate multiple AI agents** working together on complex tasks
- **Integrate diverse tools** including databases, APIs, vector search, and external services
- **Support flexible orchestration patterns** (supervisor, swarm, and custom workflows)
- **Provide dynamic configuration** through YAML-based agent and tool definitions
- **Enable domain-specific specialization** while maintaining a unified interface

**Retail Reference Implementation**: This repository includes a complete retail AI system demonstrating:

- Product inventory management and search
- Customer recommendation engines
- Order tracking and management
- Product classification and information retrieval

The system uses Databricks Vector Search, Unity Catalog, and LLMs to provide accurate, context-aware responses across any domain.

## Key Features

- **Multi-Modal Interface**: CLI commands and Python API for development and deployment
- **Agent Lifecycle Management**: Create, deploy, and monitor agents programmatically
- **Vector Search Integration**: Built-in support for Databricks Vector Search with retrieval tools
- **Configuration-Driven**: YAML-based configuration with validation and IDE support
- **MLflow Integration**: Automatic model packaging, versioning, and deployment
- **Monitoring & Evaluation**: Built-in assessment and monitoring capabilities

## Architecture

### Overview

The multi-agent AI system is built as a component-based agent architecture that routes queries to specialized agents based on the nature of the request. This approach enables domain-specific handling while maintaining a unified interface that can be adapted to any industry or use case.

![View Architecture Diagram](./docs/hardware_store/retail_supervisor.png)

### Core Components

#### Configuration Components

All components are defined in the provided [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml) using a modular approach:

- **Schemas**: Define database and catalog structures
- **Resources**: Configure infrastructure components like LLMs, vector stores, catalogs, warehouses, and databases
- **Tools**: Define functions that agents can use to perform tasks (dictionary-based with keys as tool names)
- **Agents**: Specialized AI assistants configured for specific domains (dictionary-based with keys as agent names)
- **Guardrails**: Quality control mechanisms to ensure accurate responses
- **Retrievers**: Configuration for vector search and retrieval
- **Evaluation**: Configuration for model evaluation and testing
- **Datasets**: Configuration for training and evaluation datasets
- **App**: Overall application configuration including orchestration and logging

#### Message Processing Flow

The system uses a LangGraph-based workflow with the following key nodes (sketched below):

- **Message Validation**: Validates incoming requests (`message_validation_node`)
- **Agent Routing**: Routes messages to appropriate specialized agents using supervisor or swarm patterns
- **Agent Execution**: Processes requests using specialized agents with their configured tools
- **Response Generation**: Returns structured responses to users

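A minimal sketch of that shape in plain LangGraph (node names here are illustrative; the real graph is built in `dao_ai/graph.py` and is considerably richer):

```python
from typing import TypedDict

from langgraph.graph import END, START, StateGraph


class State(TypedDict):
    messages: list


def message_validation_node(state: State) -> State:
    # Validate the incoming request before routing.
    return state


def agent_node(state: State) -> State:
    # A specialized agent processes the request with its configured tools.
    return state


builder = StateGraph(State)
builder.add_node("message_validation", message_validation_node)
builder.add_node("agent", agent_node)
builder.add_edge(START, "message_validation")
builder.add_edge("message_validation", "agent")
builder.add_edge("agent", END)
graph = builder.compile()
```
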
#### Specialized Agents

Agents are dynamically configured from the provided [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml) file and can include:

- Custom LLM models and parameters
- Specific sets of available tools (Python functions, Unity Catalog functions, factory tools, MCP services)
- Domain-specific system prompts
- Guardrails for response quality
- Handoff prompts for agent coordination

### Technical Implementation

The system is implemented using:

- **LangGraph**: For workflow orchestration and state management
- **LangChain**: For LLM interactions and tool integration
- **MLflow**: For model tracking and deployment
- **Databricks**: LLM APIs, Vector Search, Unity Catalog, and Model Serving
- **Pydantic**: For configuration validation and schema management

## Prerequisites

- Python 3.11+
- Databricks workspace with access to:
  - Unity Catalog
  - Model Serving
  - Vector Search
  - Genie (optional)
- Databricks CLI configured with appropriate permissions
- Databricks model endpoints for LLMs and embeddings

## Setup

1. Clone this repository
2. Install dependencies:

   ```bash
   # Create and activate a Python virtual environment
   uv venv
   source .venv/bin/activate  # On Windows: .venv\Scripts\activate

   # Install dependencies using the Makefile
   make install
   ```

3. Configure the Databricks CLI with appropriate workspace access

## Quick Start

### Option 1: Using the Python API (Recommended for Development)

```python
from dao_ai.config import AppConfig

# Load your configuration
config = AppConfig.from_file("config/hardware_store/supervisor_postgres.yaml")

# Create vector search infrastructure
for name, vector_store in config.resources.vector_stores.items():
    vector_store.create()

# Create and deploy your agent
config.create_agent()
config.deploy_agent()
```

### Option 2: Using CLI Commands

```bash
# Validate configuration
dao-ai validate -c config/hardware_store/supervisor_postgres.yaml

# Generate workflow diagram
dao-ai graph -o architecture.png

# Deploy using Databricks Asset Bundles
dao-ai bundle --deploy --run

# Deploy using Databricks Asset Bundles with a specific configuration
dao-ai -vvvv bundle --deploy --run --target dev --config config/hardware_store/supervisor_postgres.yaml --profile DEFAULT
```

See the [Python API](#python-api) section for detailed programmatic usage, or [Command Line Interface](#command-line-interface) for CLI usage.

## Command Line Interface

The framework includes a comprehensive CLI for managing, validating, and visualizing your multi-agent system.

### Schema Generation
Generate a JSON schema for configuration validation and IDE autocompletion:
```bash
dao-ai schema > schema.json
```

### Configuration Validation
Validate your configuration file for syntax and semantic correctness:
```bash
# Validate the default configuration (config/hardware_store/supervisor_postgres.yaml)
dao-ai validate

# Validate a specific configuration file
dao-ai validate -c config/production.yaml
```

### Graph Visualization
Generate visual representations of your agent workflow:
```bash
# Generate an architecture diagram (using the default config/hardware_store/supervisor_postgres.yaml)
dao-ai graph -o architecture.png

# Generate a diagram from a specific config
dao-ai graph -o workflow.png -c config/custom.yaml
```

### Deployment
Deploy your multi-agent system using Databricks Asset Bundles:
```bash
# Deploy the system
dao-ai bundle --deploy

# Run the deployed system
dao-ai bundle --run

# Use a specific Databricks profile
dao-ai bundle --deploy --run --profile my-profile
```

### Verbose Output
Add `-v`, `-vv`, `-vvv`, or `-vvvv` for increasingly verbose logging (from ERROR up through WARNING, INFO, DEBUG, and TRACE).

## Python API

The framework provides a comprehensive Python API for programmatic access to all functionality. The main entry point is the `AppConfig` class, which provides methods for agent lifecycle management, vector search operations, and configuration utilities.

### Quick Start

```python
from dao_ai.config import AppConfig

# Load configuration from file
config = AppConfig.from_file(path="config/hardware_store/supervisor_postgres.yaml")
```

### Agent Lifecycle Management

#### Creating Agents
Package and register your multi-agent system as an MLflow model:

```python
# Create an agent with default settings
config.create_agent()

# Create an agent with additional requirements and code paths
config.create_agent(
    additional_pip_reqs=["custom-package==1.0.0"],
    additional_code_paths=["./custom_modules"]
)
```

#### Deploying Agents
Deploy your registered agent to a Databricks serving endpoint:

```python
# Deploy the agent to a serving endpoint
config.deploy_agent()
```

The deployment process:
1. Retrieves the latest model version from MLflow
2. Creates or updates a Databricks model serving endpoint
3. Configures scaling, environment variables, and permissions
4. Sets up proper authentication and resource access

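Once the deploy completes, one way to confirm the endpoint is live is the Databricks SDK (a sketch; substitute the `endpoint_name` from your own configuration):

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
# Endpoint name assumed for illustration; use the endpoint_name from model_config.yaml.
endpoint = w.serving_endpoints.get("retail_ai_agent")
print(endpoint.state)
```
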
### Vector Search Operations

#### Creating Vector Search Infrastructure
Create vector search endpoints and indexes from your configuration:

```python
# Access vector stores from configuration
vector_stores = config.resources.vector_stores

# Create all vector stores
for name, vector_store in vector_stores.items():
    print(f"Creating vector store: {name}")
    vector_store.create()
```

#### Using Vector Search
Query your vector search indexes for retrieval-augmented generation:

```python
# Method 1: Direct index access
from dao_ai.config import RetrieverModel

question = "What products do you have in stock?"

for name, retriever in config.retrievers.items():
    # Get the vector search index
    index = retriever.vector_store.as_index()

    # Perform similarity search
    results = index.similarity_search(
        query_text=question,
        columns=retriever.columns,
        **retriever.search_parameters.model_dump()
    )

    chunks = results.get('result', {}).get('data_array', [])
    print(f"Found {len(chunks)} relevant results")
```

```python
# Method 2: LangChain integration
from databricks_langchain import DatabricksVectorSearch

for name, retriever in config.retrievers.items():
    # Create a LangChain vector store
    vector_search = DatabricksVectorSearch(
        endpoint=retriever.vector_store.endpoint.name,
        index_name=retriever.vector_store.index.full_name,
        columns=retriever.columns,
    )

    # Search using the LangChain interface
    documents = vector_search.similarity_search(
        query=question,
        **retriever.search_parameters.model_dump()
    )

    print(f"Found {len(documents)} documents")
```

### Configuration Utilities

The `AppConfig` class provides helper methods to find and filter configuration components.

#### Finding Agents
```python
# Get all agents
all_agents = config.find_agents()

# Find agents matching specific criteria
def has_vector_search(agent):
    return any("vector_search" in tool.name.lower() for tool in agent.tools)

vector_agents = config.find_agents(predicate=has_vector_search)
```

#### Finding Tools and Guardrails
```python
# Get all tools
all_tools = config.find_tools()

# Get all guardrails
all_guardrails = config.find_guardrails()

# Find tools by type
def is_python_tool(tool):
    return tool.function.type == "python"

python_tools = config.find_tools(predicate=is_python_tool)
```

### Visualization

Generate and save workflow diagrams:

```python
# Display the graph in a notebook
config.display_graph()

# Save an architecture diagram
config.save_image("docs/my_architecture.png")
```

### Complete Example

See [`notebooks/05_agent_as_code_driver.py`](notebooks/05_agent_as_code_driver.py) for a complete example:

```python
from pathlib import Path

from dao_ai.config import AppConfig

# Load configuration
config = AppConfig.from_file("config/hardware_store/supervisor_postgres.yaml")

# Visualize the workflow
config.display_graph()

# Save an architecture diagram
path = Path("docs") / f"{config.app.name}_architecture.png"
config.save_image(path)

# Create and deploy the agent
config.create_agent()
config.deploy_agent()
```

For vector search examples, see [`notebooks/02_provision_vector_search.py`](notebooks/02_provision_vector_search.py).

### Available Notebooks

The framework includes several example notebooks demonstrating different aspects:

| Notebook | Description | Key Methods Demonstrated |
|----------|-------------|-------------------------|
| [`01_ingest_and_transform.py`](notebooks/01_ingest_and_transform.py) | Data ingestion and transformation | Dataset creation and SQL execution |
| [`02_provision_vector_search.py`](notebooks/02_provision_vector_search.py) | Vector search setup and usage | `vector_store.create()`, `as_index()` |
| [`03_generate_evaluation_data.py`](notebooks/03_generate_evaluation_data.py) | Generate synthetic evaluation datasets | Data generation and evaluation setup |
| [`04_unity_catalog_tools.py`](notebooks/04_unity_catalog_tools.py) | Unity Catalog function deployment | SQL function creation and testing |
| [`05_agent_as_code_driver.py`](notebooks/05_agent_as_code_driver.py) | **Complete agent lifecycle** | `create_agent()`, `deploy_agent()` |
| [`06_run_evaluation.py`](notebooks/06_run_evaluation.py) | Agent evaluation and testing | Evaluation framework usage |
| [`08_run_examples.py`](notebooks/08_run_examples.py) | End-to-end example queries | Agent interaction and testing |

## Configuration

Configuration is managed through [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml). This file defines all components of the retail AI system, including resources, tools, agents, and the overall application setup.

**Note**: The configuration file location is configurable throughout the framework. You can specify a different configuration file using the `-c` or `--config` flag in CLI commands, or by setting the appropriate parameters in the Python API.

### Basic Structure of [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml)

The [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml) is organized into several top-level keys:

```yaml
schemas:
  # ... schema definitions ...

resources:
  # ... resource definitions (LLMs, vector stores, etc.) ...

tools:
  # ... tool definitions ...

agents:
  # ... agent definitions ...

app:
  # ... application configuration ...

# Other sections like guardrails, retrievers, evaluation, datasets
```

### Loading and Using Configuration

The configuration can be loaded and used programmatically through the `AppConfig` class:

```python
from dao_ai.config import AppConfig

# Load configuration from file
config = AppConfig.from_file("config/hardware_store/supervisor_postgres.yaml")

# Access different configuration sections
print(f"Available agents: {list(config.agents.keys())}")
print(f"Available tools: {list(config.tools.keys())}")
print(f"Vector stores: {list(config.resources.vector_stores.keys())}")

# Use configuration methods for deployment
config.create_agent()  # Package as an MLflow model
config.deploy_agent()  # Deploy to a serving endpoint
```

The configuration supports both CLI and programmatic workflows, with the Python API providing more flexibility for complex deployment scenarios.

### Developing and Configuring Tools

Tools are functions that agents can use to interact with external systems or perform specific tasks. They are defined under the `tools` key in [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml). Each tool has a unique name and contains a `function` specification.

There are four types of tools supported:

#### 1. Python Tools (`type: python`)
These tools map directly to Python functions. The `name` field should correspond to a function that can be imported and called directly.

**Configuration Example:**
```yaml
tools:
  my_python_tool:
    name: my_python_tool
    function:
      type: python
      name: dao_ai.tools.my_function_name
      schema: *retail_schema  # Optional schema definition
```
**Development:**
Implement the Python function in the specified module (e.g., under `dao_ai/tools/`). The function will be imported and called directly when the tool is invoked.

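A minimal sketch of such a function, using the hypothetical name from the config above (the exact signature is up to you):

```python
# dao_ai/tools/my_module.py (sketch)
def my_function_name(query: str) -> str:
    """Answer a lookup request on behalf of an agent."""
    # ... call a database, API, or other backend here ...
    return f"results for {query!r}"
```
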
#### 2. Factory Tools (`type: factory`)
Factory tools use factory functions that return initialized LangChain `BaseTool` instances. This is useful for tools requiring complex initialization or configuration.

**Configuration Example:**
```yaml
tools:
  vector_search_tool:
    name: vector_search
    function:
      type: factory
      name: dao_ai.tools.create_vector_search_tool
      args:
        retriever: *products_retriever
        name: product_vector_search_tool
        description: "Search for products using vector search"
```
**Development:**
Implement the factory function (e.g., `create_vector_search_tool`) in the tools package. This function should accept the specified `args` and return a fully configured `BaseTool` object.

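A sketch of that factory contract (the `retriever.search()` interface here is an assumption for illustration, not the framework's actual retriever API):

```python
from langchain_core.tools import BaseTool, StructuredTool


def create_vector_search_tool(retriever, name: str, description: str) -> BaseTool:
    """Build a search tool bound to a configured retriever."""

    def _search(query: str) -> str:
        # Assumed retriever interface; adapt to the configured object.
        return str(retriever.search(query))

    return StructuredTool.from_function(_search, name=name, description=description)
```
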
#### 3. Unity Catalog Tools (`type: unity_catalog`)
These tools represent SQL functions registered in Databricks Unity Catalog. They reference functions by their Unity Catalog schema and name.

**Configuration Example:**
```yaml
tools:
  find_product_by_sku_uc_tool:
    name: find_product_by_sku_uc
    function:
      type: unity_catalog
      name: find_product_by_sku
      schema: *retail_schema
```
**Development:**
Create the corresponding SQL function in your Databricks Unity Catalog using the specified schema and function name. The tool will automatically generate the appropriate function signature and documentation.

### Developing Unity Catalog Functions

Unity Catalog functions provide the backbone for data access in the multi-agent system. The framework automatically deploys these functions from SQL DDL files during system initialization.

#### Function Deployment Configuration

Unity Catalog functions are defined in the `unity_catalog_functions` section of [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml). Each function specification includes:

- **Function metadata**: Schema and name for Unity Catalog registration
- **DDL file path**: Location of the SQL file containing the function definition
- **Test parameters**: Optional test data for function validation

**Configuration Example from [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml):**
```yaml
unity_catalog_functions:
  - function:
      schema: *retail_schema     # Reference to schema configuration
      name: find_product_by_sku  # Function name in Unity Catalog
    ddl: ../functions/retail/find_product_by_sku.sql  # Path to SQL DDL file
    test:                        # Optional test configuration
      parameters:
        sku: ["00176279"]        # Test parameters for validation
  - function:
      schema: *retail_schema
      name: find_store_inventory_by_sku
    ddl: ../functions/retail/find_store_inventory_by_sku.sql
    test:
      parameters:
        store: "35048"           # Multiple parameters for complex functions
        sku: ["00176279"]
```

#### SQL Function Structure

SQL files should follow this structure for proper deployment:

**File Structure Example** (`functions/retail/find_product_by_sku.sql`):
```sql
-- Function to find product details by SKU
CREATE OR REPLACE FUNCTION {catalog_name}.{schema_name}.find_product_by_sku(
  sku ARRAY<STRING> COMMENT 'One or more unique identifiers to retrieve. SKU values are between 5-8 alphanumeric characters'
)
RETURNS TABLE(
  product_id BIGINT COMMENT 'Unique identifier for each product in the catalog',
  sku STRING COMMENT 'Stock Keeping Unit - unique internal product identifier code',
  upc STRING COMMENT 'Universal Product Code - standardized barcode number for product identification',
  brand_name STRING COMMENT 'Name of the manufacturer or brand that produces the product',
  product_name STRING COMMENT 'Display name of the product as shown to customers',
  -- ... additional columns
)
READS SQL DATA
COMMENT 'Retrieves detailed information about a specific product by its SKU. This function is designed for product information retrieval in retail applications.'
RETURN
  SELECT
    product_id,
    sku,
    upc,
    brand_name,
    product_name
    -- ... additional columns
  FROM products
  WHERE ARRAY_CONTAINS(find_product_by_sku.sku, products.sku);
```

**Key Requirements:**
- Use `{catalog_name}.{schema_name}` placeholders - these are automatically replaced during deployment
- Include comprehensive `COMMENT` attributes for all parameters and return columns
- Provide a clear function-level comment describing purpose and use cases
- Use `READS SQL DATA` for functions that query data
- Follow consistent naming conventions for parameters and return values

#### Test Configuration

The optional `test` section allows you to define test parameters for automatic function validation:

```yaml
test:
  parameters:
    sku: ["00176279"]  # Single parameter
    # OR, for multi-parameter functions:
    store: "35048"
    sku: ["00176279"]
```

**Test Benefits:**
- **Validation**: Ensures functions work correctly after deployment
- **Documentation**: Provides example usage for other developers
- **CI/CD Integration**: Enables automated testing in deployment pipelines

**Note**: Test parameters should use realistic data from your datasets to ensure meaningful validation. The framework will execute these tests automatically during deployment to verify function correctness.

#### 4. MCP (Model Context Protocol) Tools (`type: mcp`)
MCP tools allow interaction with external services that implement the Model Context Protocol, supporting both HTTP and stdio transports.

**Configuration Example (Direct URL):**
```yaml
tools:
  weather_tool_mcp:
    name: weather
    function:
      type: mcp
      name: weather
      transport: streamable_http
      url: http://localhost:8000/mcp
```

**Configuration Example (Unity Catalog Connection):**
MCP tools can also use Unity Catalog Connections for secure, governed access with on-behalf-of-user capabilities. The connection provides OAuth authentication, while the URL specifies the endpoint:
```yaml
resources:
  connections:
    github_connection: &github_connection
      name: github_u2m_connection  # UC Connection name

tools:
  github_mcp:
    name: github_mcp
    function:
      type: mcp
      name: github_mcp
      transport: streamable_http
      url: https://workspace.databricks.com/api/2.0/mcp/external/github_u2m_connection  # MCP endpoint URL
      connection: *github_connection  # UC Connection provides OAuth authentication
```

**Development:**
- **For direct URL connections**: Ensure the MCP service is running and accessible at the specified URL or command. Provide OAuth credentials (client_id, client_secret) or a PAT for authentication.
- **For UC Connections**: The URL is required to specify the endpoint. The connection provides OAuth authentication via the workspace client. Ensure the connection is configured in Unity Catalog with the appropriate MCP scopes (`mcp.genie`, `mcp.functions`, `mcp.vectorsearch`, `mcp.external`).
- The framework handles the MCP protocol communication automatically, including session management and authentication.

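To smoke-test an MCP endpoint outside the framework, a minimal sketch with `langchain-mcp-adapters` (a declared dependency; the server name and URL below are placeholders matching the direct-URL example above):

```python
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient


async def main() -> None:
    client = MultiServerMCPClient(
        {
            "weather": {
                "transport": "streamable_http",
                "url": "http://localhost:8000/mcp",
            }
        }
    )
    tools = await client.get_tools()  # discover the tools the server exposes
    print([tool.name for tool in tools])


asyncio.run(main())
```
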
### Configuring New Agents

Agents are specialized AI assistants defined under the `agents` key in [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml). Each agent has a unique name and specific configuration.

**Configuration Example:**
```yaml
agents:
  general:
    name: general
    description: "General retail store assistant for home improvement and hardware store inquiries"
    model: *tool_calling_llm
    tools:
      - *find_product_details_by_description_tool
      - *vector_search_tool
    guardrails: []
    checkpointer: *checkpointer
    prompt: |
      You are a helpful retail store assistant for a home improvement and hardware store.
      You have access to search tools to find current information about products, pricing, and store policies.

      #### CRITICAL INSTRUCTION: ALWAYS USE SEARCH TOOLS FIRST
      Before answering ANY question:
      - ALWAYS use your available search tools to find the most current and accurate information
      - Search for specific details about store policies, product availability, pricing, and services
```

**Agent Configuration Fields:**
- `name`: Unique identifier for the agent
- `description`: Human-readable description of the agent's purpose
- `model`: Reference to an LLM model (using YAML anchors like `*tool_calling_llm`)
- `tools`: Array of tool references (using YAML anchors like `*search_tool`)
- `guardrails`: Array of guardrail references (can be empty `[]`)
- `checkpointer`: Reference to a checkpointer for conversation state (optional)
- `prompt`: System prompt that defines the agent's behavior and instructions

**To configure a new agent:**
1. Add a new entry under the `agents` section with a unique key
2. Define the required fields: `name`, `description`, `model`, `tools`, and `prompt`
3. Optionally configure `guardrails` and `checkpointer`
4. Reference the agent in the application configuration using YAML anchors

### Assigning Tools to Agents

Tools are assigned to agents by referencing them with YAML anchors in the agent's `tools` array. Each tool must be defined in the `tools` section with an anchor (using `&tool_name`), then referenced in the agent configuration (using `*tool_name`).

**Example:**
```yaml
tools:
  search_tool: &search_tool
    name: search
    function:
      type: factory
      name: dao_ai.tools.search_tool
      args: {}

  genie_tool: &genie_tool
    name: genie
    function:
      type: factory
      name: dao_ai.tools.create_genie_tool
      args:
        genie_room: *retail_genie_room

agents:
  general:
    name: general
    description: "General retail store assistant"
    model: *tool_calling_llm
    tools:
      - *search_tool  # Reference to the search_tool anchor
      - *genie_tool   # Reference to the genie_tool anchor
    # ... other agent configuration
```

This YAML anchor system allows for:
- **Reusability**: The same tool can be assigned to multiple agents
- **Maintainability**: Tool configuration is centralized in one place
- **Consistency**: Tools are guaranteed to have the same configuration across agents

### Assigning Agents to the Application and Configuring Orchestration

Agents are made available to the application by listing their YAML anchors (defined in the `agents:` section) within the `agents` array under the `app` section. The `app.orchestration` section defines how these agents interact.

**Orchestration Configuration:**

The `orchestration` block within the `app` section defines the interaction pattern. The reference configuration uses a **Supervisor** pattern:

```yaml
app:
  # ...
  agents:
    - *orders
    - *diy
    - *product
    # ... other agents referenced by their anchors
    - *general
  orchestration:
    supervisor:
      model: *tool_calling_llm  # LLM for the supervisor agent
      default_agent: *general   # Agent to handle tasks if no specific agent is chosen
    # swarm:                    # Example of how a swarm might be configured if activated
    #   model: *tool_calling_llm
  # ...
```

**Orchestration Patterns:**

1. **Supervisor Pattern (Currently Active)**
   * The configuration defines a `supervisor` block under `app.orchestration`.
   * `model`: Specifies the LLM (e.g., `*tool_calling_llm`) that the supervisor itself uses for its decision-making and routing logic.
   * `default_agent`: Specifies an agent (e.g., `*general`) that the supervisor delegates to if it cannot determine a more specialized agent from the `app.agents` list or if the query is general.
   * The supervisor receives the initial user query, decides which specialized agent (from the `app.agents` list) is best suited to handle it, and passes the query to that agent. If no specific agent is a clear match, or if the query is general, it falls back to the `default_agent`.

2. **Swarm Pattern (Commented Out)**
   * The configuration includes a commented-out `swarm` block. If activated, this implies a different interaction model.
   * In a swarm, agents collaborate more directly or work in parallel on different aspects of a query. The `model` under `swarm` defines the LLM used by the agents within the swarm or by a coordinating element of the swarm.
   * The specific behavior of the swarm pattern is defined in `dao_ai/graph.py` and `dao_ai/nodes.py`.

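Under the hood the supervisor pattern builds on `langgraph-supervisor` (a declared dependency). Roughly, and purely as a sketch in which the agent and model objects are assumed to exist (the framework constructs them from `model_config.yaml`):

```python
from langgraph_supervisor import create_supervisor

# orders_agent, product_agent, general_agent, and tool_calling_llm are
# assumed names for objects the framework builds from configuration.
workflow = create_supervisor(
    agents=[orders_agent, product_agent, general_agent],
    model=tool_calling_llm,
    prompt="Route each request to the agent best suited to handle it.",
)
graph = workflow.compile()
```
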
## Integration Hooks

The DAO framework provides several hook integration points that allow you to customize agent behavior and the application lifecycle. These hooks let you inject custom logic at key points in the system without modifying the core framework code.

### Hook Types

#### Agent-Level Hooks

**Agent hooks** are defined at the individual agent level and allow you to customize a specific agent's behavior:

##### `create_agent_hook`
Used to provide a completely custom agent implementation. When this is provided, all other configuration is ignored. See: **Hook Implementation**

```yaml
agents:
  custom_agent:
    name: custom_agent
    description: "Agent with custom initialization"
    model: *tool_calling_llm
    create_agent_hook: my_package.hooks.initialize_custom_agent
    # ... other agent configuration
```

##### `pre_agent_hook`
Executed before an agent processes a message. Ideal for request preprocessing, logging, validation, or context injection. See: **Hook Implementation**

```yaml
agents:
  logging_agent:
    name: logging_agent
    description: "Agent with request logging"
    model: *tool_calling_llm
    pre_agent_hook: my_package.hooks.log_incoming_request
    # ... other agent configuration
```

##### `post_agent_hook`
Executed after an agent completes processing a message. Perfect for response post-processing, logging, metrics collection, or cleanup operations. See: **Hook Implementation**

```yaml
agents:
  analytics_agent:
    name: analytics_agent
    description: "Agent with response analytics"
    model: *tool_calling_llm
    post_agent_hook: my_package.hooks.collect_response_metrics
    # ... other agent configuration
```

#### Application-Level Hooks

**Application hooks** operate at the global application level and affect the entire system lifecycle:

##### `initialization_hooks`
Executed when the application starts up via `AppConfig.from_file()`. Use these for system initialization, resource setup, database connections, or external service configuration. See: **Hook Implementation**

```yaml
app:
  name: my_retail_app
  initialization_hooks:
    - my_package.hooks.setup_database_connections
    - my_package.hooks.initialize_external_apis
    - my_package.hooks.setup_monitoring
  # ... other app configuration
```

##### `shutdown_hooks`
Executed when the application shuts down (registered via `atexit`). Essential for cleanup operations, closing connections, saving state, or performing final logging. See: **Hook Implementation**

```yaml
app:
  name: my_retail_app
  shutdown_hooks:
    - my_package.hooks.cleanup_database_connections
    - my_package.hooks.save_session_data
    - my_package.hooks.send_shutdown_metrics
  # ... other app configuration
```

##### `message_hooks`
Executed for every message processed by the system. Useful for global logging, authentication, rate limiting, or message transformation. See: **Hook Implementation**

```yaml
app:
  name: my_retail_app
  message_hooks:
    - my_package.hooks.authenticate_user
    - my_package.hooks.apply_rate_limiting
    - my_package.hooks.transform_message_format
  # ... other app configuration
```

### Hook Implementation

Hooks can be implemented as any of the following:

1. **Python Functions**: Direct function references
   ```yaml
   initialization_hooks: my_package.hooks.setup_function
   ```

2. **Factory Functions**: Functions that return configured tools or handlers
   ```yaml
   initialization_hooks:
     type: factory
     name: my_package.hooks.create_setup_handler
     args:
       config_param: "value"
   ```

3. **Hook Lists**: Multiple hooks executed in sequence
   ```yaml
   initialization_hooks:
     - my_package.hooks.setup_database
     - my_package.hooks.setup_cache
     - my_package.hooks.setup_monitoring
   ```

### Hook Function Signatures

Each hook type expects a specific function signature:

#### Agent Hooks
```python
# create_agent_hook
def initialize_custom_agent(state: dict, config: dict) -> dict:
    """Custom agent initialization logic"""
    pass

# pre_agent_hook
def log_incoming_request(state: dict, config: dict) -> dict:
    """Pre-process an incoming request"""
    return state

# post_agent_hook
def collect_response_metrics(state: dict, config: dict) -> dict:
    """Post-process an agent response"""
    return state
```

#### Application Hooks
```python
# initialization_hooks
def setup_database_connections(config: AppConfig) -> None:
    """Initialize database connections"""
    pass

# shutdown_hooks
def cleanup_resources(config: AppConfig) -> None:
    """Clean up resources on shutdown"""
    pass

# message_hooks
def authenticate_user(state: dict, config: dict) -> dict:
    """Authenticate and authorize user requests"""
    return state
```

### Use Cases and Examples

#### Common Hook Patterns

**Logging and Monitoring**:
```python
import time

from loguru import logger


def log_agent_performance(state: dict, config: AppConfig) -> dict:
    """Log agent response times and quality metrics"""
    start_time = state.get('start_time')
    if start_time:
        duration = time.time() - start_time
        logger.info(f"Agent response time: {duration:.2f}s")
    return state
```

**Authentication and Authorization**:
```python
def validate_user_permissions(state: dict, config: AppConfig) -> dict:
    """Validate that the user has permission for the requested operation"""
    user_id = state.get('user_id')
    if not has_permission(user_id, state.get('operation')):
        raise UnauthorizedError("Insufficient permissions")
    return state
```

**Resource Management**:
```python
def initialize_vector_search(config: AppConfig) -> None:
    """Initialize vector search connections during startup"""
    for vs_name, vs_config in config.resources.vector_stores.items():
        vs_config.create()
        logger.info(f"Vector store {vs_name} initialized")
```

**State Enrichment**:
```python
def enrich_user_context(state: dict, config: AppConfig) -> dict:
    """Add the user's profile and preferences to state"""
    user_id = state.get('user_id')
    if user_id:
        user_profile = get_user_profile(user_id)
        state['user_context'] = user_profile
    return state
```

### Best Practices

1. **Keep hooks lightweight**: Avoid heavy computations that could slow down message processing
2. **Handle errors gracefully**: Use try/except blocks to prevent hook failures from breaking the system
3. **Use appropriate hook types**: Choose agent-level vs. application-level hooks based on scope
4. **Maintain state immutability**: Return modified copies of state rather than mutating in place
5. **Log hook execution**: Include logging for troubleshooting and monitoring
6. **Test hooks independently**: Write unit tests for hook functions separate from the main application

A defensive wrapper that applies several of these practices is sketched below.

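A minimal sketch (our own helper, not part of the dao-ai API) that makes any message hook non-fatal, logged, and non-mutating:

```python
import copy
import functools

from loguru import logger


def safe_hook(fn):
    """Wrap a hook so failures are logged instead of breaking the pipeline."""

    @functools.wraps(fn)
    def wrapper(state: dict, config: dict) -> dict:
        try:
            # Work on a copy so the original state is never mutated in place.
            result = fn(copy.deepcopy(state), config)
            logger.debug(f"hook {fn.__name__} succeeded")
            return result if result is not None else state
        except Exception:
            logger.exception(f"hook {fn.__name__} failed; passing state through")
            return state

    return wrapper


@safe_hook
def authenticate_user(state: dict, config: dict) -> dict:
    """Authenticate and authorize user requests"""
    return state
```
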
## Development

### Project Structure

- `dao_ai/`: Core package
  - `config.py`: Pydantic configuration models with full validation
  - `graph.py`: LangGraph workflow definition
  - `nodes.py`: Agent node factories and implementations
  - `tools/`: Tool creation and factory functions, implementations for Python tools
  - `vector_search.py`: Vector search utilities
  - `state.py`: State management for conversations
- `tests/`: Test suite with configuration fixtures
- `schemas/`: JSON schemas for configuration validation
- `notebooks/`: Jupyter notebooks for setup and experimentation
- `docs/`: Documentation files, including architecture diagrams
- `config/`: Contains [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml)

### Building the Package

```bash
# Install development dependencies
make depends

# Build the package
make install

# Run tests
make test

# Format code
make format
```

## Deployment with Databricks Bundle CLI

The agent can be deployed using the existing Databricks Bundle CLI configuration:

1. Ensure the Databricks CLI (v0.205 or later, which includes `bundle` support) is installed and configured:
   ```bash
   databricks configure
   ```

2. Deploy using the existing `databricks.yml`:
   ```bash
   databricks bundle deploy
   ```

3. Check the deployed resources:
   ```bash
   databricks bundle summary
   ```

## Usage

Once deployed, interact with the agent:

```python
from mlflow.deployments import get_deploy_client

client = get_deploy_client("databricks")
response = client.predict(
    endpoint="retail_ai_agent",  # Matches endpoint_name in model_config.yaml
    inputs={
        "messages": [
            {"role": "user", "content": "Can you recommend a lamp for my oak side tables?"}
        ]
    }
)

print(response["message"]["content"])
```

### Advanced Configuration

You can also pass additional configuration parameters to customize the agent's behavior:

```python
response = client.predict(
    endpoint="retail_ai_agent",
    inputs={
        "messages": [
            {"role": "user", "content": "Can you recommend a lamp for my oak side tables?"}
        ],
        "configurable": {
            "thread_id": "1",
            "user_id": "my_user_id",
            "store_num": 87887
        }
    }
)
```

The `configurable` section supports:
- **`thread_id`**: Unique identifier for conversation threading and state management
- **`user_id`**: User identifier for personalization and tracking
- **`store_num`**: Store number for location-specific recommendations and inventory

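Because `thread_id` keys the checkpointed conversation state, reusing it continues the same conversation (illustrative values, continuing the example above):

```python
# A follow-up request in the same thread; the agent can resolve "it"
# from the conversation state stored by the checkpointer.
followup = client.predict(
    endpoint="retail_ai_agent",
    inputs={
        "messages": [
            {"role": "user", "content": "Does it come in brushed nickel?"}
        ],
        "configurable": {"thread_id": "1", "user_id": "my_user_id", "store_num": 87887}
    }
)
```
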
## Customization

To customize the agent:

1. **Update [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml)**:
   - Add tools in the `tools` section
   - Create agents in the `agents` section
   - Configure resources (LLMs, vector stores, etc.)
   - Adjust orchestration patterns as described above

2. **Implement new tools** in `dao_ai/tools/` (for Python and factory tools) or in Unity Catalog (for UC tools).

3. **Extend workflows** in `dao_ai/graph.py` to support the chosen orchestration patterns and agent interactions.

## Testing

```bash
# Run all tests
make test
```

## Logging

The primary log level for the application is configured in [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml) under the `app.log_level` field.

**Configuration Example:**
```yaml
app:
  log_level: INFO  # Supported levels: DEBUG, INFO, WARNING, ERROR, CRITICAL
  # ... other app configuration ...
```

This setting controls the verbosity of logs produced by the `dao_ai` package.

The system also includes:
- **MLflow tracing** for request tracking
- **Structured logging** used internally

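For local experimentation, MLflow tracing of the LangChain/LangGraph layer can be switched on with autologging (a sketch; deployed serving endpoints are typically instrumented for you):

```python
import mlflow

# Emit MLflow traces for LangChain/LangGraph calls made in this process.
mlflow.langchain.autolog()
```
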
## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.