dao-ai 0.0.25__py3-none-any.whl → 0.1.2__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (63)
  1. dao_ai/__init__.py +29 -0
  2. dao_ai/agent_as_code.py +5 -5
  3. dao_ai/cli.py +245 -40
  4. dao_ai/config.py +1863 -338
  5. dao_ai/genie/__init__.py +38 -0
  6. dao_ai/genie/cache/__init__.py +43 -0
  7. dao_ai/genie/cache/base.py +72 -0
  8. dao_ai/genie/cache/core.py +79 -0
  9. dao_ai/genie/cache/lru.py +347 -0
  10. dao_ai/genie/cache/semantic.py +970 -0
  11. dao_ai/genie/core.py +35 -0
  12. dao_ai/graph.py +27 -228
  13. dao_ai/hooks/__init__.py +9 -6
  14. dao_ai/hooks/core.py +27 -195
  15. dao_ai/logging.py +56 -0
  16. dao_ai/memory/__init__.py +10 -0
  17. dao_ai/memory/core.py +65 -30
  18. dao_ai/memory/databricks.py +402 -0
  19. dao_ai/memory/postgres.py +79 -38
  20. dao_ai/messages.py +6 -4
  21. dao_ai/middleware/__init__.py +125 -0
  22. dao_ai/middleware/assertions.py +806 -0
  23. dao_ai/middleware/base.py +50 -0
  24. dao_ai/middleware/core.py +67 -0
  25. dao_ai/middleware/guardrails.py +420 -0
  26. dao_ai/middleware/human_in_the_loop.py +232 -0
  27. dao_ai/middleware/message_validation.py +586 -0
  28. dao_ai/middleware/summarization.py +197 -0
  29. dao_ai/models.py +1306 -114
  30. dao_ai/nodes.py +261 -166
  31. dao_ai/optimization.py +674 -0
  32. dao_ai/orchestration/__init__.py +52 -0
  33. dao_ai/orchestration/core.py +294 -0
  34. dao_ai/orchestration/supervisor.py +278 -0
  35. dao_ai/orchestration/swarm.py +271 -0
  36. dao_ai/prompts.py +128 -31
  37. dao_ai/providers/databricks.py +645 -172
  38. dao_ai/state.py +157 -21
  39. dao_ai/tools/__init__.py +13 -5
  40. dao_ai/tools/agent.py +1 -3
  41. dao_ai/tools/core.py +64 -11
  42. dao_ai/tools/email.py +232 -0
  43. dao_ai/tools/genie.py +144 -295
  44. dao_ai/tools/mcp.py +220 -133
  45. dao_ai/tools/memory.py +50 -0
  46. dao_ai/tools/python.py +9 -14
  47. dao_ai/tools/search.py +14 -0
  48. dao_ai/tools/slack.py +22 -10
  49. dao_ai/tools/sql.py +202 -0
  50. dao_ai/tools/time.py +30 -7
  51. dao_ai/tools/unity_catalog.py +165 -88
  52. dao_ai/tools/vector_search.py +360 -40
  53. dao_ai/utils.py +218 -16
  54. dao_ai-0.1.2.dist-info/METADATA +455 -0
  55. dao_ai-0.1.2.dist-info/RECORD +64 -0
  56. {dao_ai-0.0.25.dist-info → dao_ai-0.1.2.dist-info}/WHEEL +1 -1
  57. dao_ai/chat_models.py +0 -204
  58. dao_ai/guardrails.py +0 -112
  59. dao_ai/tools/human_in_the_loop.py +0 -100
  60. dao_ai-0.0.25.dist-info/METADATA +0 -1165
  61. dao_ai-0.0.25.dist-info/RECORD +0 -41
  62. {dao_ai-0.0.25.dist-info → dao_ai-0.1.2.dist-info}/entry_points.txt +0 -0
  63. {dao_ai-0.0.25.dist-info → dao_ai-0.1.2.dist-info}/licenses/LICENSE +0 -0
@@ -1,1165 +0,0 @@ dao_ai-0.0.25.dist-info/METADATA (removed; original contents below)
Metadata-Version: 2.4
Name: dao-ai
Version: 0.0.25
Summary: DAO AI: A modular, multi-agent orchestration framework for complex AI workflows. Supports agent handoff, tool integration, and dynamic configuration via YAML.
Project-URL: Homepage, https://github.com/natefleming/dao-ai
Project-URL: Documentation, https://natefleming.github.io/dao-ai
Project-URL: Repository, https://github.com/natefleming/dao-ai
Project-URL: Issues, https://github.com/natefleming/dao-ai/issues
Project-URL: Changelog, https://github.com/natefleming/dao-ai/blob/main/CHANGELOG.md
Author-email: Nate Fleming <nate.fleming@databricks.com>, Nate Fleming <nate.fleming@gmail.com>
Maintainer-email: Nate Fleming <nate.fleming@databricks.com>
License: MIT
License-File: LICENSE
Keywords: agents,ai,databricks,langchain,langgraph,llm,multi-agent,orchestration,vector-search,workflow
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Distributed Computing
Requires-Python: >=3.12
Requires-Dist: databricks-agents>=1.7.0
Requires-Dist: databricks-langchain>=0.8.1
Requires-Dist: databricks-mcp>=0.3.0
Requires-Dist: databricks-sdk[openai]>=0.67.0
Requires-Dist: duckduckgo-search>=8.0.2
Requires-Dist: grandalf>=0.8
Requires-Dist: langchain-mcp-adapters>=0.1.10
Requires-Dist: langchain-tavily>=0.2.11
Requires-Dist: langchain>=0.3.27
Requires-Dist: langgraph-checkpoint-postgres>=2.0.25
Requires-Dist: langgraph-supervisor>=0.0.29
Requires-Dist: langgraph-swarm>=0.0.14
Requires-Dist: langgraph>=0.6.10
Requires-Dist: langmem>=0.0.29
Requires-Dist: loguru>=0.7.3
Requires-Dist: mcp>=1.17.0
Requires-Dist: mlflow>=3.4.0
Requires-Dist: nest-asyncio>=1.6.0
Requires-Dist: openevals>=0.0.19
Requires-Dist: openpyxl>=3.1.5
Requires-Dist: psycopg[binary,pool]>=3.2.9
Requires-Dist: pydantic>=2.12.0
Requires-Dist: python-dotenv>=1.1.0
Requires-Dist: pyyaml>=6.0.2
Requires-Dist: rich>=14.0.0
Requires-Dist: scipy<=1.15
Requires-Dist: sqlparse>=0.5.3
Requires-Dist: unitycatalog-ai[databricks]>=0.3.0
Provides-Extra: databricks
Requires-Dist: databricks-connect>=15.0.0; extra == 'databricks'
Requires-Dist: databricks-vectorsearch>=0.56; extra == 'databricks'
Requires-Dist: pyspark>=3.5.0; extra == 'databricks'
Provides-Extra: dev
Requires-Dist: mypy>=1.0.0; extra == 'dev'
Requires-Dist: pre-commit>=3.0.0; extra == 'dev'
Requires-Dist: pytest>=8.3.5; extra == 'dev'
Requires-Dist: ruff>=0.11.11; extra == 'dev'
Provides-Extra: docs
Requires-Dist: mkdocs-material>=9.0.0; extra == 'docs'
Requires-Dist: mkdocs>=1.5.0; extra == 'docs'
Requires-Dist: mkdocstrings[python]>=0.24.0; extra == 'docs'
Provides-Extra: test
Requires-Dist: pytest-cov>=4.0.0; extra == 'test'
Requires-Dist: pytest-mock>=3.10.0; extra == 'test'
Requires-Dist: pytest>=8.3.5; extra == 'test'
Description-Content-Type: text/markdown

# Declarative Agent Orchestration (DAO) Framework

A modular, multi-agent orchestration framework for building sophisticated AI workflows on Databricks. While this implementation provides a complete retail AI reference architecture, the framework is designed to support any domain or use case requiring agent coordination, tool integration, and dynamic configuration.

## Overview

This project implements a LangGraph-based multi-agent orchestration framework that can:

- **Route queries** to specialized agents based on content and context
- **Coordinate multiple AI agents** working together on complex tasks
- **Integrate diverse tools** including databases, APIs, vector search, and external services
- **Support flexible orchestration patterns** (supervisor, swarm, and custom workflows)
- **Provide dynamic configuration** through YAML-based agent and tool definitions
- **Enable domain-specific specialization** while maintaining a unified interface

**Retail Reference Implementation**: This repository includes a complete retail AI system demonstrating:
- Product inventory management and search
- Customer recommendation engines
- Order tracking and management
- Product classification and information retrieval

The system uses Databricks Vector Search, Unity Catalog, and LLMs to provide accurate, context-aware responses across any domain.

## Key Features

- **Multi-Modal Interface**: CLI commands and Python API for development and deployment
- **Agent Lifecycle Management**: Create, deploy, and monitor agents programmatically
- **Vector Search Integration**: Built-in support for Databricks Vector Search with retrieval tools
- **Configuration-Driven**: YAML-based configuration with validation and IDE support
- **MLflow Integration**: Automatic model packaging, versioning, and deployment
- **Monitoring & Evaluation**: Built-in assessment and monitoring capabilities

## Architecture

### Overview

The Multi-Agent AI system is built as a component-based agent architecture that routes queries to specialized agents based on the nature of the request. This approach enables domain-specific handling while maintaining a unified interface that can be adapted to any industry or use case.

![View Architecture Diagram](./docs/hardware_store/retail_supervisor.png)

### Core Components

#### Configuration Components

All components are defined in the provided [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml) using a modular approach:

- **Schemas**: Define database and catalog structures
- **Resources**: Configure infrastructure components like LLMs, vector stores, catalogs, warehouses, and databases
- **Tools**: Define functions that agents can use to perform tasks (dictionary-based with keys as tool names)
- **Agents**: Specialized AI assistants configured for specific domains (dictionary-based with keys as agent names)
- **Guardrails**: Quality control mechanisms to ensure accurate responses
- **Retrievers**: Configuration for vector search and retrieval
- **Evaluation**: Configuration for model evaluation and testing
- **Datasets**: Configuration for training and evaluation datasets
- **App**: Overall application configuration including orchestration and logging

#### Message Processing Flow

The system uses a LangGraph-based workflow with the following key nodes (a minimal sketch follows the list):

- **Message Validation**: Validates incoming requests (`message_validation_node`)
- **Agent Routing**: Routes messages to appropriate specialized agents using supervisor or swarm patterns
- **Agent Execution**: Processes requests using specialized agents with their configured tools
- **Response Generation**: Returns structured responses to users

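The wiring below is a minimal sketch of this flow using LangGraph primitives. The node bodies and routing logic are illustrative stand-ins; the real implementations live in `dao_ai/graph.py` and `dao_ai/nodes.py`.

```python
# Minimal sketch of the message flow above; node bodies are stand-ins
# for the real implementations in dao_ai/graph.py and dao_ai/nodes.py.
from typing import TypedDict

from langgraph.graph import END, START, StateGraph


class AgentState(TypedDict):
    messages: list


def message_validation_node(state: AgentState) -> AgentState:
    # Validate the incoming request before routing (stand-in body).
    return state


def route_to_agent(state: AgentState) -> str:
    # Inspect the conversation and pick a specialized agent (stand-in body).
    return "general"


workflow = StateGraph(AgentState)
workflow.add_node("message_validation", message_validation_node)
workflow.add_node("general", lambda state: state)  # stand-in for an agent node
workflow.add_edge(START, "message_validation")
workflow.add_conditional_edges("message_validation", route_to_agent)
workflow.add_edge("general", END)
graph = workflow.compile()
```
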
#### Specialized Agents

Agents are dynamically configured from the provided [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml) file and can include:
- Custom LLM models and parameters
- Specific sets of available tools (Python functions, Unity Catalog functions, factory tools, MCP services)
- Domain-specific system prompts
- Guardrails for response quality
- Handoff prompts for agent coordination

### Technical Implementation

The system is implemented using:

- **LangGraph**: For workflow orchestration and state management
- **LangChain**: For LLM interactions and tool integration
- **MLflow**: For model tracking and deployment
- **Databricks**: LLM APIs, Vector Search, Unity Catalog, and Model Serving
- **Pydantic**: For configuration validation and schema management

## Prerequisites

- Python 3.12+
- Databricks workspace with access to:
  - Unity Catalog
  - Model Serving
  - Vector Search
  - Genie (optional)
- Databricks CLI configured with appropriate permissions
- Databricks model endpoints for LLMs and embeddings

## Setup

1. Clone this repository
2. Install dependencies:

```bash
# Create and activate a Python virtual environment
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dependencies using Makefile
make install
```

3. Configure Databricks CLI with appropriate workspace access

## Quick Start

### Option 1: Using Python API (Recommended for Development)

```python
from retail_ai.config import AppConfig

# Load your configuration
config = AppConfig.from_file("config/hardware_store/supervisor_postgres.yaml")

# Create vector search infrastructure
for name, vector_store in config.resources.vector_stores.items():
    vector_store.create()

# Create and deploy your agent
config.create_agent()
config.deploy_agent()
```

### Option 2: Using CLI Commands

```bash
# Validate configuration
dao-ai validate -c config/hardware_store/supervisor_postgres.yaml

# Generate workflow diagram
dao-ai graph -o architecture.png

# Deploy using Databricks Asset Bundles
dao-ai bundle --deploy --run

# Deploy using Databricks Asset Bundles with specific configuration
dao-ai -vvvv bundle --deploy --run --target dev --config config/hardware_store/supervisor_postgres.yaml --profile DEFAULT
```

See the [Python API](#python-api) section for detailed programmatic usage, or [Command Line Interface](#command-line-interface) for CLI usage.

## Command Line Interface

The framework includes a comprehensive CLI for managing, validating, and visualizing your multi-agent system:

### Schema Generation
Generate JSON schema for configuration validation and IDE autocompletion:
```bash
dao-ai schema > schema.json
```

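The generated schema can also drive programmatic validation. A minimal sketch, assuming the `jsonschema` package (not a dao-ai dependency) is installed:

```python
# Validate a config file against the schema produced by `dao-ai schema`.
# Assumes the `jsonschema` package is installed (not a dao-ai dependency).
import json

import jsonschema
import yaml

with open("schema.json") as f:
    schema = json.load(f)

with open("config/hardware_store/supervisor_postgres.yaml") as f:
    document = yaml.safe_load(f)

# Raises jsonschema.ValidationError if the document does not conform.
jsonschema.validate(instance=document, schema=schema)
print("Configuration is valid against the schema")
```
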
### Configuration Validation
Validate your configuration file for syntax and semantic correctness:
```bash
# Validate default configuration (config/hardware_store/supervisor_postgres.yaml)
dao-ai validate

# Validate specific configuration file
dao-ai validate -c config/production.yaml
```

### Graph Visualization
Generate visual representations of your agent workflow:
```bash
# Generate architecture diagram (using default config/hardware_store/supervisor_postgres.yaml)
dao-ai graph -o architecture.png

# Generate diagram from specific config
dao-ai graph -o workflow.png -c config/custom.yaml
```

### Deployment
Deploy your multi-agent system using Databricks Asset Bundles:
```bash
# Deploy the system
dao-ai bundle --deploy

# Run the deployed system
dao-ai bundle --run

# Use specific Databricks profile
dao-ai bundle --deploy --run --profile my-profile
```

### Verbose Output
Add `-v`, `-vv`, `-vvv`, or `-vvvv` flags for increasing levels of verbosity (ERROR, WARNING, INFO, DEBUG, TRACE).

## Python API

The framework provides a comprehensive Python API for programmatic access to all functionality. The main entry point is the `AppConfig` class, which provides methods for agent lifecycle management, vector search operations, and configuration utilities.

### Quick Start

```python
from retail_ai.config import AppConfig

# Load configuration from file
config = AppConfig.from_file(path="config/hardware_store/supervisor_postgres.yaml")
```

### Agent Lifecycle Management

#### Creating Agents
Package and register your multi-agent system as an MLflow model:

```python
# Create agent with default settings
config.create_agent()

# Create agent with additional requirements and code paths
config.create_agent(
    additional_pip_reqs=["custom-package==1.0.0"],
    additional_code_paths=["./custom_modules"]
)
```

#### Deploying Agents
Deploy your registered agent to a Databricks serving endpoint:

```python
# Deploy agent to serving endpoint
config.deploy_agent()
```

The deployment process (step 1 is sketched after this list):
1. Retrieves the latest model version from MLflow
2. Creates or updates a Databricks model serving endpoint
3. Configures scaling, environment variables, and permissions
4. Sets up proper authentication and resource access

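Step 1 can be reproduced manually with the MLflow client. A sketch, assuming a Unity Catalog model registry and an illustrative registered model name:

```python
# Look up the latest registered version, as the deployment step does.
# The model name below is illustrative; yours comes from the configuration.
from mlflow import MlflowClient

mlflow_client = MlflowClient(registry_uri="databricks-uc")
versions = mlflow_client.search_model_versions("name = 'main.retail.retail_ai_agent'")
latest = max(versions, key=lambda v: int(v.version))
print(f"Latest registered version: {latest.version}")
```
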
### Vector Search Operations

#### Creating Vector Search Infrastructure
Create vector search endpoints and indexes from your configuration:

```python
# Access vector stores from configuration
vector_stores = config.resources.vector_stores

# Create all vector stores
for name, vector_store in vector_stores.items():
    print(f"Creating vector store: {name}")
    vector_store.create()
```

#### Using Vector Search
Query your vector search indexes for retrieval-augmented generation:

```python
# Method 1: Direct index access
from retail_ai.config import RetrieverModel

question = "What products do you have in stock?"

for name, retriever in config.retrievers.items():
    # Get the vector search index
    index = retriever.vector_store.as_index()

    # Perform similarity search
    results = index.similarity_search(
        query_text=question,
        columns=retriever.columns,
        **retriever.search_parameters.model_dump()
    )

    chunks = results.get('result', {}).get('data_array', [])
    print(f"Found {len(chunks)} relevant results")
```

```python
# Method 2: LangChain integration
from databricks_langchain import DatabricksVectorSearch

for name, retriever in config.retrievers.items():
    # Create LangChain vector store
    vector_search = DatabricksVectorSearch(
        endpoint=retriever.vector_store.endpoint.name,
        index_name=retriever.vector_store.index.full_name,
        columns=retriever.columns,
    )

    # Search using LangChain interface
    documents = vector_search.similarity_search(
        query=question,
        **retriever.search_parameters.model_dump()
    )

    print(f"Found {len(documents)} documents")
```

### Configuration Utilities

The `AppConfig` class provides helper methods to find and filter configuration components:

#### Finding Agents
```python
# Get all agents
all_agents = config.find_agents()

# Find agents with specific criteria
def has_vector_search(agent):
    return any("vector_search" in tool.name.lower() for tool in agent.tools)

vector_agents = config.find_agents(predicate=has_vector_search)
```

#### Finding Tools and Guardrails
```python
# Get all tools
all_tools = config.find_tools()

# Get all guardrails
all_guardrails = config.find_guardrails()

# Find tools by type
def is_python_tool(tool):
    return tool.function.type == "python"

python_tools = config.find_tools(predicate=is_python_tool)
```

### Visualization

Generate and save workflow diagrams:

```python
# Display graph in notebook
config.display_graph()

# Save architecture diagram
config.save_image("docs/my_architecture.png")
```

### Complete Example

See [`notebooks/05_agent_as_code_driver.py`](notebooks/05_agent_as_code_driver.py) for a complete example:

```python
from retail_ai.config import AppConfig
from pathlib import Path

# Load configuration
config = AppConfig.from_file("config/hardware_store/supervisor_postgres.yaml")

# Visualize the workflow
config.display_graph()

# Save architecture diagram
path = Path("docs") / f"{config.app.name}_architecture.png"
config.save_image(path)

# Create and deploy the agent
config.create_agent()
config.deploy_agent()
```

For vector search examples, see [`notebooks/02_provision_vector_search.py`](notebooks/02_provision_vector_search.py).

### Available Notebooks

The framework includes several example notebooks demonstrating different aspects:

| Notebook | Description | Key Methods Demonstrated |
|----------|-------------|-------------------------|
| [`01_ingest_and_transform.py`](notebooks/01_ingest_and_transform.py) | Data ingestion and transformation | Dataset creation and SQL execution |
| [`02_provision_vector_search.py`](notebooks/02_provision_vector_search.py) | Vector search setup and usage | `vector_store.create()`, `as_index()` |
| [`03_generate_evaluation_data.py`](notebooks/03_generate_evaluation_data.py) | Generate synthetic evaluation datasets | Data generation and evaluation setup |
| [`04_unity_catalog_tools.py`](notebooks/04_unity_catalog_tools.py) | Unity Catalog function deployment | SQL function creation and testing |
| [`05_agent_as_code_driver.py`](notebooks/05_agent_as_code_driver.py) | **Complete agent lifecycle** | `create_agent()`, `deploy_agent()` |
| [`06_run_evaluation.py`](notebooks/06_run_evaluation.py) | Agent evaluation and testing | Evaluation framework usage |
| [`08_run_examples.py`](notebooks/08_run_examples.py) | End-to-end example queries | Agent interaction and testing |

## Configuration

Configuration is managed through [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml). This file defines all components of the Retail AI system, including resources, tools, agents, and the overall application setup.

**Note**: The configuration file location is configurable throughout the framework. You can specify a different configuration file using the `-c` or `--config` flag in CLI commands, or by setting the appropriate parameters in the Python API.

### Basic Structure of [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml)

The [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml) is organized into several top-level keys:

```yaml
# filepath: /Users/nate/development/dao-ai/config/hardware_store/supervisor_postgres.yaml
schemas:
  # ... schema definitions ...

resources:
  # ... resource definitions (LLMs, vector stores, etc.) ...

tools:
  # ... tool definitions ...

agents:
  # ... agent definitions ...

app:
  # ... application configuration ...

# Other sections like guardrails, retrievers, evaluation, datasets
```

### Loading and Using Configuration

The configuration can be loaded and used programmatically through the `AppConfig` class:

```python
from retail_ai.config import AppConfig

# Load configuration from file
config = AppConfig.from_file("config/hardware_store/supervisor_postgres.yaml")

# Access different configuration sections
print(f"Available agents: {list(config.agents.keys())}")
print(f"Available tools: {list(config.tools.keys())}")
print(f"Vector stores: {list(config.resources.vector_stores.keys())}")

# Use configuration methods for deployment
config.create_agent()  # Package as MLflow model
config.deploy_agent()  # Deploy to serving endpoint
```

The configuration supports both CLI and programmatic workflows, with the Python API providing more flexibility for complex deployment scenarios.

### Developing and Configuring Tools

Tools are functions that agents can use to interact with external systems or perform specific tasks. They are defined under the `tools` key in [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml). Each tool has a unique name and contains a `function` specification.

There are four types of tools supported:

#### 1. Python Tools (`type: python`)
These tools directly map to Python functions. The `name` field should correspond to a function that can be imported and called directly.

**Configuration Example:**
```yaml
tools:
  my_python_tool:
    name: my_python_tool
    function:
      type: python
      name: retail_ai.tools.my_function_name
      schema: *retail_schema  # Optional schema definition
```
**Development:**
Implement the Python function in the specified module (e.g., `retail_ai/tools.py`). The function will be imported and called directly when the tool is invoked.

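Such a function might look like the following sketch. The body is hypothetical; the framework only requires that the dotted `name` resolve to an importable callable:

```python
# Hypothetical implementation of retail_ai.tools.my_function_name for the
# configuration above; the framework imports and calls it directly.
def my_function_name(query: str) -> str:
    """Look up information for the agent and return a text answer."""
    return f"Results for: {query}"  # stand-in for real lookup logic
```
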
#### 2. Factory Tools (`type: factory`)
Factory tools use factory functions that return initialized LangChain `BaseTool` instances. This is useful for tools requiring complex initialization or configuration.

**Configuration Example:**
```yaml
tools:
  vector_search_tool:
    name: vector_search
    function:
      type: factory
      name: retail_ai.tools.create_vector_search_tool
      args:
        retriever: *products_retriever
        name: product_vector_search_tool
        description: "Search for products using vector search"
```
**Development:**
Implement the factory function (e.g., `create_vector_search_tool`) in `retail_ai/tools.py`. This function should accept the specified `args` and return a fully configured `BaseTool` object.

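A factory along these lines would satisfy the configuration above. This is a sketch: the retriever duck-typing and the use of `langchain_core.tools.Tool` are assumptions, not the framework's actual implementation:

```python
# Sketch of a factory function; the YAML `args` (retriever, name, description)
# arrive as keyword arguments, and a configured BaseTool is returned.
from langchain_core.tools import BaseTool, Tool


def create_vector_search_tool(retriever, name: str, description: str) -> BaseTool:
    def _search(query: str) -> str:
        # Assumes a retriever-like object exposing similarity_search().
        docs = retriever.similarity_search(query)
        return "\n\n".join(doc.page_content for doc in docs)

    return Tool(name=name, description=description, func=_search)
```
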
#### 3. Unity Catalog Tools (`type: unity_catalog`)
These tools represent SQL functions registered in Databricks Unity Catalog. They reference functions by their Unity Catalog schema and name.

**Configuration Example:**
```yaml
tools:
  find_product_by_sku_uc_tool:
    name: find_product_by_sku_uc
    function:
      type: unity_catalog
      name: find_product_by_sku
      schema: *retail_schema
```
**Development:**
Create the corresponding SQL function in your Databricks Unity Catalog using the specified schema and function name. The tool will automatically generate the appropriate function signature and documentation.

### Developing Unity Catalog Functions

Unity Catalog functions provide the backbone for data access in the multi-agent system. The framework automatically deploys these functions from SQL DDL files during system initialization.

#### Function Deployment Configuration

Unity Catalog functions are defined in the `unity_catalog_functions` section of [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml). Each function specification includes:

- **Function metadata**: Schema and name for Unity Catalog registration
- **DDL file path**: Location of the SQL file containing the function definition
- **Test parameters**: Optional test data for function validation

**Configuration Example from [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml):**
```yaml
unity_catalog_functions:
  - function:
      schema: *retail_schema  # Reference to schema configuration
      name: find_product_by_sku  # Function name in Unity Catalog
    ddl: ../functions/retail/find_product_by_sku.sql  # Path to SQL DDL file
    test:  # Optional test configuration
      parameters:
        sku: ["00176279"]  # Test parameters for validation
  - function:
      schema: *retail_schema
      name: find_store_inventory_by_sku
    ddl: ../functions/retail/find_store_inventory_by_sku.sql
    test:
      parameters:
        store: "35048"  # Multiple parameters for complex functions
        sku: ["00176279"]
```

#### SQL Function Structure

SQL files should follow this structure for proper deployment:

**File Structure Example** (`functions/retail/find_product_by_sku.sql`):
```sql
-- Function to find product details by SKU
CREATE OR REPLACE FUNCTION {catalog_name}.{schema_name}.find_product_by_sku(
  sku ARRAY<STRING> COMMENT 'One or more unique identifiers to retrieve. SKU values are 5-8 alphanumeric characters'
)
RETURNS TABLE(
  product_id BIGINT COMMENT 'Unique identifier for each product in the catalog',
  sku STRING COMMENT 'Stock Keeping Unit - unique internal product identifier code',
  upc STRING COMMENT 'Universal Product Code - standardized barcode number for product identification',
  brand_name STRING COMMENT 'Name of the manufacturer or brand that produces the product',
  product_name STRING COMMENT 'Display name of the product as shown to customers',
  -- ... additional columns
)
READS SQL DATA
COMMENT 'Retrieves detailed information about a specific product by its SKU. This function is designed for product information retrieval in retail applications.'
RETURN
SELECT
  product_id,
  sku,
  upc,
  brand_name,
  product_name
  -- ... additional columns
FROM products
WHERE ARRAY_CONTAINS(find_product_by_sku.sku, products.sku);
```

**Key Requirements:**
- Use `{catalog_name}.{schema_name}` placeholders - these are automatically replaced during deployment (see the sketch after this list)
- Include comprehensive `COMMENT` attributes for all parameters and return columns
- Provide a clear function-level comment describing purpose and use cases
- Use `READS SQL DATA` for functions that query data
- Follow consistent naming conventions for parameters and return values

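The placeholder substitution in the first requirement can be pictured as a simple template render. A sketch with hypothetical names; the framework's actual deployment logic may differ:

```python
# Illustrative sketch of the {catalog_name}.{schema_name} substitution step.
from pathlib import Path


def render_ddl(ddl_path: str, catalog_name: str, schema_name: str) -> str:
    """Read a DDL file and fill in the catalog/schema placeholders."""
    template = Path(ddl_path).read_text()
    return template.format(catalog_name=catalog_name, schema_name=schema_name)


# The rendered SQL can then be executed against a Databricks SQL warehouse.
sql = render_ddl(
    "functions/retail/find_product_by_sku.sql",
    catalog_name="main",
    schema_name="retail",
)
```
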
#### Test Configuration

The optional `test` section allows you to define test parameters for automatic function validation:

```yaml
test:
  parameters:
    sku: ["00176279"]  # Single parameter
    # OR for multi-parameter functions:
    store: "35048"  # Multiple parameters
    sku: ["00176279"]
```

**Test Benefits:**
- **Validation**: Ensures functions work correctly after deployment
- **Documentation**: Provides example usage for other developers
- **CI/CD Integration**: Enables automated testing in deployment pipelines

**Note**: Test parameters should use realistic data from your datasets to ensure meaningful validation. The framework will execute these tests automatically during deployment to verify function correctness.

#### 4. MCP (Model Context Protocol) Tools (`type: mcp`)
MCP tools allow interaction with external services that implement the Model Context Protocol, supporting both HTTP and stdio transports.

**Configuration Example (Direct URL):**
```yaml
tools:
  weather_tool_mcp:
    name: weather
    function:
      type: mcp
      name: weather
      transport: streamable_http
      url: http://localhost:8000/mcp
```

**Configuration Example (Unity Catalog Connection):**
MCP tools can also use Unity Catalog Connections for secure, governed access with on-behalf-of-user capabilities. The connection provides OAuth authentication, while the URL specifies the endpoint:
```yaml
resources:
  connections:
    github_connection: &github_connection
      name: github_u2m_connection  # UC Connection name

tools:
  github_mcp:
    name: github_mcp
    function:
      type: mcp
      name: github_mcp
      transport: streamable_http
      url: https://workspace.databricks.com/api/2.0/mcp/external/github_u2m_connection  # MCP endpoint URL
      connection: *github_connection  # UC Connection provides OAuth authentication
```

**Development:**
- **For direct URL connections**: Ensure the MCP service is running and accessible at the specified URL or command. Provide OAuth credentials (client_id, client_secret) or a PAT for authentication.
- **For UC Connection**: The URL is required to specify the endpoint. The connection provides OAuth authentication via the workspace client. Ensure the connection is configured in Unity Catalog with appropriate MCP scopes (`mcp.genie`, `mcp.functions`, `mcp.vectorsearch`, `mcp.external`).
- The framework will handle the MCP protocol communication automatically, including session management and authentication.

### Configuring New Agents

Agents are specialized AI assistants defined under the `agents` key in [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml). Each agent has a unique name and specific configuration.

**Configuration Example:**
```yaml
agents:
  general:
    name: general
    description: "General retail store assistant for home improvement and hardware store inquiries"
    model: *tool_calling_llm
    tools:
      - *find_product_details_by_description_tool
      - *vector_search_tool
    guardrails: []
    checkpointer: *checkpointer
    prompt: |
      You are a helpful retail store assistant for a home improvement and hardware store.
      You have access to search tools to find current information about products, pricing, and store policies.

      #### CRITICAL INSTRUCTION: ALWAYS USE SEARCH TOOLS FIRST
      Before answering ANY question:
      - ALWAYS use your available search tools to find the most current and accurate information
      - Search for specific details about store policies, product availability, pricing, and services
```

**Agent Configuration Fields:**
- `name`: Unique identifier for the agent
- `description`: Human-readable description of the agent's purpose
- `model`: Reference to an LLM model (using YAML anchors like `*tool_calling_llm`)
- `tools`: Array of tool references (using YAML anchors like `*search_tool`)
- `guardrails`: Array of guardrail references (can be empty `[]`)
- `checkpointer`: Reference to a checkpointer for conversation state (optional)
- `prompt`: System prompt that defines the agent's behavior and instructions

**To configure a new agent:**
1. Add a new entry under the `agents` section with a unique key
2. Define the required fields: `name`, `description`, `model`, `tools`, and `prompt`
3. Optionally configure `guardrails` and `checkpointer`
4. Reference the agent in the application configuration using YAML anchors

### Assigning Tools to Agents

Tools are assigned to agents by referencing them using YAML anchors in the agent's `tools` array. Each tool must be defined in the `tools` section with an anchor (using `&tool_name`), then referenced in the agent configuration (using `*tool_name`).

**Example:**
```yaml
tools:
  search_tool: &search_tool
    name: search
    function:
      type: factory
      name: retail_ai.tools.search_tool
      args: {}

  genie_tool: &genie_tool
    name: genie
    function:
      type: factory
      name: retail_ai.tools.create_genie_tool
      args:
        genie_room: *retail_genie_room

agents:
  general:
    name: general
    description: "General retail store assistant"
    model: *tool_calling_llm
    tools:
      - *search_tool  # Reference to the search_tool anchor
      - *genie_tool   # Reference to the genie_tool anchor
    # ... other agent configuration
```

This YAML anchor system allows for:
- **Reusability**: The same tool can be assigned to multiple agents
- **Maintainability**: Tool configuration is centralized in one place
- **Consistency**: Tools are guaranteed to have the same configuration across agents

### Assigning Agents to the Application and Configuring Orchestration

Agents are made available to the application by listing their YAML anchors (defined in the `agents:` section) within the `agents` array under the `app` section. The `app.orchestration` section defines how these agents interact.

**Orchestration Configuration:**

The `orchestration` block within the `app` section allows you to define the interaction pattern. Your current configuration primarily uses a **Supervisor** pattern.

```yaml
# filepath: /Users/nate/development/dao-ai/config/hardware_store/supervisor_postgres.yaml
# ...
# app:
#   ...
#   agents:
#     - *orders
#     - *diy
#     - *product
#     # ... other agents referenced by their anchors
#     - *general
#   orchestration:
#     supervisor:
#       model: *tool_calling_llm  # LLM for the supervisor agent
#       default_agent: *general  # Agent to handle tasks if no specific agent is chosen
#     # swarm:  # Example of how a swarm might be configured if activated
#     #   model: *tool_calling_llm
# ...
```

**Orchestration Patterns:**

1. **Supervisor Pattern (Currently Active)**
    * Your configuration defines a `supervisor` block under `app.orchestration`.
    * `model`: Specifies the LLM (e.g., `*tool_calling_llm`) that the supervisor itself will use for its decision-making and routing logic.
    * `default_agent`: Specifies an agent (e.g., `*general`) that the supervisor will delegate to if it cannot determine a more specialized agent from the `app.agents` list or if the query is general.
    * The supervisor is responsible for receiving the initial user query, deciding which specialized agent (from the `app.agents` list) is best suited to handle it, and then passing the query to that agent. If no specific agent is a clear match, or if the query is general, it falls back to the `default_agent`.

2. **Swarm Pattern (Commented Out)**
    * Your configuration includes a commented-out `swarm` block. If activated, this would imply a different interaction model.
    * In a swarm, agents might collaborate more directly or work in parallel on different aspects of a query. The `model` under `swarm` would likely define the LLM used by the agents within the swarm or by a coordinating element of the swarm.
    * The specific implementation of how a swarm pattern behaves would be defined in your `retail_ai/graph.py` and `retail_ai/nodes.py`.

## Integration Hooks

The DAO framework provides several hook integration points that allow you to customize agent behavior and application lifecycle. These hooks enable you to inject custom logic at key points in the system without modifying the core framework code.

### Hook Types

#### Agent-Level Hooks

**Agent hooks** are defined at the individual agent level and allow you to customize specific agent behavior:

##### `create_agent_hook`
Used to provide a completely custom agent implementation. When this is provided, all other configuration is ignored. See: **Hook Implementation**

```yaml
agents:
  custom_agent:
    name: custom_agent
    description: "Agent with custom initialization"
    model: *tool_calling_llm
    create_agent_hook: my_package.hooks.initialize_custom_agent
    # ... other agent configuration
```

##### `pre_agent_hook`
Executed before an agent processes a message. Ideal for request preprocessing, logging, validation, or context injection. See: **Hook Implementation**

```yaml
agents:
  logging_agent:
    name: logging_agent
    description: "Agent with request logging"
    model: *tool_calling_llm
    pre_agent_hook: my_package.hooks.log_incoming_request
    # ... other agent configuration
```

##### `post_agent_hook`
Executed after an agent completes processing a message. Perfect for response post-processing, logging, metrics collection, or cleanup operations. See: **Hook Implementation**

```yaml
agents:
  analytics_agent:
    name: analytics_agent
    description: "Agent with response analytics"
    model: *tool_calling_llm
    post_agent_hook: my_package.hooks.collect_response_metrics
    # ... other agent configuration
```

#### Application-Level Hooks

**Application hooks** operate at the global application level and affect the entire system lifecycle:

##### `initialization_hooks`
Executed when the application starts up via `AppConfig.from_file()`. Use these for system initialization, resource setup, database connections, or external service configuration. See: **Hook Implementation**

```yaml
app:
  name: my_retail_app
  initialization_hooks:
    - my_package.hooks.setup_database_connections
    - my_package.hooks.initialize_external_apis
    - my_package.hooks.setup_monitoring
  # ... other app configuration
```

##### `shutdown_hooks`
Executed when the application shuts down (registered via `atexit`). Essential for cleanup operations, closing connections, saving state, or performing final logging. See: **Hook Implementation**

```yaml
app:
  name: my_retail_app
  shutdown_hooks:
    - my_package.hooks.cleanup_database_connections
    - my_package.hooks.save_session_data
    - my_package.hooks.send_shutdown_metrics
  # ... other app configuration
```

##### `message_hooks`
Executed for every message processed by the system. Useful for global logging, authentication, rate limiting, or message transformation. See: **Hook Implementation**

```yaml
app:
  name: my_retail_app
  message_hooks:
    - my_package.hooks.authenticate_user
    - my_package.hooks.apply_rate_limiting
    - my_package.hooks.transform_message_format
  # ... other app configuration
```

### Hook Implementation

Hooks can be implemented in any of the following forms:

1. **Python Functions**: Direct function references
```yaml
initialization_hooks: my_package.hooks.setup_function
```

2. **Factory Functions**: Functions that return configured tools or handlers
```yaml
initialization_hooks:
  type: factory
  name: my_package.hooks.create_setup_handler
  args:
    config_param: "value"
```

3. **Hook Lists**: Multiple hooks executed in sequence
```yaml
initialization_hooks:
  - my_package.hooks.setup_database
  - my_package.hooks.setup_cache
  - my_package.hooks.setup_monitoring
```

### Hook Function Signatures

Each hook type expects specific function signatures:

#### Agent Hooks
```python
# create_agent_hook
def initialize_custom_agent(state: dict, config: dict) -> dict:
    """Custom agent initialization logic"""
    pass

# pre_agent_hook
def log_incoming_request(state: dict, config: dict) -> dict:
    """Pre-process incoming request"""
    return state

# post_agent_hook
def collect_response_metrics(state: dict, config: dict) -> dict:
    """Post-process agent response"""
    return state
```

#### Application Hooks
```python
from retail_ai.config import AppConfig

# initialization_hooks
def setup_database_connections(config: AppConfig) -> None:
    """Initialize database connections"""
    pass

# shutdown_hooks
def cleanup_resources(config: AppConfig) -> None:
    """Clean up resources on shutdown"""
    pass

# message_hooks
def authenticate_user(state: dict, config: dict) -> dict:
    """Authenticate and authorize user requests"""
    return state
```

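Conceptually, shutdown hooks are resolved from their dotted paths and registered with `atexit`. The sketch below illustrates the idea and is not the framework's actual code:

```python
# Conceptual sketch of shutdown-hook registration via atexit.
import atexit
from importlib import import_module


def register_shutdown_hooks(config, hook_paths: list[str]) -> None:
    for path in hook_paths:
        module_name, _, func_name = path.rpartition(".")
        hook = getattr(import_module(module_name), func_name)
        atexit.register(hook, config)  # each hook receives the app config
```
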
### Use Cases and Examples

#### Common Hook Patterns

**Logging and Monitoring**:
```python
import time

from loguru import logger

def log_agent_performance(state: dict, config: AppConfig) -> dict:
    """Log agent response times and quality metrics"""
    start_time = state.get('start_time')
    if start_time:
        duration = time.time() - start_time
        logger.info(f"Agent response time: {duration:.2f}s")
    return state
```

**Authentication and Authorization**:
```python
def validate_user_permissions(state: dict, config: AppConfig) -> dict:
    """Validate user has permission for requested operation"""
    user_id = state.get('user_id')
    # has_permission and UnauthorizedError are application-specific helpers
    if not has_permission(user_id, state.get('operation')):
        raise UnauthorizedError("Insufficient permissions")
    return state
```

**Resource Management**:
```python
def initialize_vector_search(config: AppConfig) -> None:
    """Initialize vector search connections during startup"""
    for vs_name, vs_config in config.resources.vector_stores.items():
        vs_config.create()
        logger.info(f"Vector store {vs_name} initialized")
```

**State Enrichment**:
```python
def enrich_user_context(state: dict, config: AppConfig) -> dict:
    """Add user profile and preferences to state"""
    user_id = state.get('user_id')
    if user_id:
        user_profile = get_user_profile(user_id)  # application-specific lookup
        state['user_context'] = user_profile
    return state
```

### Best Practices

1. **Keep hooks lightweight**: Avoid heavy computations that could slow down message processing
2. **Handle errors gracefully**: Use try-except blocks to prevent hook failures from breaking the system (see the sketch after this list)
3. **Use appropriate hook types**: Choose agent-level vs application-level hooks based on scope
4. **Maintain state immutability**: Return modified copies of state rather than mutating in-place
5. **Log hook execution**: Include logging for troubleshooting and monitoring
6. **Test hooks independently**: Write unit tests for hook functions separate from the main application

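A hook that follows these practices might look like the sketch below; the state keys and tag value are illustrative:

```python
# Sketch of a message hook applying the practices above: defensive error
# handling, no in-place mutation, and execution logging.
from loguru import logger


def tag_request(state: dict, config: dict) -> dict:
    """Attach a processing tag without mutating the incoming state."""
    try:
        new_state = dict(state)                 # copy rather than mutate
        new_state["request_tag"] = "retail-v1"  # illustrative enrichment
        logger.info("tag_request applied to thread {}", new_state.get("thread_id"))
        return new_state
    except Exception:
        logger.exception("tag_request failed; passing state through unchanged")
        return state  # fail open so message processing continues
```
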
## Development

### Project Structure

- `retail_ai/`: Core package
  - `config.py`: Pydantic configuration models with full validation
  - `graph.py`: LangGraph workflow definition
  - `nodes.py`: Agent node factories and implementations
  - `tools.py`: Tool creation and factory functions, implementations for Python tools
  - `vector_search.py`: Vector search utilities
  - `state.py`: State management for conversations
- `tests/`: Test suite with configuration fixtures
- `schemas/`: JSON schemas for configuration validation
- `notebooks/`: Jupyter notebooks for setup and experimentation
- `docs/`: Documentation files, including architecture diagrams
- `config/`: Contains [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml)

### Building the Package

```bash
# Install development dependencies
make depends

# Build the package
make install

# Run tests
make test

# Format code
make format
```

## Deployment with Databricks Bundle CLI

The agent can be deployed using the existing Databricks Bundle CLI configuration:

1. Ensure Databricks CLI is installed and configured:
```bash
pip install databricks-cli
databricks configure
```

2. Deploy using the existing `databricks.yml`:
```bash
databricks bundle deploy
```

3. Check deployment status:
```bash
databricks bundle status
```

## Usage

Once deployed, interact with the agent:

```python
from mlflow.deployments import get_deploy_client

client = get_deploy_client("databricks")
response = client.predict(
    endpoint="retail_ai_agent",  # Matches endpoint_name in model_config.yaml
    inputs={
        "messages": [
            {"role": "user", "content": "Can you recommend a lamp for my oak side tables?"}
        ]
    }
)

print(response["message"]["content"])
```

### Advanced Configuration

You can also pass additional configuration parameters to customize the agent's behavior:

```python
response = client.predict(
    endpoint="retail_ai_agent",
    inputs={
        "messages": [
            {"role": "user", "content": "Can you recommend a lamp for my oak side tables?"}
        ],
        "configurable": {
            "thread_id": "1",
            "user_id": "my_user_id",
            "store_num": 87887
        }
    }
)
```

The `configurable` section supports:
- **`thread_id`**: Unique identifier for conversation threading and state management (see the follow-up example below)
- **`user_id`**: User identifier for personalization and tracking
- **`store_num`**: Store number for location-specific recommendations and inventory

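Reusing the same `thread_id` on a later call continues the conversation. Continuing the example above:

```python
# Follow-up turn in the same conversation thread (illustrative).
followup = client.predict(
    endpoint="retail_ai_agent",
    inputs={
        "messages": [
            {"role": "user", "content": "Does that lamp come in brushed nickel?"}
        ],
        "configurable": {
            "thread_id": "1",  # same thread as the previous call
            "user_id": "my_user_id",
            "store_num": 87887
        }
    }
)
```
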
## Customization

To customize the agent:

1. **Update [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml)**:
   - Add tools in the `tools` section
   - Create agents in the `agents` section
   - Configure resources (LLMs, vector stores, etc.)
   - Adjust orchestration patterns as described above

2. **Implement new tools** in `retail_ai/tools.py` (for Python and Factory tools) or in Unity Catalog (for UC tools).

3. **Extend workflows** in `retail_ai/graph.py` to support the chosen orchestration patterns and agent interactions.

## Testing

```bash
# Run all tests
make test
```

## Logging

The primary log level for the application is configured in [`model_config.yaml`](config/hardware_store/supervisor_postgres.yaml) under the `app.log_level` field.

**Configuration Example:**
```yaml
# filepath: /Users/nate/development/dao-ai/config/hardware_store/supervisor_postgres.yaml
app:
  log_level: INFO  # Supported levels: DEBUG, INFO, WARNING, ERROR, CRITICAL
  # ... other app configurations ...
```

This setting controls the verbosity of logs produced by the `retail_ai` package.

The system also includes:
- **MLflow tracing** for request tracking
- **Structured logging** used internally

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.