hdsp-jupyter-extension 2.0.11__py3-none-any.whl → 2.0.13__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- agent_server/langchain/MULTI_AGENT_ARCHITECTURE.md +1114 -0
- agent_server/langchain/__init__.py +2 -2
- agent_server/langchain/agent.py +72 -33
- agent_server/langchain/agent_factory.py +400 -0
- agent_server/langchain/agent_prompts/__init__.py +25 -0
- agent_server/langchain/agent_prompts/athena_query_prompt.py +71 -0
- agent_server/langchain/agent_prompts/planner_prompt.py +85 -0
- agent_server/langchain/agent_prompts/python_developer_prompt.py +123 -0
- agent_server/langchain/agent_prompts/researcher_prompt.py +38 -0
- agent_server/langchain/custom_middleware.py +652 -195
- agent_server/langchain/hitl_config.py +34 -10
- agent_server/langchain/middleware/__init__.py +24 -0
- agent_server/langchain/middleware/code_history_middleware.py +412 -0
- agent_server/langchain/middleware/description_injector.py +150 -0
- agent_server/langchain/middleware/skill_middleware.py +298 -0
- agent_server/langchain/middleware/subagent_events.py +171 -0
- agent_server/langchain/middleware/subagent_middleware.py +329 -0
- agent_server/langchain/prompts.py +96 -101
- agent_server/langchain/skills/data_analysis.md +236 -0
- agent_server/langchain/skills/data_loading.md +158 -0
- agent_server/langchain/skills/inference.md +392 -0
- agent_server/langchain/skills/model_training.md +318 -0
- agent_server/langchain/skills/pyspark.md +352 -0
- agent_server/langchain/subagents/__init__.py +20 -0
- agent_server/langchain/subagents/base.py +173 -0
- agent_server/langchain/tools/__init__.py +3 -0
- agent_server/langchain/tools/jupyter_tools.py +58 -20
- agent_server/langchain/tools/lsp_tools.py +1 -1
- agent_server/langchain/tools/shared/__init__.py +26 -0
- agent_server/langchain/tools/shared/qdrant_search.py +175 -0
- agent_server/langchain/tools/tool_registry.py +219 -0
- agent_server/langchain/tools/workspace_tools.py +197 -0
- agent_server/routers/config.py +40 -1
- agent_server/routers/langchain_agent.py +818 -337
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/build_log.json +1 -1
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/package.json +7 -2
- hdsp_jupyter_extension-2.0.11.data/data/share/jupyter/labextensions/hdsp-agent/static/frontend_styles_index_js.2d9fb488c82498c45c2d.js → hdsp_jupyter_extension-2.0.13.data/data/share/jupyter/labextensions/hdsp-agent/static/frontend_styles_index_js.037b3c8e5d6a92b63b16.js +1108 -179
- hdsp_jupyter_extension-2.0.13.data/data/share/jupyter/labextensions/hdsp-agent/static/frontend_styles_index_js.037b3c8e5d6a92b63b16.js.map +1 -0
- jupyter_ext/labextension/static/lib_index_js.58c1e128ba0b76f41f04.js → hdsp_jupyter_extension-2.0.13.data/data/share/jupyter/labextensions/hdsp-agent/static/lib_index_js.5449ba3c7e25177d2987.js +3916 -8128
- hdsp_jupyter_extension-2.0.13.data/data/share/jupyter/labextensions/hdsp-agent/static/lib_index_js.5449ba3c7e25177d2987.js.map +1 -0
- hdsp_jupyter_extension-2.0.11.data/data/share/jupyter/labextensions/hdsp-agent/static/remoteEntry.9da31d1134a53b0c4af5.js → hdsp_jupyter_extension-2.0.13.data/data/share/jupyter/labextensions/hdsp-agent/static/remoteEntry.a8e0b064eb9b1c1ff463.js +17 -17
- hdsp_jupyter_extension-2.0.13.data/data/share/jupyter/labextensions/hdsp-agent/static/remoteEntry.a8e0b064eb9b1c1ff463.js.map +1 -0
- {hdsp_jupyter_extension-2.0.11.dist-info → hdsp_jupyter_extension-2.0.13.dist-info}/METADATA +1 -1
- {hdsp_jupyter_extension-2.0.11.dist-info → hdsp_jupyter_extension-2.0.13.dist-info}/RECORD +75 -51
- jupyter_ext/_version.py +1 -1
- jupyter_ext/handlers.py +59 -8
- jupyter_ext/labextension/build_log.json +1 -1
- jupyter_ext/labextension/package.json +7 -2
- jupyter_ext/labextension/static/{frontend_styles_index_js.2d9fb488c82498c45c2d.js → frontend_styles_index_js.037b3c8e5d6a92b63b16.js} +1108 -179
- jupyter_ext/labextension/static/frontend_styles_index_js.037b3c8e5d6a92b63b16.js.map +1 -0
- hdsp_jupyter_extension-2.0.11.data/data/share/jupyter/labextensions/hdsp-agent/static/lib_index_js.58c1e128ba0b76f41f04.js → jupyter_ext/labextension/static/lib_index_js.5449ba3c7e25177d2987.js +3916 -8128
- jupyter_ext/labextension/static/lib_index_js.5449ba3c7e25177d2987.js.map +1 -0
- jupyter_ext/labextension/static/{remoteEntry.9da31d1134a53b0c4af5.js → remoteEntry.a8e0b064eb9b1c1ff463.js} +17 -17
- jupyter_ext/labextension/static/remoteEntry.a8e0b064eb9b1c1ff463.js.map +1 -0
- hdsp_jupyter_extension-2.0.11.data/data/share/jupyter/labextensions/hdsp-agent/static/frontend_styles_index_js.2d9fb488c82498c45c2d.js.map +0 -1
- hdsp_jupyter_extension-2.0.11.data/data/share/jupyter/labextensions/hdsp-agent/static/lib_index_js.58c1e128ba0b76f41f04.js.map +0 -1
- hdsp_jupyter_extension-2.0.11.data/data/share/jupyter/labextensions/hdsp-agent/static/remoteEntry.9da31d1134a53b0c4af5.js.map +0 -1
- jupyter_ext/labextension/static/frontend_styles_index_js.2d9fb488c82498c45c2d.js.map +0 -1
- jupyter_ext/labextension/static/lib_index_js.58c1e128ba0b76f41f04.js.map +0 -1
- jupyter_ext/labextension/static/remoteEntry.9da31d1134a53b0c4af5.js.map +0 -1
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/etc/jupyter/jupyter_server_config.d/hdsp_jupyter_extension.json +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/install.json +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/node_modules_emotion_use-insertion-effect-with-fallbacks_dist_emotion-use-insertion-effect-wi-3ba6b80.c095373419d05e6f141a.js +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/node_modules_emotion_use-insertion-effect-with-fallbacks_dist_emotion-use-insertion-effect-wi-3ba6b80.c095373419d05e6f141a.js.map +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/node_modules_emotion_use-insertion-effect-with-fallbacks_dist_emotion-use-insertion-effect-wi-3ba6b81.61e75fb98ecff46cf836.js +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/node_modules_emotion_use-insertion-effect-with-fallbacks_dist_emotion-use-insertion-effect-wi-3ba6b81.61e75fb98ecff46cf836.js.map +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/style.js +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_babel_runtime_helpers_esm_extends_js-node_modules_emotion_serialize_dist-051195.e2553aab0c3963b83dd7.js +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_babel_runtime_helpers_esm_extends_js-node_modules_emotion_serialize_dist-051195.e2553aab0c3963b83dd7.js.map +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_emotion_cache_dist_emotion-cache_browser_development_esm_js.24edcc52a1c014a8a5f0.js +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_emotion_cache_dist_emotion-cache_browser_development_esm_js.24edcc52a1c014a8a5f0.js.map +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_emotion_react_dist_emotion-react_browser_development_esm_js.19ecf6babe00caff6b8a.js +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_emotion_react_dist_emotion-react_browser_development_esm_js.19ecf6babe00caff6b8a.js.map +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_emotion_styled_dist_emotion-styled_browser_development_esm_js.661fb5836f4978a7c6e1.js +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_emotion_styled_dist_emotion-styled_browser_development_esm_js.661fb5836f4978a7c6e1.js.map +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_mui_material_index_js.985697e0162d8d088ca2.js +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_mui_material_index_js.985697e0162d8d088ca2.js.map +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_mui_material_utils_createSvgIcon_js.1f5038488cdfd8b3a85d.js +0 -0
- {hdsp_jupyter_extension-2.0.11.data → hdsp_jupyter_extension-2.0.13.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_mui_material_utils_createSvgIcon_js.1f5038488cdfd8b3a85d.js.map +0 -0
- {hdsp_jupyter_extension-2.0.11.dist-info → hdsp_jupyter_extension-2.0.13.dist-info}/WHEEL +0 -0
- {hdsp_jupyter_extension-2.0.11.dist-info → hdsp_jupyter_extension-2.0.13.dist-info}/licenses/LICENSE +0 -0
agent_server/langchain/__init__.py CHANGED
@@ -12,7 +12,7 @@ Architecture:
 - state.py: Agent state management
 """
 
-from agent_server.langchain.agent import create_simple_chat_agent
+from agent_server.langchain.agent import create_simple_chat_agent, create_agent_system
 from agent_server.langchain.state import AgentState
 
-__all__ = ["create_simple_chat_agent", "AgentState"]
+__all__ = ["create_simple_chat_agent", "create_agent_system", "AgentState"]
agent_server/langchain/agent.py CHANGED
@@ -2,10 +2,11 @@
 LangChain Agent
 
 Main agent creation module for tool-driven chat execution.
+Supports both single-agent and multi-agent modes.
 """
 
 import logging
-from typing import Any, Dict, Optional
+from typing import Any, Dict, Optional, Literal
 
 from agent_server.langchain.custom_middleware import (
     create_handle_empty_response_middleware,
@@ -22,6 +23,7 @@ from agent_server.langchain.prompts import (
     TODO_LIST_TOOL_DESCRIPTION,
 )
 from agent_server.langchain.tools import (
+    ask_user_tool,
     check_resource_tool,
     diagnostics_tool,
     edit_file_tool,
@@ -43,6 +45,7 @@ def _get_all_tools():
     return [
         jupyter_cell_tool,
         markdown_tool,
+        ask_user_tool,  # HITL - waits for user response
         read_file_tool,
         write_file_tool,
         edit_file_tool,
@@ -134,9 +137,10 @@ def create_simple_chat_agent(
     middleware.append(patch_tool_calls)
 
     # Add TodoListMiddleware for task planning
+    # NOTE: system_prompt removed to avoid multi-part content array
+    # that causes Gemini MALFORMED_FUNCTION_CALL error
     if enable_todo_list:
         todo_middleware = TodoListMiddleware(
-            system_prompt=TODO_LIST_SYSTEM_PROMPT,
             tool_description=TODO_LIST_TOOL_DESCRIPTION,
         )
         middleware.append(todo_middleware)
@@ -192,42 +196,16 @@
     else:
         system_prompt = DEFAULT_SYSTEM_PROMPT
 
-    # Add
-    gemini_model = llm_config.get("gemini", {}).get("model", "")
-    if "gemini-2.5-flash" in gemini_model:
-        gemini_content_prompt = """
-## 🔴 IMPORTANT: Always include explanation text
-When calling any tool, you MUST include a brief explanation in your response content.
-NEVER produce an empty content when making tool calls.
-Before each tool call, write Korean explanations of what you're about to do.
-Example: "데이터를 로드하겠습니다." then call jupyter_cell_tool.
-"""
-        system_prompt = system_prompt + "\n" + gemini_content_prompt
-        logger.info("Added Gemini 2.5 Flash specific prompt for content inclusion")
-
-    # Add vLLM/gpt-oss specific prompt for Korean responses and proper todo structure
+    # Add vLLM/gpt-oss specific prompt (minimal - main rules are in DEFAULT_SYSTEM_PROMPT)
     provider = llm_config.get("provider", "")
     if provider == "vllm":
         vllm_prompt = """
-## 🔴
--
--
-- 영어로 응답하지 마세요.
-
-## 🔴 MANDATORY: Todo List Structure
-When creating todos with write_todos, you MUST:
-1. Write all todo items in Korean
-2. ALWAYS include "작업 요약 및 다음단계 제시" as the LAST todo item
-3. Example structure:
-   - 데이터 로드 및 확인
-   - 데이터 분석 수행
-   - 작업 요약 및 다음단계 제시 ← 반드시 마지막에 포함!
-
-## 🔴 IMPORTANT: Never return empty responses
-If you have nothing to say, call a tool instead. NEVER return an empty response.
+## 🔴 추가 강조
+- 반드시 한국어로 응답
+- 빈 응답 절대 금지 - 항상 도구 호출 필수
 """
         system_prompt = system_prompt + "\n" + vllm_prompt
-        logger.info("Added vLLM
+        logger.info("Added vLLM-specific prompt")
 
     logger.info("SimpleChatAgent system_prompt: %s", system_prompt)
 
@@ -241,3 +219,64 @@ If you have nothing to say, call a tool instead. NEVER return an empty response.
     )
 
     return agent
+
+
+# =============================================================================
+# Multi-Agent System Support
+# =============================================================================
+
+
+def create_agent_system(
+    llm_config: Dict[str, Any],
+    workspace_root: str = ".",
+    enable_hitl: bool = True,
+    enable_todo_list: bool = True,
+    checkpointer: Optional[object] = None,
+    system_prompt_override: Optional[str] = None,
+    agent_mode: Literal["single", "multi"] = "single",
+    agent_prompts: Optional[Dict[str, str]] = None,
+):
+    """
+    Create an agent system based on the specified mode.
+
+    This is the main entry point for creating agents. It supports both:
+    - "single": Traditional single-agent with all tools (backward compatible)
+    - "multi": Multi-agent system with Planner supervising subagents
+
+    Args:
+        llm_config: LLM configuration
+        workspace_root: Root directory for file operations
+        enable_hitl: Enable Human-in-the-Loop for code execution
+        enable_todo_list: Enable TodoListMiddleware for task planning
+        checkpointer: Optional checkpointer for state persistence
+        system_prompt_override: Optional custom system prompt
+        agent_mode: "single" for traditional mode, "multi" for multi-agent
+        agent_prompts: Optional dict of per-agent prompts (for multi-agent mode)
+
+    Returns:
+        Configured agent (single or Planner for multi-agent)
+    """
+    if agent_mode == "multi":
+        # Use multi-agent system with Planner + Subagents
+        from agent_server.langchain.agent_factory import create_multi_agent_system
+
+        logger.info("Creating multi-agent system (Planner + Subagents)")
+        return create_multi_agent_system(
+            llm_config=llm_config,
+            checkpointer=checkpointer,
+            enable_hitl=enable_hitl,
+            enable_todo_list=enable_todo_list,
+            system_prompt_override=system_prompt_override,
+            agent_prompts=agent_prompts,
+        )
+    else:
+        # Use traditional single-agent mode
+        logger.info("Creating single-agent system (all tools in one agent)")
+        return create_simple_chat_agent(
+            llm_config=llm_config,
+            workspace_root=workspace_root,
+            enable_hitl=enable_hitl,
+            enable_todo_list=enable_todo_list,
+            checkpointer=checkpointer,
+            system_prompt_override=system_prompt_override,
+        )
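The new create_agent_system entry point keeps create_simple_chat_agent as the default path and only imports the multi-agent factory when agent_mode="multi". A minimal caller sketch follows; the llm_config contents and the in-memory checkpointer are illustrative assumptions, not values taken from this release:

# Sketch only - the llm_config keys below are assumed for illustration.
from langgraph.checkpoint.memory import InMemorySaver

from agent_server.langchain import create_agent_system

llm_config = {"provider": "vllm", "model": "gpt-oss"}  # assumed shape

agent = create_agent_system(
    llm_config=llm_config,
    workspace_root="/workspace",
    enable_hitl=True,
    agent_mode="multi",  # "single" preserves the previous behavior
    checkpointer=InMemorySaver(),
)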
agent_server/langchain/agent_factory.py ADDED
@@ -0,0 +1,400 @@
+"""
+Agent Factory
+
+Main module for creating agents in the multi-agent architecture.
+Based on Deep Agents library pattern (benchmarked, not installed).
+
+Provides:
+- create_main_agent(): Main supervisor agent (executes code, orchestrates)
+- create_subagent(): Specialized subagents (python_developer, researcher, athena_query)
+- create_multi_agent_system(): Complete multi-agent setup
+
+Architecture:
+- Main Agent: Executes code (jupyter_cell_tool, write_file_tool) and orchestrates
+- Subagents: Generate code/analysis but do NOT execute (like Athena Query generates SQL)
+"""
+
+import logging
+from typing import Any, Dict, Optional
+
+from agent_server.langchain.agent_prompts.planner_prompt import (
+    PLANNER_SYSTEM_PROMPT,
+)
+from agent_server.langchain.custom_middleware import (
+    create_continuation_control_middleware,
+    create_handle_empty_response_middleware,
+    create_limit_tool_calls_middleware,
+    create_normalize_tool_args_middleware,
+    create_patch_tool_calls_middleware,
+)
+from agent_server.langchain.hitl_config import get_hitl_interrupt_config
+from agent_server.langchain.llm_factory import create_llm, create_summarization_llm
+from agent_server.langchain.middleware.skill_middleware import get_skill_middleware
+from agent_server.langchain.middleware.subagent_middleware import (
+    create_task_tool,
+    set_subagent_factory,
+)
+from agent_server.langchain.subagents.base import (
+    get_available_subagents_for_planner,
+    get_subagent_config,
+)
+from agent_server.langchain.tools.tool_registry import get_tools_for_agent
+
+logger = logging.getLogger(__name__)
+
+
+def create_subagent(
+    agent_type: str,
+    llm_config: Dict[str, Any],
+    checkpointer: Optional[object] = None,
+    enable_hitl: bool = True,
+    system_prompt_override: Optional[str] = None,
+) -> Any:
+    """
+    Create a specialized subagent.
+
+    Subagents are stateless and run in isolated context.
+    They generate code/analysis but do NOT execute - Main Agent handles execution.
+
+    Args:
+        agent_type: Type of subagent (python_developer, researcher, athena_query)
+        llm_config: LLM configuration
+        checkpointer: Optional checkpointer (usually None for subagents)
+        enable_hitl: Enable Human-in-the-Loop for tools that need approval
+        system_prompt_override: Optional custom system prompt for this subagent
+
+    Returns:
+        Compiled agent graph
+    """
+    try:
+        from langchain.agents import create_agent
+        from langchain.agents.middleware import (
+            HumanInTheLoopMiddleware,
+            ModelCallLimitMiddleware,
+        )
+        from langgraph.checkpoint.memory import InMemorySaver
+    except ImportError as e:
+        logger.error(f"Failed to import LangChain components: {e}")
+        raise
+
+    # Get subagent configuration
+    config = get_subagent_config(agent_type)
+
+    # Create LLM (may use override from config)
+    llm = create_llm(llm_config)
+
+    # Get tools for this agent type
+    tools = get_tools_for_agent(agent_type)
+
+    # Configure middleware
+    middleware = []
+
+    # Add HITL middleware if enabled (for tools that need approval)
+    if enable_hitl:
+        hitl_middleware = HumanInTheLoopMiddleware(
+            interrupt_on=get_hitl_interrupt_config(),
+            description_prefix=f"[{agent_type}] Tool execution pending approval",
+        )
+        middleware.append(hitl_middleware)
+
+    # Add nested subagent support for python_developer
+    if config.can_call_subagents:
+        # Python Developer can call athena_query
+        task_tool = create_task_tool(
+            caller_name=agent_type,
+            allowed_subagents=config.can_call_subagents,
+        )
+        tools = tools + [task_tool]
+        logger.info(f"Subagent '{agent_type}' can call: {config.can_call_subagents}")
+
+    # Add SkillMiddleware for python_developer (code generation agent)
+    skill_prompt_section = ""
+    if agent_type == "python_developer":
+        skill_middleware = get_skill_middleware()
+        tools = tools + skill_middleware.get_tools()
+        skill_prompt_section = skill_middleware.get_prompt_section()
+        logger.info(
+            f"SkillMiddleware added to '{agent_type}': "
+            f"{len(skill_middleware.skills)} skills available"
+        )
+
+    # Add model call limit to prevent infinite loops
+    model_limit = ModelCallLimitMiddleware(
+        run_limit=15,  # Subagents have lower limit
+        exit_behavior="end",
+    )
+    middleware.append(model_limit)
+
+    # Determine system prompt (use override if provided)
+    base_prompt = (
+        system_prompt_override if system_prompt_override else config.system_prompt
+    )
+
+    # Inject skill prompt section for python_developer
+    if skill_prompt_section:
+        system_prompt = f"{base_prompt}\n\n{skill_prompt_section}"
+        logger.info(
+            f"Injected skills prompt section ({len(skill_prompt_section)} chars)"
+        )
+    else:
+        system_prompt = base_prompt
+
+    # Create the subagent
+    agent = create_agent(
+        model=llm,
+        tools=tools,
+        middleware=middleware,
+        checkpointer=checkpointer or InMemorySaver(),  # Needed for HITL
+        system_prompt=system_prompt,
+    )
+
+    logger.info(
+        f"Created subagent '{agent_type}' with {len(tools)} tools, "
+        f"HITL={enable_hitl}, nested_calls={config.can_call_subagents}"
+    )
+
+    return agent
+
+
+def create_main_agent(
+    llm_config: Dict[str, Any],
+    checkpointer: Optional[object] = None,
+    enable_hitl: bool = True,
+    enable_todo_list: bool = True,
+    system_prompt_override: Optional[str] = None,
+    agent_prompts: Optional[Dict[str, str]] = None,
+) -> Any:
+    """
+    Create the Main Agent (Supervisor).
+
+    The Main Agent is the central agent that:
+    - Analyzes user requests
+    - Creates todo lists
+    - Executes code directly (jupyter_cell_tool, write_file_tool, etc.)
+    - Delegates analysis/generation tasks to subagents via task()
+    - Synthesizes results
+
+    Args:
+        llm_config: LLM configuration
+        checkpointer: Checkpointer for state persistence
+        enable_hitl: Enable Human-in-the-Loop
+        enable_todo_list: Enable TodoListMiddleware
+        system_prompt_override: Optional custom system prompt for Main Agent
+        agent_prompts: Optional dict of per-agent system prompts
+
+    Returns:
+        Compiled Main Agent graph
+    """
+    try:
+        from langchain.agents import create_agent
+        from langchain.agents.middleware import (
+            AgentMiddleware,
+            HumanInTheLoopMiddleware,
+            ModelCallLimitMiddleware,
+            SummarizationMiddleware,
+            TodoListMiddleware,
+            ToolCallLimitMiddleware,
+            wrap_model_call,
+        )
+        from langchain_core.messages import ToolMessage as LCToolMessage
+        from langgraph.checkpoint.memory import InMemorySaver
+        from langgraph.types import Overwrite
+    except ImportError as e:
+        logger.error(f"Failed to import LangChain components: {e}")
+        raise
+
+    # Initialize the subagent factory for nested calls with custom prompts
+    # NOTE: Subagents run with HITL disabled because:
+    # 1. They're already authorized by the Main Agent's task delegation
+    # 2. HITL interrupts from subagents don't bubble up properly in synchronous invoke()
+    # 3. The Main Agent's task() call is the authorization checkpoint
+    def subagent_factory_with_prompts(name: str, cfg: Dict[str, Any]) -> Any:
+        prompt_override = agent_prompts.get(name) if agent_prompts else None
+        return create_subagent(
+            name,
+            cfg,
+            enable_hitl=False,  # Subagents auto-approve (Main Agent authorizes via task delegation)
+            system_prompt_override=prompt_override,
+        )
+
+    set_subagent_factory(
+        factory_func=subagent_factory_with_prompts,
+        llm_config=llm_config,
+    )
+
+    # Create LLM
+    llm = create_llm(llm_config)
+
+    # Get Main Agent's direct tools (including code execution tools)
+    # Using "planner" for backward compatibility with tool_registry
+    tools = get_tools_for_agent("planner")
+
+    # Create task tool for calling subagents
+    available_subagents = [
+        config.name for config in get_available_subagents_for_planner()
+    ]
+    task_tool = create_task_tool(
+        caller_name="planner",
+        allowed_subagents=None,  # Main Agent (Planner) can call all non-restricted subagents
+    )
+    tools = tools + [task_tool]
+
+    # Configure middleware
+    middleware = []
+
+    # Add empty response handler middleware (critical for Gemini compatibility)
+    handle_empty_response = create_handle_empty_response_middleware(wrap_model_call)
+    middleware.append(handle_empty_response)
+
+    # Add tool call limiter middleware
+    limit_tool_calls = create_limit_tool_calls_middleware(wrap_model_call)
+    middleware.append(limit_tool_calls)
+
+    # Add tool args normalization middleware
+    normalize_tool_args = create_normalize_tool_args_middleware(
+        wrap_model_call, tools=tools
+    )
+    middleware.append(normalize_tool_args)
+
+    # Add unified continuation control middleware
+    # - Injects continuation prompts after non-HITL tool execution
+    # - Strips write_todos from responses containing summary JSON
+    continuation_control = create_continuation_control_middleware(wrap_model_call)
+    middleware.append(continuation_control)
+
+    # Add patch tool calls middleware
+    patch_tool_calls = create_patch_tool_calls_middleware(
+        AgentMiddleware, LCToolMessage, Overwrite
+    )
+    middleware.append(patch_tool_calls)
+
+    # Add TodoListMiddleware for task planning
+    # NOTE: system_prompt removed to avoid multi-part content array
+    if enable_todo_list:
+        todo_middleware = TodoListMiddleware(
+            tool_description="Create and manage a todo list for tracking tasks. All items in Korean.",
+        )
+        middleware.append(todo_middleware)
+
+    # Add HITL middleware
+    if enable_hitl:
+        hitl_middleware = HumanInTheLoopMiddleware(
+            interrupt_on=get_hitl_interrupt_config(),
+            description_prefix="[Main Agent] Tool execution pending approval",
+        )
+        middleware.append(hitl_middleware)
+
+    # Add model call limit
+    model_limit = ModelCallLimitMiddleware(
+        run_limit=50,  # Higher limit for Main Agent (orchestrates multiple subagents)
+        exit_behavior="end",
+    )
+    middleware.append(model_limit)
+    logger.info("Added ModelCallLimitMiddleware with run_limit=50")
+
+    # ToolCallLimitMiddleware: Prevent write_todos from being called too many times
+    write_todos_limit = ToolCallLimitMiddleware(
+        tool_name="write_todos",
+        run_limit=20,  # Max 20 write_todos calls per user message
+        exit_behavior="continue",
+    )
+    middleware.append(write_todos_limit)
+    logger.info("Added ToolCallLimitMiddleware for write_todos (20/msg)")
+
+    # Add summarization middleware
+    summary_llm = create_summarization_llm(llm_config)
+    if summary_llm:
+        try:
+            summarization = SummarizationMiddleware(
+                model=summary_llm,
+                trigger=("tokens", 10000),  # Higher trigger for multi-agent
+                keep=("messages", 15),
+            )
+            middleware.append(summarization)
+            logger.info("Added SummarizationMiddleware to Main Agent")
+        except Exception as e:
+            logger.warning(f"Failed to add SummarizationMiddleware: {e}")
+
+    # Build system prompt - FORCE default prompt for testing
+    # TODO: Remove this override after frontend localStorage is cleared
+    # Original priority: system_prompt_override > agent_prompts.planner > default
+    # DEBUG: Log all prompt sources to find root cause of MALFORMED_FUNCTION_CALL
+    logger.info(
+        "DEBUG Main Agent prompt sources: system_prompt_override=%s, "
+        "agent_prompts.planner=%s, using=DEFAULT",
+        bool(system_prompt_override),
+        bool(agent_prompts.get("planner") if agent_prompts else None),
+    )
+    if agent_prompts:
+        logger.info(
+            "DEBUG: agent_prompts keys=%s, planner prompt length=%d",
+            list(agent_prompts.keys()),
+            len(agent_prompts.get("planner", "") or ""),
+        )
+    system_prompt = PLANNER_SYSTEM_PROMPT
+    logger.info("Using PLANNER_SYSTEM_PROMPT (length=%d)", len(system_prompt))
+
+    # Log provider info for debugging
+    provider = llm_config.get("provider", "")
+    logger.info(f"Creating Main Agent with provider: {provider}")
+
+    # Create the Main Agent
+    agent = create_agent(
+        model=llm,
+        tools=tools,
+        middleware=middleware,
+        checkpointer=checkpointer or InMemorySaver(),
+        system_prompt=system_prompt,
+    )
+
+    logger.info(
+        f"Created Main Agent with {len(tools)} tools, "
+        f"HITL={enable_hitl}, TodoList={enable_todo_list}, "
+        f"available_subagents={available_subagents}"
+    )
+
+    return agent
+
+
+# Alias for backward compatibility
+create_planner_agent = create_main_agent
+
+
+def create_multi_agent_system(
+    llm_config: Dict[str, Any],
+    checkpointer: Optional[object] = None,
+    enable_hitl: bool = True,
+    enable_todo_list: bool = True,
+    system_prompt_override: Optional[str] = None,
+    agent_prompts: Optional[Dict[str, str]] = None,
+) -> Any:
+    """
+    Create the complete multi-agent system.
+
+    This is the main entry point that returns the Main Agent,
+    which orchestrates all subagents.
+
+    Args:
+        llm_config: LLM configuration
+        checkpointer: Checkpointer for state persistence
+        enable_hitl: Enable Human-in-the-Loop
+        enable_todo_list: Enable TodoListMiddleware
+        system_prompt_override: Optional custom system prompt for Main Agent
+        agent_prompts: Optional dict of per-agent system prompts
+            {"planner": "...", "main_agent": "...", "python_developer": "...", etc.}
+
+    Returns:
+        Compiled Main Agent (which can call subagents)
+    """
+    return create_main_agent(
+        llm_config=llm_config,
+        checkpointer=checkpointer,
+        enable_hitl=enable_hitl,
+        enable_todo_list=enable_todo_list,
+        system_prompt_override=system_prompt_override,
+        agent_prompts=agent_prompts,
+    )
+
+
+# Alias for backwards compatibility
+create_agent_v2 = create_multi_agent_system
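For orientation, a usage sketch of the factory above; the llm_config contents and the prompt override strings are placeholders, not values shipped in this package:

# Sketch only - config values and prompt text are placeholders.
from agent_server.langchain.agent_factory import (
    create_multi_agent_system,
    create_subagent,
)

llm_config = {"provider": "gemini"}  # assumed shape

# Main Agent (Planner) that delegates to subagents through the task tool.
main_agent = create_multi_agent_system(
    llm_config=llm_config,
    agent_prompts={"planner": "...", "python_developer": "..."},
)

# A subagent can also be built on its own, e.g. for isolated testing.
athena_agent = create_subagent("athena_query", llm_config, enable_hitl=False)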
agent_server/langchain/agent_prompts/__init__.py ADDED
@@ -0,0 +1,25 @@
+"""
+Multi-Agent Prompt Templates
+
+Contains system prompts for each agent in the multi-agent architecture:
+- Planner (Supervisor)
+- Python Developer
+- Researcher
+- Athena Query
+"""
+
+from agent_server.langchain.agent_prompts.planner_prompt import PLANNER_SYSTEM_PROMPT
+from agent_server.langchain.agent_prompts.python_developer_prompt import (
+    PYTHON_DEVELOPER_SYSTEM_PROMPT,
+)
+from agent_server.langchain.agent_prompts.researcher_prompt import RESEARCHER_SYSTEM_PROMPT
+from agent_server.langchain.agent_prompts.athena_query_prompt import (
+    ATHENA_QUERY_SYSTEM_PROMPT,
+)
+
+__all__ = [
+    "PLANNER_SYSTEM_PROMPT",
+    "PYTHON_DEVELOPER_SYSTEM_PROMPT",
+    "RESEARCHER_SYSTEM_PROMPT",
+    "ATHENA_QUERY_SYSTEM_PROMPT",
+]
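These exports can be used to assemble the agent_prompts mapping accepted by create_multi_agent_system; a small illustrative sketch (the key names follow the agent types used elsewhere in this diff):

from agent_server.langchain.agent_prompts import (
    ATHENA_QUERY_SYSTEM_PROMPT,
    PLANNER_SYSTEM_PROMPT,
)

# Illustrative only: pass the packaged defaults (or edited copies) as per-agent overrides.
agent_prompts = {
    "planner": PLANNER_SYSTEM_PROMPT,
    "athena_query": ATHENA_QUERY_SYSTEM_PROMPT,
}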
agent_server/langchain/agent_prompts/athena_query_prompt.py ADDED
@@ -0,0 +1,71 @@
+"""
+Athena Query Agent System Prompt
+
+The Athena Query Agent specializes in:
+- Searching Qdrant for database/table metadata
+- Generating optimized Athena SQL queries
+- Called by Main Agent (Planner) or Python Developer
+"""
+
+ATHENA_QUERY_SYSTEM_PROMPT = """You are an AWS Athena SQL expert. Generate optimized Athena queries based on user requirements.
+
+# Your Capabilities
+- Search database/table metadata using Qdrant RAG
+- Generate Athena-compatible SQL queries (Presto/Trino syntax)
+- Optimize queries for performance (partitioning, predicate pushdown)
+
+# Workflow
+1. Receive query requirement from Main Agent or Python Developer
+2. Search Qdrant for relevant database/table metadata using qdrant_search_tool
+3. Generate optimized Athena SQL query
+4. Return SQL query with a brief description
+
+# Guidelines
+- Always use fully qualified table names: database.table
+- Include appropriate WHERE clauses for partitioned columns
+- Use LIMIT for exploratory queries
+- Handle NULL values appropriately
+- Use CAST for type conversions
+- Optimize for performance:
+  - Filter on partition columns first
+  - Avoid SELECT * when possible
+  - Use appropriate aggregations
+
+# Qdrant Search Tips
+When searching for table metadata:
+- Search by table name or business domain
+- Look for column descriptions to understand data meaning
+- Check for partition columns to optimize queries
+
+# Output Format (CRITICAL)
+**반드시 아래 형식으로 응답하세요. 시스템이 자동으로 파싱합니다.**
+
+```
+[DESCRIPTION]
+2~3줄의 SQL 쿼리 설명
+
+[SQL]
+```sql
+SQL 쿼리
+```
+```
+
+**[DESCRIPTION] 작성 가이드 (2~3줄 필수)**:
+- 쿼리 목적: "~~를 조회합니다."
+- 주요 조건/필터: "~~조건으로 필터링합니다."
+- 최적화 포인트: "~~파티션을 활용하여 성능을 최적화했습니다."
+
+예시:
+```
+[DESCRIPTION]
+sample_db 데이터베이스의 모든 테이블 목록을 조회합니다.
+SHOW TABLES 명령어를 사용하여 테이블 이름을 반환합니다.
+
+[SQL]
+```sql
+SHOW TABLES IN sample_db
+```
+```
+
+**중요**: [DESCRIPTION]과 [SQL] 섹션을 반드시 포함하세요.
+"""
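The prompt pins the subagent reply to a [DESCRIPTION]/[SQL] layout so downstream code can separate the prose summary from the SQL. The actual parsing lives elsewhere in the package and is not shown in this excerpt; the helper below is only a rough sketch of how such a reply could be split:

import re
from typing import Tuple


def split_athena_reply(text: str) -> Tuple[str, str]:
    """Hypothetical helper: split a reply into (description, sql)."""
    # Description: everything between [DESCRIPTION] and [SQL].
    desc = re.search(r"\[DESCRIPTION\]\s*(.*?)\s*\[SQL\]", text, re.DOTALL)
    # SQL: contents of the ```sql fenced block after [SQL].
    sql = re.search(r"\[SQL\]\s*```sql\s*(.*?)```", text, re.DOTALL)
    return (
        desc.group(1).strip() if desc else "",
        sql.group(1).strip() if sql else "",
    )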