hdsp-jupyter-extension 2.0.16__py3-none-any.whl → 2.0.19__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- agent_server/langchain/agent_prompts/planner_prompt.py +19 -11
- agent_server/langchain/custom_middleware.py +114 -65
- agent_server/langchain/llm_factory.py +2 -5
- agent_server/langchain/prompts.py +11 -7
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/build_log.json +1 -1
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/package.json +2 -2
- hdsp_jupyter_extension-2.0.16.data/data/share/jupyter/labextensions/hdsp-agent/static/frontend_styles_index_js.037b3c8e5d6a92b63b16.js → hdsp_jupyter_extension-2.0.19.data/data/share/jupyter/labextensions/hdsp-agent/static/frontend_styles_index_js.96745acc14125453fba8.js +36 -2
- hdsp_jupyter_extension-2.0.19.data/data/share/jupyter/labextensions/hdsp-agent/static/frontend_styles_index_js.96745acc14125453fba8.js.map +1 -0
- jupyter_ext/labextension/static/lib_index_js.90a86cec4c50b0798fb2.js → hdsp_jupyter_extension-2.0.19.data/data/share/jupyter/labextensions/hdsp-agent/static/lib_index_js.1917fbaea37d75dc69b3.js +23 -13
- hdsp_jupyter_extension-2.0.19.data/data/share/jupyter/labextensions/hdsp-agent/static/lib_index_js.1917fbaea37d75dc69b3.js.map +1 -0
- hdsp_jupyter_extension-2.0.16.data/data/share/jupyter/labextensions/hdsp-agent/static/remoteEntry.44355425a0862dac7bd1.js → hdsp_jupyter_extension-2.0.19.data/data/share/jupyter/labextensions/hdsp-agent/static/remoteEntry.d686ab71eb65b5ef8f15.js +3 -3
- jupyter_ext/labextension/static/remoteEntry.44355425a0862dac7bd1.js.map → hdsp_jupyter_extension-2.0.19.data/data/share/jupyter/labextensions/hdsp-agent/static/remoteEntry.d686ab71eb65b5ef8f15.js.map +1 -1
- {hdsp_jupyter_extension-2.0.16.dist-info → hdsp_jupyter_extension-2.0.19.dist-info}/METADATA +1 -1
- {hdsp_jupyter_extension-2.0.16.dist-info → hdsp_jupyter_extension-2.0.19.dist-info}/RECORD +44 -44
- jupyter_ext/_version.py +1 -1
- jupyter_ext/labextension/build_log.json +1 -1
- jupyter_ext/labextension/package.json +2 -2
- jupyter_ext/labextension/static/{frontend_styles_index_js.037b3c8e5d6a92b63b16.js → frontend_styles_index_js.96745acc14125453fba8.js} +36 -2
- jupyter_ext/labextension/static/frontend_styles_index_js.96745acc14125453fba8.js.map +1 -0
- hdsp_jupyter_extension-2.0.16.data/data/share/jupyter/labextensions/hdsp-agent/static/lib_index_js.90a86cec4c50b0798fb2.js → jupyter_ext/labextension/static/lib_index_js.1917fbaea37d75dc69b3.js +23 -13
- jupyter_ext/labextension/static/lib_index_js.1917fbaea37d75dc69b3.js.map +1 -0
- jupyter_ext/labextension/static/{remoteEntry.44355425a0862dac7bd1.js → remoteEntry.d686ab71eb65b5ef8f15.js} +3 -3
- hdsp_jupyter_extension-2.0.16.data/data/share/jupyter/labextensions/hdsp-agent/static/remoteEntry.44355425a0862dac7bd1.js.map → jupyter_ext/labextension/static/remoteEntry.d686ab71eb65b5ef8f15.js.map +1 -1
- hdsp_jupyter_extension-2.0.16.data/data/share/jupyter/labextensions/hdsp-agent/static/frontend_styles_index_js.037b3c8e5d6a92b63b16.js.map +0 -1
- hdsp_jupyter_extension-2.0.16.data/data/share/jupyter/labextensions/hdsp-agent/static/lib_index_js.90a86cec4c50b0798fb2.js.map +0 -1
- jupyter_ext/labextension/static/frontend_styles_index_js.037b3c8e5d6a92b63b16.js.map +0 -1
- jupyter_ext/labextension/static/lib_index_js.90a86cec4c50b0798fb2.js.map +0 -1
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/etc/jupyter/jupyter_server_config.d/hdsp_jupyter_extension.json +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/install.json +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/node_modules_emotion_use-insertion-effect-with-fallbacks_dist_emotion-use-insertion-effect-wi-3ba6b80.c095373419d05e6f141a.js +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/node_modules_emotion_use-insertion-effect-with-fallbacks_dist_emotion-use-insertion-effect-wi-3ba6b80.c095373419d05e6f141a.js.map +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/node_modules_emotion_use-insertion-effect-with-fallbacks_dist_emotion-use-insertion-effect-wi-3ba6b81.61e75fb98ecff46cf836.js +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/node_modules_emotion_use-insertion-effect-with-fallbacks_dist_emotion-use-insertion-effect-wi-3ba6b81.61e75fb98ecff46cf836.js.map +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/style.js +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_babel_runtime_helpers_esm_extends_js-node_modules_emotion_serialize_dist-051195.e2553aab0c3963b83dd7.js +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_babel_runtime_helpers_esm_extends_js-node_modules_emotion_serialize_dist-051195.e2553aab0c3963b83dd7.js.map +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_emotion_cache_dist_emotion-cache_browser_development_esm_js.24edcc52a1c014a8a5f0.js +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_emotion_cache_dist_emotion-cache_browser_development_esm_js.24edcc52a1c014a8a5f0.js.map +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_emotion_react_dist_emotion-react_browser_development_esm_js.19ecf6babe00caff6b8a.js +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_emotion_react_dist_emotion-react_browser_development_esm_js.19ecf6babe00caff6b8a.js.map +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_emotion_styled_dist_emotion-styled_browser_development_esm_js.661fb5836f4978a7c6e1.js +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_emotion_styled_dist_emotion-styled_browser_development_esm_js.661fb5836f4978a7c6e1.js.map +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_mui_material_index_js.985697e0162d8d088ca2.js +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_mui_material_index_js.985697e0162d8d088ca2.js.map +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_mui_material_utils_createSvgIcon_js.1f5038488cdfd8b3a85d.js +0 -0
- {hdsp_jupyter_extension-2.0.16.data → hdsp_jupyter_extension-2.0.19.data}/data/share/jupyter/labextensions/hdsp-agent/static/vendors-node_modules_mui_material_utils_createSvgIcon_js.1f5038488cdfd8b3a85d.js.map +0 -0
- {hdsp_jupyter_extension-2.0.16.dist-info → hdsp_jupyter_extension-2.0.19.dist-info}/WHEEL +0 -0
- {hdsp_jupyter_extension-2.0.16.dist-info → hdsp_jupyter_extension-2.0.19.dist-info}/licenses/LICENSE +0 -0
agent_server/langchain/agent_prompts/planner_prompt.py (+19 -11)

```diff
@@ -23,24 +23,32 @@ PLANNER_SYSTEM_PROMPT = """당신은 작업을 조율하는 Main Agent입니다.
 | athena_query | SQL 쿼리 생성 | task_tool(agent_name="athena_query", description="매출 테이블 조회 쿼리") |
 | researcher | 정보 검색 | task_tool(agent_name="researcher", description="관련 문서 검색") |
 
-## Step 3: 결과
+## Step 3: 결과 실행/적용 (필수!)
 **task_tool을 호출 했다면, 호출 후 반드시 결과를 처리해야 함:**
 
-| 서브에이전트 | 처리 방법 | 예시 |
-
-| python_developer |
-
-
+| 서브에이전트 | 작업 유형 | 처리 방법 | 예시 |
+|-------------|----------|----------|------|
+| python_developer | 코드 실행 (데이터 분석, 시각화) | jupyter_cell_tool | jupyter_cell_tool(code=반환된_코드) |
+| python_developer | **파일 생성/수정** | **write_file_tool 또는 multiedit_file_tool** | write_file_tool(path="script.js", content=반환된_코드) |
+| athena_query | SQL 표시 | markdown_tool | markdown_tool(content="```sql\n반환된_쿼리\n```") |
+| researcher | 텍스트 요약 | 직접 응답 | - |
 
-
+**🔴 중요: 코드 저장 도구 선택**
+- **파일 생성/수정 요청** → `write_file_tool` 또는 `multiedit_file_tool` 사용
+- **코드 실행 요청** (데이터 분석, 차트 등) → `jupyter_cell_tool` 사용
+- **❌ markdown_tool은 코드 저장용이 아님!** (표시 전용)
+
+**중요**: task_tool 결과를 받은 후 바로 write_todos로 완료 처리하지 말고, 반드시 위 도구로 결과를 먼저 적용!
 
 # write_todos 규칙 [필수]
 - 한국어로 작성
 - **🔴 기존 todo 절대 삭제 금지**: 전체 리스트를 항상 포함하고 status만 변경
-
-
-
-
+- **🔴 상태 전환 순서 필수**: pending → in_progress → completed (건너뛰기 금지!)
+- **🔴 초기 생성 규칙**: 첫 write_todos 호출 시 첫 번째 todo만 in_progress, 나머지는 모두 pending
+- 올바른 초기 예: [{"content": "작업1", "status": "in_progress"}, {"content": "작업2", "status": "pending"}, {"content": "작업 요약 및 다음 단계 제시", "status": "pending"}]
+- 잘못된 초기 예: [{"content": "작업1", "status": "completed"}, ...] ← 실제 작업 없이 completed 금지!
+- **🔴 completed 전환 조건**: 실제 도구(task_tool, jupyter_cell_tool 등)로 작업 수행 후에만 completed로 변경
+- in_progress 상태는 **동시에 1개만** 허용 (completed, pending todo는 삭제하지 않고 모두 유지)
 - content에 도구(tool)명 언급 금지
 - **[필수] 마지막 todo는 반드시 "작업 요약 및 다음 단계 제시"**
 
```
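The updated planner table maps each subagent result to the tool that should apply it. As an illustrative sketch only (the dispatch function and task-type keys below are hypothetical; the tool names come from the prompt table), the routing can be expressed as a lookup:

```python
# Hypothetical dispatch table mirroring the planner prompt's routing rules.
# Tool names (jupyter_cell_tool, write_file_tool, markdown_tool) are from the
# prompt; the (agent, task_type) keys are invented for this sketch.
ROUTING = {
    ("python_developer", "execute"): "jupyter_cell_tool",
    ("python_developer", "file_edit"): "write_file_tool",  # or multiedit_file_tool
    ("athena_query", "display_sql"): "markdown_tool",
    ("researcher", "summarize"): None,  # respond directly, no tool call
}

def route_subagent_result(agent_name: str, task_type: str):
    """Return the tool that should apply a subagent's result, per the prompt table."""
    return ROUTING.get((agent_name, task_type))

print(route_subagent_result("python_developer", "file_edit"))
```

The point of the 2.0.19 change is visible in the two `python_developer` rows: the same subagent now routes to different tools depending on whether the user asked for execution or for a file edit, and `markdown_tool` is explicitly display-only.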
agent_server/langchain/custom_middleware.py (+114 -65)

```diff
@@ -444,63 +444,78 @@ def create_handle_empty_response_middleware(wrap_model_call):
         )
 
         if has_summary_pattern:
-            #
-
-
-
-
-
-
-
-
-
-
-
-
+            # Check if pending todos exist - if so, don't force complete
+            current_todos = request.state.get("todos", [])
+            pending_todos = [
+                t for t in current_todos
+                if isinstance(t, dict) and t.get("status") == "pending"
+            ]
+            if pending_todos:
+                logger.warning(
+                    "Summary JSON detected but pending todos remain - not forcing completion: %s",
+                    [t.get("content", "")[:30] for t in pending_todos],
+                )
+                # Don't synthesize completion, return response as-is
+                # Let LLM continue working on pending todos
+            else:
+                # No pending todos, safe to synthesize completion
+                # Try to extract and repair summary JSON from mixed content
+                try:
+                    # Try to find JSON object containing summary
+                    import re
+                    json_match = re.search(r'\{[^{}]*"summary"[^{}]*"next_items"[^{}]*\}', content, re.DOTALL)
+                    if json_match:
+                        repaired_summary = repair_json(
+                            json_match.group(), return_objects=True
+                        )
+                    else:
+                        repaired_summary = repair_json(
+                            content, return_objects=True
+                        )
 
-
-
-
-
-
-
-
-
-                    )
-                    logger.info(
-                        "Detected and repaired summary JSON in content (pattern-based detection)"
-                    )
-                    # Create message with repaired content
-                    repaired_response_message = AIMessage(
-                        content=repaired_content,
-                        tool_calls=getattr(
-                            response_message, "tool_calls", []
+                    if (
+                        isinstance(repaired_summary, dict)
+                        and "summary" in repaired_summary
+                        and "next_items" in repaired_summary
+                    ):
+                        # Create new message with repaired JSON content
+                        repaired_content = json.dumps(
+                            repaired_summary, ensure_ascii=False
                         )
-
-
-
-
-                        repaired_response_message
-
-
-
-
-
-
-
-
+                        logger.info(
+                            "Detected and repaired summary JSON in content (pattern-based detection)"
+                        )
+                        # Create message with repaired content
+                        repaired_response_message = AIMessage(
+                            content=repaired_content,
+                            tool_calls=getattr(
+                                response_message, "tool_calls", []
+                            )
+                            or [],
+                        )
+                        synthetic_message = _create_synthetic_completion(
+                            request,
+                            repaired_response_message,
+                            has_content=True,
+                        )
+                        response = _replace_ai_message_in_response(
+                            response, synthetic_message
+                        )
+                        return response
+                except Exception as e:
+                    logger.debug(f"Failed to extract summary JSON from mixed content: {e}")
 
-
-
-
-
-
-
-
-
-
-
-
+                # Fallback: accept as-is if repair failed but looks like summary
+                logger.info(
+                    "Detected summary JSON pattern in content - accepting and synthesizing write_todos"
+                )
+                synthetic_message = _create_synthetic_completion(
+                    request, response_message, has_content=True
+                )
+                response = _replace_ai_message_in_response(
+                    response, synthetic_message
+                )
+                return response
 
         # Legacy: Also check if current todo is a summary todo (backward compatibility)
         todos = request.state.get("todos", [])
@@ -1009,17 +1024,51 @@ def create_normalize_tool_args_middleware(wrap_model_call, tools=None):
             else:
                 found_first = True
 
-        #
-        #
-
-
-
-
-
-
-
-
-
+        # Validate: "작업 요약 및 다음 단계 제시" cannot be in_progress if pending todos exist
+        # This prevents LLM from skipping pending tasks
+        summary_keywords = ["작업 요약", "다음 단계 제시"]
+        for i, todo in enumerate(todos):
+            if not isinstance(todo, dict):
+                continue
+            content = todo.get("content", "")
+            is_summary_todo = any(kw in content for kw in summary_keywords)
+
+            if is_summary_todo and todo.get("status") == "in_progress":
+                # Check if there are pending todos before this one
+                pending_before = [
+                    t for t in todos[:i]
+                    if isinstance(t, dict) and t.get("status") == "pending"
+                ]
+                if pending_before:
+                    # Revert summary todo to pending
+                    todo["status"] = "pending"
+                    # Set the first pending todo to in_progress
+                    for t in todos:
+                        if isinstance(t, dict) and t.get("status") == "pending":
+                            t["status"] = "in_progress"
+                            logger.warning(
+                                "Reverted summary todo to pending, set '%s' to in_progress (pending todos exist)",
+                                t.get("content", "")[:30],
+                            )
+                            break
+                break
+
+        # Clean AIMessage content when write_todos is called
+        # Remove redundant todos JSON from content (keep summary JSON)
+        if tool_name == "write_todos":
+            msg_content = getattr(msg, "content", "") or ""
+            if msg_content and '"todos"' in msg_content:
+                # Keep content only if it's summary JSON
+                is_summary_json = (
+                    '"summary"' in msg_content
+                    and '"next_items"' in msg_content
+                )
+                if not is_summary_json:
+                    # Clear redundant todos content
+                    msg.content = ""
+                    logger.info(
+                        "Cleared redundant todos JSON from AIMessage content (write_todos tool_call exists)"
+                    )
 
         return response
 
```

(Indentation in the reconstructed hunks is inferred; removed lines whose content was not preserved by the diff viewer are shown as bare `-` lines.)
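The middleware's new extraction step can be sketched with the standard library alone. This is an illustrative stand-in, not the package's code: the real middleware falls back to `json_repair.repair_json` for malformed JSON, which plain `json.loads` replaces here, and the regex (taken verbatim from the diff) assumes the summary object contains no nested braces:

```python
import json
import re

# Regex from the middleware: find a {"summary": ..., "next_items": ...} object
# embedded in mixed LLM output. Note [^{}]* means nested braces inside string
# values would defeat the match.
SUMMARY_RE = re.compile(r'\{[^{}]*"summary"[^{}]*"next_items"[^{}]*\}', re.DOTALL)

def extract_summary(content: str):
    """Return the embedded summary dict, or None if absent or unparseable."""
    match = SUMMARY_RE.search(content)
    if not match:
        return None
    try:
        obj = json.loads(match.group())  # the middleware would try repair_json here
    except json.JSONDecodeError:
        return None
    if isinstance(obj, dict) and "summary" in obj and "next_items" in obj:
        return obj
    return None

mixed = 'Done. {"summary": "분석 완료", "next_items": ["시각화"]} thanks'
print(extract_summary(mixed))
```

The surrounding pending-todos guard is the key behavioral change in 2.0.19: even when this extraction succeeds, completion is not synthesized while any todo is still `pending`.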
agent_server/langchain/llm_factory.py (+2 -5)

```diff
@@ -97,15 +97,14 @@ def _create_vllm_llm(llm_config: Dict[str, Any], callbacks):
     endpoint = vllm_config.get("endpoint", "http://localhost:8000/v1")
     model = vllm_config.get("model", "default")
     api_key = vllm_config.get("apiKey", "dummy")
-    use_responses_api = vllm_config.get("useResponsesApi", False)
 
-    logger.info(f"Creating vLLM LLM with model: {model}, endpoint: {endpoint}
+    logger.info(f"Creating vLLM LLM with model: {model}, endpoint: {endpoint}")
 
     return ChatOpenAI(
         model=model,
         api_key=api_key,
         base_url=endpoint,  # Use endpoint as-is (no /v1 suffix added)
-
+        streaming=False,  # Agent mode: disable LLM streaming (SSE handled by agent server)
         temperature=0.0,
         max_tokens=32768,
         callbacks=callbacks,
@@ -156,13 +155,11 @@ def create_summarization_llm(llm_config: Dict[str, Any]):
         endpoint = vllm_config.get("endpoint", "http://localhost:8000/v1")
         model = vllm_config.get("model", "default")
         api_key = vllm_config.get("apiKey", "dummy")
-        use_responses_api = vllm_config.get("useResponsesApi", False)
 
         return ChatOpenAI(
             model=model,
             api_key=api_key,
             base_url=endpoint,  # Use endpoint as-is
-            use_responses_api=use_responses_api,
             temperature=0.0,
         )
     except Exception as e:
```
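The net effect on the vLLM client configuration can be shown without importing `langchain_openai`. This sketch (the `build_vllm_kwargs` helper is hypothetical) mirrors the keyword arguments `_create_vllm_llm` passes to `ChatOpenAI` as of 2.0.19, with `useResponsesApi` dropped and streaming pinned off:

```python
# Hypothetical helper mirroring the ChatOpenAI(...) call in _create_vllm_llm
# after this diff: no use_responses_api, streaming disabled.
def build_vllm_kwargs(vllm_config: dict) -> dict:
    return {
        "model": vllm_config.get("model", "default"),
        "api_key": vllm_config.get("apiKey", "dummy"),
        "base_url": vllm_config.get("endpoint", "http://localhost:8000/v1"),
        "streaming": False,  # SSE is handled by the agent server, not the LLM client
        "temperature": 0.0,
        "max_tokens": 32768,
    }

kwargs = build_vllm_kwargs({"model": "qwen", "endpoint": "http://vllm:8000/v1"})
print(kwargs["streaming"], "use_responses_api" in kwargs)
```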
agent_server/langchain/prompts.py (+11 -7)

```diff
@@ -19,14 +19,15 @@ DEFAULT_SYSTEM_PROMPT = """You are an expert Python data scientist and Jupyter n
 # write_todos 규칙 [필수]
 - 한국어로 작성
 - **🔴 기존 todo 절대 삭제 금지**: 전체 리스트를 항상 포함하고 status만 변경
-
-
-
-
+- **🔴 상태 전환 순서 필수**: pending → in_progress → completed (건너뛰기 금지!)
+- **🔴 초기 생성 규칙**: 첫 write_todos 호출 시 첫 번째 todo만 in_progress, 나머지는 모두 pending
+- 올바른 초기 예: [{"content": "작업1", "status": "in_progress"}, {"content": "작업2", "status": "pending"}, {"content": "작업 요약 및 다음 단계 제시", "status": "pending"}]
+- 잘못된 초기 예: [{"content": "작업1", "status": "completed"}, ...] ← 실제 작업 없이 completed 금지!
+- **🔴 completed 전환 조건**: 실제 도구로 작업 수행 후에만 completed로 변경
+- in_progress는 **동시에 1개만** 유지
 - **[필수] 마지막 todo는 반드시 "작업 요약 및 다음 단계 제시"로 생성**
 - **🔴 [실행 순서 필수]**: "작업 요약 및 다음 단계 제시"는 **반드시 가장 마지막에 실행**
 - 다른 모든 todo가 completed 상태가 된 후에만 이 todo를 in_progress로 변경
-- 비슷한 이름의 다른 작업(보고서 검토, 결과 정리 등)과 혼동 금지
 - **[중요] "작업 요약 및 다음 단계 제시"는 summary JSON 출력 후에만 completed 표시**
 
 # 모든 작업 완료 후 [필수]
@@ -85,8 +86,11 @@ TODO_LIST_TOOL_DESCRIPTION = """Todo 리스트 관리 도구.
 - 진행 상황 추적이 필요할 때
 
 규칙:
-
-
+- **🔴 기존 todo 삭제 금지**: status만 변경하고 전체 리스트 유지
+- **🔴 상태 전환 순서 필수**: pending → in_progress → completed (건너뛰기 금지!)
+- **🔴 초기 생성**: 첫 호출 시 첫 번째만 in_progress, 나머지는 pending
+- **🔴 completed 조건**: 실제 도구로 작업 수행 후에만 completed로 변경
+- in_progress 상태는 **동시에 1개만** 허용
 - **[필수] 마지막 todo는 반드시 "작업 요약 및 다음 단계 제시"로 생성**
 - **🔴 [실행 순서]**: todo는 반드시 리스트 순서대로 실행하고, "작업 요약 및 다음 단계 제시"는 맨 마지막에 실행
 - 이 "작업 요약 및 다음 단계 제시" todo 에서는 전체 작업 요약과 다음 단계를 제시하는 내용을 JSON 형태로 출력:
```
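The write_todos rules added across both prompts amount to a small state machine: transitions go pending → in_progress → completed, todos are never deleted, and at most one todo is in_progress. A hedged stdlib sketch of a validator for those rules (the `validate_transition` function is illustrative, not part of the package) looks like:

```python
# Legal status transitions per the prompt rules (staying in the same state is allowed).
VALID_NEXT = {
    "pending": {"pending", "in_progress"},
    "in_progress": {"in_progress", "completed"},
    "completed": {"completed"},
}

def validate_transition(old: list, new: list) -> bool:
    """Check one write_todos update against the prompt's rules (illustrative)."""
    if len(old) > len(new):
        return False  # existing todos must never be deleted
    if sum(t["status"] == "in_progress" for t in new) > 1:
        return False  # only one in_progress at a time
    return all(t["status"] in VALID_NEXT[o["status"]]
               for o, t in zip(old, new))

old = [{"content": "작업1", "status": "in_progress"},
       {"content": "작업 요약 및 다음 단계 제시", "status": "pending"}]
good = [{"content": "작업1", "status": "completed"},
        {"content": "작업 요약 및 다음 단계 제시", "status": "in_progress"}]
bad = [{"content": "작업 요약 및 다음 단계 제시", "status": "completed"}]
print(validate_transition(old, good), validate_transition(old, bad))
```

The `bad` case fails on two of the rules at once: it drops a todo and marks the summary todo completed without it ever having been in_progress, which is exactly the skipping behavior the middleware change above guards against at runtime.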
package.json (+2 -2, identical change in both bundled copies)

```diff
@@ -1,6 +1,6 @@
 {
   "name": "hdsp-agent",
-  "version": "2.0.16",
+  "version": "2.0.19",
   "description": "HDSP Agent JupyterLab Extension - Thin client for Agent Server",
   "keywords": [
     "jupyter",
@@ -132,7 +132,7 @@
     }
   },
   "_build": {
-    "load": "static/remoteEntry.44355425a0862dac7bd1.js",
+    "load": "static/remoteEntry.d686ab71eb65b5ef8f15.js",
     "extension": "./extension",
     "style": "./style"
   }
```