bohr-agent-sdk 0.1.102__py3-none-any.whl → 0.1.104__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- bohr_agent_sdk-0.1.104.dist-info/METADATA +293 -0
- {bohr_agent_sdk-0.1.102.dist-info → bohr_agent_sdk-0.1.104.dist-info}/RECORD +10 -10
- dp/agent/cli/cli.py +1 -1
- dp/agent/server/calculation_mcp_server.py +75 -48
- dp/agent/server/executor/dispatcher_executor.py +60 -24
- dp/agent/server/storage/http_storage.py +1 -1
- dp/agent/server/utils.py +23 -115
- bohr_agent_sdk-0.1.102.dist-info/METADATA +0 -228
- {bohr_agent_sdk-0.1.102.dist-info → bohr_agent_sdk-0.1.104.dist-info}/WHEEL +0 -0
- {bohr_agent_sdk-0.1.102.dist-info → bohr_agent_sdk-0.1.104.dist-info}/entry_points.txt +0 -0
- {bohr_agent_sdk-0.1.102.dist-info → bohr_agent_sdk-0.1.104.dist-info}/top_level.txt +0 -0
bohr_agent_sdk-0.1.104.dist-info/METADATA
ADDED
@@ -0,0 +1,293 @@
+Metadata-Version: 2.4
+Name: bohr-agent-sdk
+Version: 0.1.104
+Summary: SDK for scientific agents
+Home-page: https://github.com/dptech-corp/bohr-agent-sdk/
+Author: DP Technology
+Maintainer-email: liupeng <liupeng@dp.tech>, zjgemi <liuxzj@dp.tech>
+License: MIT
+Project-URL: Homepage, https://github.com/dptech-corp/bohr-agent-sdk
+Project-URL: repository, https://github.com/dptech-corp/bohr-agent-sdk
+Project-URL: Bug Reports, https://github.com/dptech-corp/bohr-agent-sdk/issues
+Keywords: agent SDK,AI for science
+Classifier: License :: OSI Approved :: MIT License
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.10
+Classifier: Programming Language :: Python :: 3.11
+Classifier: Programming Language :: Python :: 3.12
+Classifier: Programming Language :: Python :: 3.13
+Requires-Python: >=3.10
+Description-Content-Type: text/markdown
+Requires-Dist: click>=8.0.0
+Requires-Dist: mcp>=1.17.0
+Requires-Dist: python-dotenv>=1.0.0
+Requires-Dist: typing-extensions>=4.8.0
+Requires-Dist: dpdispatcher>=0.6.8
+Requires-Dist: lbg>=1.2.29
+Requires-Dist: jsonpickle>=3.0.3
+Requires-Dist: psutil>=5.9.6
+Requires-Dist: paho-mqtt>=2.1.0
+Requires-Dist: redis>=6.2.0
+Requires-Dist: twine>=6.1.0
+Requires-Dist: build>=1.2.2.post1
+Requires-Dist: cloudpickle==2.2.0
+Requires-Dist: watchdog>=6.0.0
+Requires-Dist: fastapi>=0.116.0
+Requires-Dist: bohrium-open-sdk
+Provides-Extra: device
+Requires-Dist: pywinauto-recorder>=0.1.0; extra == "device"
+Provides-Extra: cloud
+Requires-Dist: paho-mqtt>=1.6.1; extra == "cloud"
+Requires-Dist: redis>=5.0.1; extra == "cloud"
+Requires-Dist: aiohttp>=3.9.1; extra == "cloud"
+Provides-Extra: dev
+Requires-Dist: pytest>=7.4.0; extra == "dev"
+Requires-Dist: pytest-asyncio>=0.23.0; extra == "dev"
+Requires-Dist: pytest-cov>=4.1.0; extra == "dev"
+Requires-Dist: black>=23.11.0; extra == "dev"
+Requires-Dist: isort>=5.12.0; extra == "dev"
+Requires-Dist: mypy>=1.7.0; extra == "dev"
+Requires-Dist: pylint>=3.0.0; extra == "dev"
+Requires-Dist: google-adk; extra == "dev"
+Provides-Extra: docs
+Requires-Dist: sphinx>=7.2.0; extra == "docs"
+Requires-Dist: sphinx-rtd-theme>=1.3.0; extra == "docs"
+Provides-Extra: all
+Requires-Dist: bohr-agent-sdk[bohrium,cloud,dev,device,dispatcher,docs]; extra == "all"
+Dynamic: home-page
+Dynamic: requires-python
+
+# Bohrium Science Agent SDK
+
+[English](README.md) | [简体中文](README_CN.md)
+
+**Transform Scientific Software into AI Assistants — 3 Steps to Intelligent Transformation**
+
+## 📖 Introduction
+
+The Bohrium platform introduces the **bohr-agent-sdk Scientific Agent Development Kit**, enabling AI systems to truly execute professional scientific tasks and helping developers quickly build their own specialized research agents. Through a three-step process — **Invoking MCP Tools, Orchestrating Agent Workflows, and Deploying Services** — any scientific software can be rapidly transformed into an AI assistant.
+
+## ✨ Core Features
+
+### 🎯 Intelligent Task Management: Simplified Development, Standardized Output
+With a decorator pattern, just a few annotations can quickly transform scientific computing programs into MCP standard services. Built-in application templates turn scattered research code into standardized, reusable intelligent components.
+
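The "decorator pattern" above is the central technical claim of this release's README. As an illustrative sketch only (not part of the packaged METADATA), here is what that pattern looks like with the upstream FastMCP API that `CalculationMCPServer` wraps in this release (see the calculation_mcp_server.py diff further down); the tool name and parameters are invented for illustration, and the decorator actually exported by bohr-agent-sdk may differ.

```python
# Sketch of the decorator idea using the upstream FastMCP API that
# CalculationMCPServer wraps. Function name and parameters are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp_server = FastMCP("my-science-tools")

@mcp_server.tool()
def relax_structure(structure_path: str, max_steps: int = 100) -> dict:
    """Relax a crystal structure and report the final energy."""
    # ... call the existing scientific code here ...
    return {"energy_eV": -123.4, "steps_used": max_steps}

if __name__ == "__main__":
    mcp_server.run(transport="sse")
```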
+### 🔧 Multi-Backend Framework Support
+Supports mainstream open agent frameworks, including Google ADK, LangGraph, and Camel, providing flexible choices for developers familiar with different technology stacks.
+
+### ☁️ Flexible Deployment: Local Development, Cloud Production
+Dual-mode architecture supports seamless transition between development and production. Local environments enable rapid iteration and feature validation, while Bohrium's cloud GPU clusters handle production-grade computing tasks. The SDK automatically manages the complete workflow of task scheduling, status monitoring, and result collection, with built-in file transfer mechanisms for handling large-scale data uploads and downloads. Developers focus on core algorithm implementation while infrastructure management is fully automated.
+
+### 🖼️ Visual Interactive Interface: Professional Presentation, Intuitive Operation
+Based on the modern React framework, deploy fully-featured web applications with one click. Built-in 3D molecular visualization engine supports multiple structure formats and rendering modes for interactive molecular structure display. Real-time data synchronization ensures instant computing status updates, while multi-session management supports parallel task processing. Integrated with enterprise-grade features including file management, project switching, and permission control. Transform command-line tools into professional visual applications, significantly enhancing user experience and tool usability.
+
+## 🖼️ Interface Showcase
+
+### Scientific Computing Master Console
+<div align="center">
+
+[screenshot: scientific computing console]
+
+*Powerful scientific computing task management and monitoring platform*
+
+</div>
+
+### Visual Interactive Interface
+<div align="center">
+
+[screenshot: web application interface]
+
+*Modern web application interface providing intuitive user experience*
+
+</div>
+
+## 🚀 Quick Start
+
+### Installation
+
+```bash
+pip install bohr-agent-sdk -i https://pypi.org/simple --upgrade
+```
+
+### Build Your Research Agent in 3 Steps
+
+#### Step 1: Get Project Templates
+
+```bash
+# Get calculation project template
+dp-agent fetch scaffolding --type=calculation
+
+# Get device control project template
+dp-agent fetch scaffolding --type=device
+
+# Get configuration file
+dp-agent fetch config
+```
+
+#### Step 2: Develop Your Agent
+
+**Lab Mode Development Example**
+
+```python
+from typing import Dict, TypedDict
+from dp.agent.device.device import Device, action, BaseParams, SuccessResult
+
+class TakePictureParams(BaseParams):
+    """Picture taking parameters"""
+    horizontal_width: str  # Image horizontal width
+
+class PictureData(TypedDict):
+    """Picture data structure"""
+    image_id: str
+
+class PictureResult(SuccessResult):
+    """Picture taking result"""
+    data: PictureData
+
+class MyDevice(Device):
+    """Custom device class"""
+    device_name = "my_device"
+
+    @action("take_picture")
+    def take_picture(self, params: TakePictureParams) -> PictureResult:
+        """
+        Execute picture taking action
+
+        Through the @action decorator, automatically register this method as an MCP standard service
+        """
+        hw = params.get("horizontal_width", "default")
+        # Execute actual device control logic
+        return PictureResult(
+            message=f"Picture taken with {self.device_name}",
+            data={"image_id": "image_123"}
+        )
+```
+
+**Cloud Mode Development Example**
+
+```python
+"""
+MCP protocol-based cloud device control example
+"""
+import signal
+import sys
+from dp.agent.cloud import mcp, get_mqtt_cloud_instance
+from dp.agent.device.device import TescanDevice, register_mcp_tools
+
+def signal_handler(sig, frame):
+    """Graceful shutdown handling"""
+    print("Shutting down...")
+    get_mqtt_cloud_instance().stop()
+    sys.exit(0)
+
+def main():
+    """Start cloud services"""
+    print("Starting Tescan Device Twin Cloud Services...")
+
+    # Register signal handler
+    signal.signal(signal.SIGINT, signal_handler)
+
+    # Create device instance
+    device = TescanDevice(mcp, device)
+
+    # Automatically register device tools to MCP server
+    # register_mcp_tools implements automatic registration through Python introspection
+    register_mcp_tools(device)
+
+    # Start MCP server
+    print("Starting MCP server...")
+    mcp.run(transport="sse")
+
+if __name__ == "__main__":
+    main()
+```
+
+#### Step 3: Run and Deploy
+
+```bash
+# Local lab environment
+dp-agent run tool device
+
+# Cloud computing environment
+dp-agent run tool cloud
+
+# Scientific calculation mode
+dp-agent run tool calculation
+
+# Start agent (with Web UI)
+dp-agent run agent --config
+
+# Debug mode
+dp-agent run debug
+```
+
+## 🏗️ Project Structure
+
+After running `dp-agent fetch scaffolding`, you'll get a standardized project structure:
+
+```
+your-project/
+├── lab/                     # Lab mode
+│   ├── __init__.py
+│   └── tescan_device.py     # Device control implementation
+├── cloud/                   # Cloud mode
+│   ├── __init__.py
+│   └── mcp_server.py        # MCP service implementation
+├── calculation/             # Calculation mode
+│   └── __init__.py
+├── .env                     # Environment configuration
+└── main.py                  # Main program entry
+```
+
+## ⚙️ Configuration
+
+Configure necessary environment variables in the `.env` file:
+
+```bash
+# MQTT connection configuration
+MQTT_INSTANCE_ID=your_instance_id
+MQTT_ENDPOINT=your_endpoint
+MQTT_DEVICE_ID=your_device_id
+MQTT_GROUP_ID=your_group_id
+MQTT_AK=your_access_key
+MQTT_SK=your_secret_key
+
+# Computing resource configuration
+BOHRIUM_USERNAME=your_username
+BOHRIUM_PASSWORD=your_password
+```
+
+Note: The `dp-agent fetch config` command automatically downloads configuration files and replaces dynamic variables (such as MQTT_DEVICE_ID). For security reasons, this feature is only available in internal network environments.
+
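The variables above are ordinary environment variables, so they can also be read directly in Python. A minimal sketch using python-dotenv, which this METADATA declares as a dependency (`python-dotenv>=1.0.0`); the variable names are taken from the block above:

```python
# Minimal sketch: load .env and read the MQTT settings listed above.
import os
from dotenv import load_dotenv

load_dotenv()  # by default, reads .env from the current working directory

endpoint = os.getenv("MQTT_ENDPOINT")
device_id = os.getenv("MQTT_DEVICE_ID")
if not endpoint or not device_id:
    raise RuntimeError("MQTT_ENDPOINT and MQTT_DEVICE_ID must be set in .env")
```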
+## 🎯 Application Scenarios
+
+- **Materials Science Computing**: Molecular dynamics simulation, first-principles calculations
+- **Bioinformatics Analysis**: Gene sequence analysis, protein structure prediction
+- **Laboratory Equipment Control**: Intelligent control of research equipment such as electron microscopes and X-ray diffractometers
+- **Data Processing Workflows**: Automated data cleaning, analysis, and visualization
+- **Machine Learning Training**: Model training, hyperparameter optimization, result evaluation
+
+## 🔧 Advanced Features
+
+### File Management
+
+```bash
+# Upload files to cloud
+dp-agent artifact upload <path>
+
+# Download cloud files
+dp-agent artifact download <artifact_id>
+```
+
+### Task Monitoring
+
+The SDK provides real-time task status monitoring, supporting:
+- Task queue management
+- Computing resource scheduling
+- Automatic result collection
+- Exception handling and retry mechanisms
+
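For long-running calculation tools, this release registers companion tools named `submit_<tool>`, `query_job_status`, `terminate_job`, and `get_job_results` (see the calculation_mcp_server.py diff below), with job states reported as "Running", "Succeeded", or "Failed" (see the dispatcher_executor.py diff). A sketch of the resulting submit-and-poll loop; `call_tool` stands in for whatever MCP client call the agent framework provides and is not part of the SDK, and the argument shapes are assumptions.

```python
# Illustrative submit-and-poll loop over the companion tools registered by
# CalculationMCPServer in this release. `call_tool` is a placeholder for an
# MCP client invocation; argument shapes are assumptions, not SDK API.
import time

def wait_for_results(call_tool, tool_name, params, poll_seconds=10.0):
    submitted = call_tool("submit_" + tool_name, params)
    job_id = submitted["job_id"]
    while True:
        status = call_tool("query_job_status", {"job_id": job_id})
        if status in ("Succeeded", "Failed"):
            break
        time.sleep(poll_seconds)  # still "Running"
    return call_tool("get_job_results", {"job_id": job_id})
```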
+## 📚 Documentation & Support
+
+- 📖 [Detailed Documentation](https://dptechnology.feishu.cn/wiki/ZSj9wbLJEiwdNek0Iu7cKsFanuW)
+
{bohr_agent_sdk-0.1.102.dist-info → bohr_agent_sdk-0.1.104.dist-info}/RECORD
CHANGED
@@ -9,7 +9,7 @@ dp/agent/adapter/camel/__init__.py,sha256=RN1NhdmsJyN43fTxTXFld4UKZksjpSV0b2QvFn
 dp/agent/adapter/camel/client/__init__.py,sha256=ld-r0_WsZLFv6yyrmxjWmR8JgnrQzOw4fX0hwVHzciY,93
 dp/agent/adapter/camel/client/calculation_mcp_client.py,sha256=JZZUYYfMgXvHzK4f6IJp-ia33wn3aYZqDSDVa5yKtdc,1860
 dp/agent/cli/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
-dp/agent/cli/cli.py,sha256=
+dp/agent/cli/cli.py,sha256=Xp3Q7gp50ttsidPG1i3bOQ0-_pMN7yILePJ-ZyOogvA,12363
 dp/agent/cli/templates/__init__.py,sha256=h5__iNn8QzUyYpCUORJO9GTd42tga_BFQ3ZjA3QtSCI,15
 dp/agent/cli/templates/main.py.template,sha256=gEv_naKkBKUmVY1aGM_RZPWqXhkYDIrk4eu7uQFpWrA,1986
 dp/agent/cli/templates/calculation/simple.py.template,sha256=AkOrMWZf9YKQuLnl5yRgl2V-fepFuZJ1qDcTc9S5Gj0,395
@@ -60,21 +60,21 @@ dp/agent/device/device/__init__.py,sha256=w7_1S16S1vWUq0RGl0GFgjq2vFkc5oNvy8cQTn
 dp/agent/device/device/device.py,sha256=9ZRIJth-4qMO-i-u_b_cO3d6a4eTbTQjPaxFsV_zEkc,9643
 dp/agent/device/device/types.py,sha256=JuxB-hjf1CjjvfBxCLwRAXVFlYS-nPEdiJpBWLFVCzo,1924
 dp/agent/server/__init__.py,sha256=rckaYd8pbYyB4ENEhgjXKeGMXjdnrgcJpdM1gu5u1Wc,508
-dp/agent/server/calculation_mcp_server.py,sha256=
+dp/agent/server/calculation_mcp_server.py,sha256=iRFOdgTxySMGk7ZaSseNssEp-A7zT5cW1Ym2_MIKnG4,12602
 dp/agent/server/preprocessor.py,sha256=XUWu7QOwo_sIDMYS2b1OTrM33EXEVH_73vk-ju1Ok8A,1264
-dp/agent/server/utils.py,sha256=
+dp/agent/server/utils.py,sha256=ui3lca9EagcGqmYf8BKLsPARIzXxJ3jgN98yuEO3OSQ,1668
 dp/agent/server/executor/__init__.py,sha256=s95M5qKQk39Yi9qaVJZhk_nfj54quSf7EDghR3OCFUA,248
 dp/agent/server/executor/base_executor.py,sha256=EFJBsYVYAvuRbiLAbLOwLTw3h7ScjN025xnSP4uJHrQ,2052
-dp/agent/server/executor/dispatcher_executor.py,sha256=
+dp/agent/server/executor/dispatcher_executor.py,sha256=p2ISxvLUcR1QOPF5BxDLD7AFPFibmw5uKJJ_fL8zecY,10836
 dp/agent/server/executor/local_executor.py,sha256=wYCclNZFkLb3v7KpW1nCnupO8piBES-esYlDAuz86zk,6120
 dp/agent/server/storage/__init__.py,sha256=Sgsyp5hb0_hhIGugAPfQFzBHt_854rS_MuMuE3sn8Gs,389
 dp/agent/server/storage/base_storage.py,sha256=728-oNG6N8isV95gZVnyi4vTznJPJhSjxw9Gl5Y_y5o,2356
 dp/agent/server/storage/bohrium_storage.py,sha256=EsKX4dWWvZTn2TEhZv4zsvihfDK0mmPFecrln-Ytk40,10488
-dp/agent/server/storage/http_storage.py,sha256=
+dp/agent/server/storage/http_storage.py,sha256=KiySq7g9-iJr12XQCKKyJLn8wJoDnSRpQAR5_qPJ1ZU,1471
 dp/agent/server/storage/local_storage.py,sha256=t1wfjByjXew9ws3PuUxWxmZQ0-Wt1a6t4wmj3fW62GI,1352
 dp/agent/server/storage/oss_storage.py,sha256=pgjmi7Gir3Y5wkMDCvU4fvSls15fXT7Ax-h9MYHFPK0,3359
-bohr_agent_sdk-0.1.
-bohr_agent_sdk-0.1.
-bohr_agent_sdk-0.1.
-bohr_agent_sdk-0.1.
-bohr_agent_sdk-0.1.
+bohr_agent_sdk-0.1.104.dist-info/METADATA,sha256=h9BBCLrH_qtMCtfHzoCpk0WIgppExXMvQ8jwnXhWFHg,10226
+bohr_agent_sdk-0.1.104.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+bohr_agent_sdk-0.1.104.dist-info/entry_points.txt,sha256=5n5kneF5IbDQtoQ2WfF-QuBjDtsimJte9Rv9baSGgc0,86
+bohr_agent_sdk-0.1.104.dist-info/top_level.txt,sha256=87xLUDhu_1nQHoGLwlhJ6XlO7OsjILh6i1nX6ljFzDo,3
+bohr_agent_sdk-0.1.104.dist-info/RECORD,,
dp/agent/cli/cli.py
CHANGED
@@ -188,7 +188,7 @@ def calculation():
 def agent(ui, config, port, module, agent_name, dev):
     """Run the science agent with optional UI interface."""
     if not ui:
-
+        click.echo("Starting agent in console mode...")
         click.echo("Console mode not yet implemented.")
         return
 
dp/agent/server/calculation_mcp_server.py
CHANGED
@@ -6,17 +6,23 @@ from copy import deepcopy
 from datetime import datetime
 from pathlib import Path
 from urllib.parse import urlparse
-from typing import Literal, Optional,
+from typing import Any, Literal, Optional, TypedDict
 
+import mcp
 from mcp.server.fastmcp import FastMCP
-from mcp.server.fastmcp.
-
+from mcp.server.fastmcp.utilities.context_injection import (
+    find_context_parameter,
+)
+from mcp.server.fastmcp.utilities.func_metadata import (
+    _get_typed_signature,
+    func_metadata,
+)
 from starlette.responses import JSONResponse
 from starlette.routing import Route
 
 from .executor import executor_dict
 from .storage import storage_dict
-from .utils import get_logger,
+from .utils import get_logger, JobResult, Tool
 logger = get_logger(__name__)
 
 
@@ -133,7 +139,7 @@ def handle_output_artifacts(results, exec_id, storage):
 
 
 def get_job_results(job_id: str, executor: Optional[dict] = None,
-                    storage: Optional[dict] = None) ->
+                    storage: Optional[dict] = None) -> Any:
     """
     Get results of a calculation job
     Args:
@@ -148,63 +154,80 @@ def get_job_results(job_id: str, executor: Optional[dict] = None,
     results, output_artifacts = handle_output_artifacts(
         results, exec_id, storage)
     logger.info("Job %s result is %s" % (job_id, results))
-    return
+    return JobResult(result=results, job_info={
         "output_artifacts": output_artifacts,
     })
 
 
 class CalculationMCPServer:
-    def __init__(self, *args, preprocess_func=None,
+    def __init__(self, *args, preprocess_func=None, fastmcp_mode=False,
+                 **kwargs):
+        """
+        Args:
+            preprocess_func: The preprocess function for all tools
+            fastmcp_mode: compatible for fastmcp.FastMCP
+        """
         self.preprocess_func = preprocess_func
+        self.fastmcp_mode = fastmcp_mode
         self.mcp = FastMCP(*args, **kwargs)
 
-    def add_patched_tool(self, fn, new_fn, name, is_async=False, doc=None
-    [... previous implementation elided in this view ...]
+    def add_patched_tool(self, fn, new_fn, name, is_async=False, doc=None,
+                         override_return_annotation=False):
+        """patch the metadata of the tool"""
+        context_kwarg = find_context_parameter(fn)
+
+        def _get_typed_signature_patched(call):
+            """patch parameters"""
+            typed_signature = _get_typed_signature(call)
+            new_typed_signature = _get_typed_signature(new_fn)
+            parameters = []
+            for param in typed_signature.parameters.values():
+                if param.annotation is Path:
+                    parameters.append(inspect.Parameter(
+                        name=param.name, default=param.default,
+                        annotation=str, kind=param.kind))
+                elif param.annotation is Optional[Path]:
+                    parameters.append(inspect.Parameter(
+                        name=param.name, default=param.default,
+                        annotation=Optional[str], kind=param.kind))
+                else:
+                    parameters.append(param)
+            for param in new_typed_signature.parameters.values():
+                if param.name != "kwargs":
+                    parameters.append(param)
+            return inspect.Signature(
+                parameters,
+                return_annotation=(new_typed_signature.return_annotation
+                                   if override_return_annotation
+                                   else typed_signature.return_annotation))
+
+        # Due to the frequent changes of MCP, we use a patching style here
+        mcp.server.fastmcp.utilities.func_metadata._get_typed_signature = \
+            _get_typed_signature_patched
+        func_arg_metadata = func_metadata(
+            fn,
             skip_names=[context_kwarg] if context_kwarg is not None else [],
-
+            structured_output=None,
         )
-
+        mcp.server.fastmcp.utilities.func_metadata._get_typed_signature = \
+            _get_typed_signature
+        if self.fastmcp_mode and func_arg_metadata.wrap_output:
+            # Only simulate behavior of fastmcp for output_schema
+            func_arg_metadata.output_schema["x-fastmcp-wrap-result"] = True
+        parameters = func_arg_metadata.arg_model.model_json_schema(
+            by_alias=True)
         tool = Tool(
             fn=new_fn,
             name=name,
             description=doc or fn.__doc__,
-            parameters=
+            parameters=parameters,
             fn_metadata=func_arg_metadata,
             is_async=is_async,
             context_kwarg=context_kwarg,
-            annotations=None,
         )
         self.mcp._tool_manager._tools[name] = tool
 
     def add_tool(self, fn, *args, **kwargs):
-        self.mcp.add_tool(fn, *args, **kwargs)
         tool = Tool.from_function(fn, *args, **kwargs)
         self.mcp._tool_manager._tools[tool.name] = tool
         return tool
@@ -215,7 +238,9 @@ class CalculationMCPServer:
 
         def decorator(fn: Callable) -> Callable:
             def submit_job(executor: Optional[dict] = None,
-                           storage: Optional[dict] = None,
+                           storage: Optional[dict] = None,
+                           **kwargs) -> TypedDict("results", {
+                               "job_id": str, "extra_info": Optional[dict]}):
                 trace_id = datetime.today().strftime('%Y-%m-%d-%H:%M:%S.%f')
                 logger.info("Job processing (Trace ID: %s)" % trace_id)
                 with set_directory(trace_id):
@@ -233,7 +258,7 @@
                     "job_id": job_id,
                     "extra_info": res.get("extra_info"),
                 }
-                return
+                return JobResult(result=result, job_info={
                     "trace_id": trace_id,
                     "executor_type": executor_type,
                     "job_id": job_id,
@@ -263,7 +288,7 @@
                 logger.info("Job %s result is %s" % (job_id, results))
                 await context.log(level="info", message="Job %s result is"
                                   " %s" % (job_id, results))
-                return
+                return JobResult(result=results, job_info={
                     "trace_id": trace_id,
                     "executor_type": executor_type,
                     "job_id": job_id,
@@ -273,8 +298,9 @@
                 })
 
             self.add_patched_tool(fn, run_job, fn.__name__, is_async=True)
-            self.add_patched_tool(
-
+            self.add_patched_tool(
+                fn, submit_job, "submit_" + fn.__name__, doc="Submit a job",
+                override_return_annotation=True)
             self.add_tool(query_job_status)
             self.add_tool(terminate_job)
             self.add_tool(get_job_results)
@@ -284,9 +310,10 @@
     def run(self, **kwargs):
        if os.environ.get("DP_AGENT_RUNNING_MODE") in ["1", "true"]:
            return
-
+
+        async def health_check(request):
            return JSONResponse({"status": "ok"})
-
+
        self.mcp._custom_starlette_routes.append(
            Route(
                "/health",
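The new `add_patched_tool` above rewrites a tool's signature by swapping mcp's private `_get_typed_signature` helper only for the duration of the `func_metadata` call and then assigning the original back. A generic sketch of that temporary monkey-patch pattern (not SDK code); wrapping the swap in try/finally additionally guarantees restoration if the patched call raises:

```python
# Generic temporary monkey-patch pattern: swap a module attribute, run one
# call, then restore the original (here with try/finally for safety).
from contextlib import contextmanager

@contextmanager
def temporarily_patched(module, attr_name, replacement):
    original = getattr(module, attr_name)
    setattr(module, attr_name, replacement)
    try:
        yield
    finally:
        setattr(module, attr_name, original)

# Usage sketch (names illustrative):
# with temporarily_patched(func_metadata_module, "_get_typed_signature",
#                          _get_typed_signature_patched):
#     func_arg_metadata = func_metadata(fn, ...)
```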
dp/agent/server/executor/dispatcher_executor.py
CHANGED
@@ -9,6 +9,7 @@ from pathlib import Path
 
 import jsonpickle
 from dpdispatcher import Machine, Resources, Task, Submission
+from dpdispatcher.utils.job_status import JobStatus
 
 from .base_executor import BaseExecutor
 from .... import __path__
@@ -33,6 +34,50 @@ def get_source_code(fn):
     return "".join(pre_lines + source_lines) + "\n"
 
 
+def get_func_def_script(fn):
+    script = ""
+    packages = []
+    fn_name = fn.__name__
+    module_name = fn.__module__
+    module = sys.modules[module_name]
+    if getattr(module, fn_name, None) is not fn:
+        # cannot import from module, maybe a local function
+        import cloudpickle
+        packages.extend(cloudpickle.__path__)
+        script += "import cloudpickle\n"
+        script += "%s = cloudpickle.loads(%s)\n" % \
+            (fn_name, cloudpickle.dumps(fn))
+    elif module_name in ["__main__", "__mp_main__"]:
+        if hasattr(module, "__file__"):
+            name = os.path.splitext(os.path.basename(module.__file__))[0]
+            if getattr(module, "__package__", None):
+                package = module.__package__
+                package_name = package.split('.')[0]
+                module = importlib.import_module(package_name)
+                packages.extend(module.__path__)
+                script += "from %s.%s import %s\n" % (
+                    package, name, fn_name)
+            else:
+                packages.append(module.__file__)
+                script += "from %s import %s\n" % (name, fn_name)
+        else:
+            # cannot get file of __main__, maybe in the interactive mode
+            import cloudpickle
+            packages.extend(cloudpickle.__path__)
+            script += "import cloudpickle\n"
+            script += "%s = cloudpickle.loads(%s)\n" % \
+                (fn_name, cloudpickle.dumps(fn))
+    else:
+        package_name = module_name.split('.')[0]
+        module = importlib.import_module(package_name)
+        if hasattr(module, "__path__"):
+            packages.extend(module.__path__)
+        elif hasattr(module, "__file__"):
+            packages.append(module.__file__)
+        script += "from %s import %s\n" % (module_name, fn_name)
+    return script, packages
+
+
 class DispatcherExecutor(BaseExecutor):
     def __init__(
         self,
@@ -86,24 +131,8 @@ class DispatcherExecutor(BaseExecutor):
     def submit(self, fn, kwargs):
         script = ""
         fn_name = fn.__name__
-
-
-        if module_name in ["__main__", "__mp_main__"]:
-            module = sys.modules[module_name]
-            if hasattr(module, "__file__"):
-                self.python_packages.append(module.__file__)
-                name = os.path.splitext(os.path.basename(module.__file__))[0]
-                import_func_line = "from %s import %s\n" % (name, fn_name)
-            else:
-                script += get_source_code(fn)
-        else:
-            package_name = module_name.split('.')[0]
-            module = importlib.import_module(package_name)
-            if hasattr(module, "__path__"):
-                self.python_packages.extend(module.__path__)
-            elif hasattr(module, "__file__"):
-                self.python_packages.append(module.__file__)
-            import_func_line = "from %s import %s\n" % (module_name, fn_name)
+        func_def_script, packages = get_func_def_script(fn)
+        self.python_packages.extend(packages)
 
         script += "import asyncio, jsonpickle, os\n"
         script += "from pathlib import Path\n\n"
@@ -112,8 +141,9 @@ class DispatcherExecutor(BaseExecutor):
         script += "    kwargs = jsonpickle.loads(r'''%s''')\n" % \
             jsonpickle.dumps(kwargs)
         script += "    try:\n"
-
-
+        for line in func_def_script.splitlines():
+            if line:
+                script += "        " + line + "\n"
         if inspect.iscoroutinefunction(fn):
             script += "        results = asyncio.run(%s(**kwargs))\n" % fn_name
         else:
@@ -198,17 +228,23 @@ class DispatcherExecutor(BaseExecutor):
         submission = Submission.deserialize(
             submission_dict=json.loads(content))
         submission.update_submission_state()
-        if not submission.check_all_finished()
+        if not submission.check_all_finished() and not any(
+                job.job_state in [JobStatus.terminated, JobStatus.unknown,
+                                  JobStatus.unsubmitted]
+                for job in submission.belonging_jobs):
             return "Running"
         try:
             submission.run_submission(exit_on_submit=True)
         except Exception as e:
             logger.error(e)
             return "Failed"
-        if
-
+        if submission.check_all_finished():
+            if os.path.isfile("results.txt"):
+                return "Succeeded"
+            else:
+                return "Failed"
         else:
-            return "
+            return "Running"
 
     def terminate(self, job_id):
         machine = Machine.load_from_dict(self.machine)
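The new `get_func_def_script` factors out how the dispatcher decides to ship a function to the remote job: emit an import line when the function is reachable by name in an importable module, otherwise fall back to cloudpickle (pinned as `cloudpickle==2.2.0` in this release's METADATA). A condensed, stand-alone sketch of that decision; it is not the SDK function itself:

```python
# Condensed sketch of the import-vs-cloudpickle decision made by
# get_func_def_script above. Not SDK code; error handling omitted.
import sys
import cloudpickle

def function_shipping_line(fn):
    module = sys.modules[fn.__module__]
    importable = getattr(module, fn.__name__, None) is fn and \
        fn.__module__ not in ("__main__", "__mp_main__")
    if importable:
        # the remote script can simply import it from the shipped package
        return "from %s import %s\n" % (fn.__module__, fn.__name__)
    # local functions, lambdas, or REPL definitions: ship pickled bytes
    return "import cloudpickle\n%s = cloudpickle.loads(%r)\n" % (
        fn.__name__, cloudpickle.dumps(fn))
```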
dp/agent/server/utils.py
CHANGED
@@ -1,26 +1,10 @@
-import inspect
 import logging
 import traceback
-from
-from typing import Annotated, Any, List, Optional
+from typing import Any
 
-import jsonpickle
 import mcp
-from mcp.
-from
-    ArgModelBase,
-    _get_typed_annotation,
-    FuncMetadata,
-)
-from mcp.server.fastmcp.utilities.types import Image
-from mcp.types import (
-    EmbeddedResource,
-    ImageContent,
-    TextContent,
-)
-from pydantic import Field, WithJsonSchema, create_model
-from pydantic.fields import FieldInfo
-from pydantic_core import PydanticUndefined
+from mcp.types import TextContent
+from pydantic import BaseModel
 
 
 def get_logger(name, level="INFO",
@@ -34,110 +18,34 @@ def get_logger(name, level="INFO",
     return logger
 
 
-
-
-
-    skip_names: Sequence[str] = (),
-    globalns: dict = {},
-) -> FuncMetadata:
-    dynamic_pydantic_model_params: dict[str, Any] = {}
-    for param in parameters:
-        if param.name.startswith("_"):
-            raise InvalidSignature(
-                f"Parameter {param.name} of {func_name} cannot start with '_'"
-            )
-        if param.name in skip_names:
-            continue
-        annotation = param.annotation
-
-        # `x: None` / `x: None = None`
-        if annotation is None:
-            annotation = Annotated[
-                None,
-                Field(
-                    default=param.default
-                    if param.default is not inspect.Parameter.empty
-                    else PydanticUndefined
-                ),
-            ]
-
-        # Untyped field
-        if annotation is inspect.Parameter.empty:
-            annotation = Annotated[
-                Any,
-                Field(),
-                # 🤷
-                WithJsonSchema({"title": param.name, "type": "string"}),
-            ]
-
-        field_info = FieldInfo.from_annotated_attribute(
-            _get_typed_annotation(annotation, globalns),
-            param.default
-            if param.default is not inspect.Parameter.empty
-            else PydanticUndefined,
-        )
-        dynamic_pydantic_model_params[param.name] = (
-            field_info.annotation, field_info)
-        continue
-
-    arguments_model = create_model(
-        f"{func_name}Arguments",
-        **dynamic_pydantic_model_params,
-        __base__=ArgModelBase,
-    )
-    resp = FuncMetadata(arg_model=arguments_model)
-    return resp
-
-
-def convert_to_content(
-    result: Any,
-    job_info: Optional[dict] = None,
-) -> Sequence[TextContent | ImageContent | EmbeddedResource]:
-    """Convert a result to a sequence of content objects."""
-    other_contents = []
-    if isinstance(result, Image):
-        other_contents.append(result.to_image_content())
-        result = None
-
-    if isinstance(result, TextContent | ImageContent | EmbeddedResource):
-        other_contents.append(result)
-        result = None
-
-    if isinstance(result, list | tuple):
-        for item in result.copy():
-            if isinstance(item, Image):
-                other_contents.append(item.to_image_content())
-                result.remove(item)
-            elif isinstance(
-                    result, TextContent | ImageContent | EmbeddedResource):
-                other_contents.append(item)
-                result.remove(item)
-
-    if isinstance(result, dict):
-        for key, value in list(result.items()):
-            if isinstance(value, Image):
-                other_contents.append(value.to_image_content())
-                del result[key]
-            elif isinstance(
-                    value, TextContent | ImageContent | EmbeddedResource):
-                other_contents.append(value)
-                del result[key]
-
-    if not isinstance(result, str):
-        result = jsonpickle.dumps(result)
-
-    return [TextContent(type="text", text=result, job_info=job_info)] \
-        + other_contents
+class JobResult(BaseModel):
+    result: Any
+    job_info: dict
 
 
 class Tool(mcp.server.fastmcp.tools.Tool):
     """
     Workaround MCP server cannot print traceback
-
+    Add job info to first unstructured content
     """
     async def run(self, *args, **kwargs):
         try:
-
+            kwargs["convert_result"] = False
+            result = await super().run(*args, **kwargs)
+            if isinstance(result, JobResult):
+                job_info = result.job_info
+                result = self.fn_metadata.convert_result(result.result)
+                if isinstance(result, tuple) and len(result) == 2:
+                    unstructured_content, _ = result
+                else:
+                    unstructured_content = result
+                if len(unstructured_content) == 0:
+                    unstructured_content.append(
+                        TextContent(type="text", text="null"))
+                unstructured_content[0].job_info = job_info
+            else:
+                result = self.fn_metadata.convert_result(result)
+            return result
         except Exception as e:
             traceback.print_exc()
             raise e
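The `JobResult` model introduced above separates what the caller asked for (`result`, which still goes through the normal MCP result conversion) from job bookkeeping (`job_info`, which the patched `Tool.run` attaches to the first unstructured content block). A small usage sketch of that contract:

```python
# Sketch of the JobResult contract: user-facing result vs. job metadata.
from typing import Any
from pydantic import BaseModel

class JobResult(BaseModel):  # mirrors the definition added in utils.py
    result: Any
    job_info: dict

res = JobResult(
    result={"energy_eV": -1.23},
    job_info={"job_id": "42", "executor_type": "dispatcher"},
)
assert res.result["energy_eV"] == -1.23   # converted and returned to the client
assert res.job_info["job_id"] == "42"     # attached as metadata, not payload
```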
bohr_agent_sdk-0.1.102.dist-info/METADATA
REMOVED
@@ -1,228 +0,0 @@
-Metadata-Version: 2.4
-Name: bohr-agent-sdk
-Version: 0.1.102
-Summary: SDK for scientific agents
-Home-page: https://github.com/dptech-corp/bohr-agent-sdk/
-Author: DP Technology
-Maintainer-email: liupeng <liupeng@dp.tech>, zjgemi <liuxzj@dp.tech>
-License: MIT
-Project-URL: Homepage, https://github.com/dptech-corp/bohr-agent-sdk
-Project-URL: repository, https://github.com/dptech-corp/bohr-agent-sdk
-Project-URL: Bug Reports, https://github.com/dptech-corp/bohr-agent-sdk/issues
-Keywords: agent SDK,AI for science
-Classifier: License :: OSI Approved :: MIT License
-Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.10
-Classifier: Programming Language :: Python :: 3.11
-Classifier: Programming Language :: Python :: 3.12
-Classifier: Programming Language :: Python :: 3.13
-Requires-Python: >=3.10
-Description-Content-Type: text/markdown
-Requires-Dist: click>=8.0.0
-Requires-Dist: mcp
-Requires-Dist: python-dotenv>=1.0.0
-Requires-Dist: typing-extensions>=4.8.0
-Requires-Dist: dpdispatcher>=0.6.8
-Requires-Dist: lbg>=1.2.29
-Requires-Dist: jsonpickle>=3.0.3
-Requires-Dist: psutil>=5.9.6
-Requires-Dist: paho-mqtt>=2.1.0
-Requires-Dist: redis>=6.2.0
-Requires-Dist: twine>=6.1.0
-Requires-Dist: build>=1.2.2.post1
-Requires-Dist: watchdog>=6.0.0
-Requires-Dist: fastapi>=0.116.0
-Requires-Dist: bohrium-open-sdk==0.1.5
-Provides-Extra: device
-Requires-Dist: pywinauto-recorder>=0.1.0; extra == "device"
-Provides-Extra: cloud
-Requires-Dist: paho-mqtt>=1.6.1; extra == "cloud"
-Requires-Dist: redis>=5.0.1; extra == "cloud"
-Requires-Dist: aiohttp>=3.9.1; extra == "cloud"
-Provides-Extra: dev
-Requires-Dist: pytest>=7.4.0; extra == "dev"
-Requires-Dist: pytest-asyncio>=0.23.0; extra == "dev"
-Requires-Dist: pytest-cov>=4.1.0; extra == "dev"
-Requires-Dist: black>=23.11.0; extra == "dev"
-Requires-Dist: isort>=5.12.0; extra == "dev"
-Requires-Dist: mypy>=1.7.0; extra == "dev"
-Requires-Dist: pylint>=3.0.0; extra == "dev"
-Requires-Dist: google-adk; extra == "dev"
-Provides-Extra: docs
-Requires-Dist: sphinx>=7.2.0; extra == "docs"
-Requires-Dist: sphinx-rtd-theme>=1.3.0; extra == "docs"
-Provides-Extra: all
-Requires-Dist: bohr-agent-sdk[bohrium,cloud,dev,device,dispatcher,docs]; extra == "all"
-Dynamic: home-page
-Dynamic: requires-python
-
-# Bohrium Science Agent SDK
-
-This is DP Tech's Bohrium Science Agent SDK. It provides the dp-agent command-line tool for managing scientific computing tasks, along with a Python SDK for developing custom scientific computing applications.
-
-## Installation
-
-```bash
-pip install bohr-agent-sdk -i https://pypi.org/simple --upgrade
-```
-
-## CLI Usage
-
-After installation, the following commands are available:
-
-### Fetch Resources
-
-```bash
-# Fetch the base code scaffolding
-dp-agent fetch scaffolding --type=calculation/device
-
-# Fetch the configuration file
-dp-agent fetch config
-```
-
-The `fetch config` command downloads the .env configuration file and substitutes some dynamic variables, such as MQTT_DEVICE_ID.
-Note: for security reasons, this feature is only available on the internal network; other environments must be configured manually.
-
-### Run Commands
-
-```bash
-# Run the lab environment
-dp-agent run tool device
-
-# Run the cloud environment
-dp-agent run tool cloud
-
-# Run the calculation environment
-dp-agent run tool calculation
-
-# Run the agent
-dp-agent run agent --config config.json
-
-# Debug mode
-dp-agent run debug
-```
-
-## SDK Quick Start
-
-The Bohrium Science Agent SDK offers two main development modes: lab mode (Lab) and cloud mode (Cloud).
-
-### Basic Structure
-
-After installing and running `dp-agent fetch scaffolding`, you will get the following basic project structure:
-
-```
-your-project/
-├── lab/                  # Lab-mode code
-│   ├── __init__.py
-│   └── tescan_device.py  # Device control example
-├── cloud/                # Cloud-mode code
-│   └── __init__.py
-└── main.py               # Main program entry
-```
-
-### Lab Mode Development
-
-Lab mode is mainly used to control local laboratory equipment. Below is an example based on a Tescan device:
-
-```python
-from typing import Dict, TypedDict
-from dp.agent.device.device import Device, action, BaseParams, SuccessResult
-
-class TakePictureParams(BaseParams):
-    """Picture-taking parameters"""
-    horizontal_width: str
-
-class PictureData(TypedDict):
-    """Picture data"""
-    image_id: str
-
-class PictureResult(SuccessResult):
-    """Picture-taking result"""
-    data: PictureData
-
-class MyDevice(Device):
-    device_name = "my_device"
-
-    @action("take_picture")
-    def take_picture(self, params: TakePictureParams) -> PictureResult:
-        """Take a picture
-
-        Args:
-            params: picture-taking parameters
-            - horizontal_width: horizontal width of the image
-        """
-        hw = params.get("horizontal_width", "default")
-        return PictureResult(
-            message=f"Picture taken with {self.device_name}",
-            data={"image_id": "image_123"}
-        )
-```
-
-### Cloud Development
-
-Cloud mode is built on MCP (Message Control Protocol) and handles remote device control and task scheduling. register_mcp_tools uses Python introspection and reflection to register device controls automatically, so operation definitions do not have to be reimplemented.
-The following shows how to create a device and register it with the MCP server:
-
-```python
-"""
-Example of using the bohr-agent-sdk cloud functionality.
-"""
-import signal
-import sys
-from dp.agent.cloud import mcp, get_mqtt_cloud_instance
-from dp.agent.device.device import TescanDevice, register_mcp_tools
-
-def signal_handler(sig, frame):
-    """Handle SIGINT signal to gracefully shutdown."""
-    print("Shutting down...")
-    get_mqtt_cloud_instance().stop()
-    sys.exit(0)
-
-def main():
-    """Start the cloud services."""
-    print("Starting Tescan Device Twin Cloud Services...")
-
-    # Register signal handler
-    signal.signal(signal.SIGINT, signal_handler)
-
-    # Create device instance
-    device = TescanDevice(mcp, device)
-
-    # Register device tools
-    register_mcp_tools(device)
-
-    # Start MCP server
-    print("Starting MCP server...")
-    mcp.run(transport="sse")
-
-if __name__ == "__main__":
-    main()
-```
-
-
-### Configuration
-
-Configure the necessary environment variables in the `.env` file:
-
-```
-MQTT_INSTANCE_ID=your_instance_id
-MQTT_ENDPOINT=your_endpoint
-MQTT_DEVICE_ID=your_device_id
-MQTT_GROUP_ID=your_group_id
-MQTT_AK=your_access_key
-MQTT_SK=your_secret_key
-```
-
-### Main Features
-
-- Device control interface (Lab mode)
-  - Device initialization
-  - Command execution
-  - Status monitoring
-
-- Cloud task processing (Cloud mode)
-  - Task queue management
-  - Computing resource scheduling
-  - Result return
-
-For more detailed API documentation, refer to the comments in the code.
The remaining files, {bohr_agent_sdk-0.1.102.dist-info → bohr_agent_sdk-0.1.104.dist-info}/WHEEL, entry_points.txt, and top_level.txt, are without changes between the two versions.