pycityagent 1.1.8__tar.gz → 1.1.10__tar.gz

This diff shows the changes between two publicly released versions of the package, as published to their public registry. It is provided for informational purposes only.
Files changed (59)
  1. {pycityagent-1.1.8 → pycityagent-1.1.10}/PKG-INFO +17 -8
  2. {pycityagent-1.1.8 → pycityagent-1.1.10}/README.md +16 -7
  3. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/ac/__init__.py +1 -1
  4. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/agent_func.py +81 -6
  5. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/sence.py +4 -80
  6. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/urbanllm/urbanllm.py +73 -21
  7. pycityagent-1.1.10/pycityagent/utils.py +148 -0
  8. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent.egg-info/PKG-INFO +17 -8
  9. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent.egg-info/SOURCES.txt +1 -0
  10. {pycityagent-1.1.8 → pycityagent-1.1.10}/LICENSE +0 -0
  11. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/__init__.py +0 -0
  12. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/ac/ac.py +0 -0
  13. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/ac/action.py +0 -0
  14. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/ac/action_stream.py +0 -0
  15. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/ac/citizen_actions/controled.py +0 -0
  16. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/ac/citizen_actions/converse.py +0 -0
  17. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/ac/citizen_actions/idle.py +0 -0
  18. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/ac/citizen_actions/shop.py +0 -0
  19. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/ac/citizen_actions/trip.py +0 -0
  20. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/ac/hub_actions.py +0 -0
  21. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/ac/sim_actions.py +0 -0
  22. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/agent.py +0 -0
  23. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/agent_citizen.py +0 -0
  24. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/__init__.py +0 -0
  25. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/brain.py +0 -0
  26. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/brainfc.py +0 -0
  27. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/memory.py +0 -0
  28. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/persistence/__init__.py +0 -0
  29. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/persistence/social.py +0 -0
  30. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/persistence/spatial.py +0 -0
  31. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/reason/__init__.py +0 -0
  32. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/reason/shop.py +0 -0
  33. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/reason/social.py +0 -0
  34. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/reason/trip.py +0 -0
  35. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/reason/user.py +0 -0
  36. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/retrive/__init__.py +0 -0
  37. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/retrive/social.py +0 -0
  38. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/scheduler.py +0 -0
  39. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/brain/static.py +0 -0
  40. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/cc/__init__.py +0 -0
  41. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/cc/cc.py +0 -0
  42. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/cc/conve.py +0 -0
  43. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/cc/idle.py +0 -0
  44. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/cc/shop.py +0 -0
  45. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/cc/trip.py +0 -0
  46. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/cc/user.py +0 -0
  47. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/hubconnector/__init__.py +0 -0
  48. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/hubconnector/hubconnector.py +0 -0
  49. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/image/__init__.py +0 -0
  50. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/image/image.py +0 -0
  51. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/simulator.py +0 -0
  52. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/st/__init__.py +0 -0
  53. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/st/st.py +0 -0
  54. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent/urbanllm/__init__.py +0 -0
  55. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent.egg-info/dependency_links.txt +0 -0
  56. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent.egg-info/requires.txt +0 -0
  57. {pycityagent-1.1.8 → pycityagent-1.1.10}/pycityagent.egg-info/top_level.txt +0 -0
  58. {pycityagent-1.1.8 → pycityagent-1.1.10}/pyproject.toml +0 -0
  59. {pycityagent-1.1.8 → pycityagent-1.1.10}/setup.cfg +0 -0
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: pycityagent
-Version: 1.1.8
+Version: 1.1.10
 Summary: LLM-based城市模拟器agent构建库
 Author-email: Yuwei Yan <pinkgranite86@gmail.com>
 License: MIT License
@@ -79,9 +79,10 @@ llm_request:
   model: xxx
   (api_base): xxx (this is an optional config; if you use openai and want to use your own backend LLM model, defaults to "https://api.openai.com/v1")
 img_understand_request:
-  request_type: qwen
+  request_type: openai / qwen
   api_key: xxx
-  model: xxx
+  model: xxx ('gpt-4-turbo' if you use openai)
+  (api_base): same as text_request
 img_generate_request:
   request_type: qwen
   api_key: xxx
@@ -110,11 +111,19 @@ apphub_request:
 
 #### LLM_REQUEST
 - The whole CityAgent is driven by LLMs; there are currently three groups of config items: **text_request**, **img_understand_request** and **img_generate_request**
-- By now, we support [**qwen**](https://tongyi.aliyun.com/) and [**openai**](https://openai.com/)
-- `Notice: Our experiments are mainly conducted with qwen. If you prefer openai, you may encounter rough edges; feel free to open an issue.`
-- Get your **api_key** and choose your **model**
-- If you want to use your own backend models, set **api_base** (only available when using **openai**)
-  - default value: "https://api.openai.com/v1"
+- **text_request**
+  - By now, we support [**qwen**](https://tongyi.aliyun.com/) and [**openai**](https://openai.com/)
+  - `Notice: Our experiments are mainly conducted with qwen. If you prefer openai, you may encounter rough edges; feel free to open an issue.`
+  - Get your **api_key** and choose your **model**
+  - If you want to use your own backend models, set **api_base** (only available when using **openai**)
+    - default value: "https://api.openai.com/v1"
+- **img_understand_request**
+  - By now, we support **qwen** and **openai**
+  - If you choose **openai**, the **model** has to be '**gpt-4-turbo**'
+  - If you want to use your own backend models, set **api_base** (only available when using **openai**)
+    - default value: "https://api.openai.com/v1"
+- **img_generate_request**
+  - By now, only **qwen** is supported
 
 #### CITYSIM_REQUEST
 - Most of the configuration options in this part are fixed, such as **simulator.server**, **map_request.mongo_coll** and **route_request.server**
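For concreteness, here is a minimal sketch of a config that satisfies the 1.1.10 `img_understand_request` rules above, loaded into the dict that `LLMConfig` ultimately reads. The key names follow the README excerpt; the loader call, exact nesting, and literal values are illustrative assumptions, not part of the package.

```python
# Hedged sketch: config keys follow the README excerpt above; the loader
# and all literal values (keys, model names) are illustrative assumptions.
import yaml

CONFIG_TEXT = """
llm_request:
  text_request:
    request_type: openai
    api_key: sk-xxx
    model: gpt-4
    api_base: https://api.openai.com/v1    # optional, openai only
  img_understand_request:
    request_type: openai                   # 'openai' or 'qwen' as of 1.1.10
    api_key: sk-xxx
    model: gpt-4-turbo                     # required model when using openai
    api_base: https://api.openai.com/v1    # optional, same rule as text_request
  img_generate_request:
    request_type: qwen                     # only qwen is supported here
    api_key: xxx
"""

config = yaml.safe_load(CONFIG_TEXT)['llm_request']
assert config['img_understand_request']['model'] == 'gpt-4-turbo'
```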
@@ -32,9 +32,10 @@ llm_request:
   model: xxx
   (api_base): xxx (this is an optional config; if you use openai and want to use your own backend LLM model, defaults to "https://api.openai.com/v1")
 img_understand_request:
-  request_type: qwen
+  request_type: openai / qwen
   api_key: xxx
-  model: xxx
+  model: xxx ('gpt-4-turbo' if you use openai)
+  (api_base): same as text_request
 img_generate_request:
   request_type: qwen
   api_key: xxx
@@ -63,11 +64,19 @@ apphub_request:
 
 #### LLM_REQUEST
 - The whole CityAgent is driven by LLMs; there are currently three groups of config items: **text_request**, **img_understand_request** and **img_generate_request**
-- By now, we support [**qwen**](https://tongyi.aliyun.com/) and [**openai**](https://openai.com/)
-- `Notice: Our experiments are mainly conducted with qwen. If you prefer openai, you may encounter rough edges; feel free to open an issue.`
-- Get your **api_key** and choose your **model**
-- If you want to use your own backend models, set **api_base** (only available when using **openai**)
-  - default value: "https://api.openai.com/v1"
+- **text_request**
+  - By now, we support [**qwen**](https://tongyi.aliyun.com/) and [**openai**](https://openai.com/)
+  - `Notice: Our experiments are mainly conducted with qwen. If you prefer openai, you may encounter rough edges; feel free to open an issue.`
+  - Get your **api_key** and choose your **model**
+  - If you want to use your own backend models, set **api_base** (only available when using **openai**)
+    - default value: "https://api.openai.com/v1"
+- **img_understand_request**
+  - By now, we support **qwen** and **openai**
+  - If you choose **openai**, the **model** has to be '**gpt-4-turbo**'
+  - If you want to use your own backend models, set **api_base** (only available when using **openai**)
+    - default value: "https://api.openai.com/v1"
+- **img_generate_request**
+  - By now, only **qwen** is supported
 
 #### CITYSIM_REQUEST
 - Most of the configuration options in this part are fixed, such as **simulator.server**, **map_request.mongo_coll** and **route_request.server**
@@ -5,4 +5,4 @@ from .action import *
 from .hub_actions import *
 from .sim_actions import *
 
-__all__ = [ActionController, Action, HubAction, SimAction, SendUserMessage, SendStreetview, SendPop, PositionUpdate, ShowPath, ShowPosition, SetSchedule, SendAgentMessage]
+__all__ = [ActionController, Action, HubAction, SimAction, SendUserMessage, SendStreetview, SendPop, ShowPath, ShowPosition, SetSchedule, SendAgentMessage]
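One practical consequence of this export change: code that previously picked up `PositionUpdate` via the star-import now needs the defining module. A one-line sketch (the import path is the one `agent_func.py` itself uses in the next hunk):

```python
# PositionUpdate is no longer re-exported by `from pycityagent.ac import *`;
# import it from its defining module instead, as agent_func.py does.
from pycityagent.ac.hub_actions import PositionUpdate
```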
@@ -1,10 +1,12 @@
 """FuncAgent: functional agent and its definition (功能性智能体及其定义)"""
 
+from typing import Union
 from pycityagent.urbanllm import UrbanLLM
 from .urbanllm import UrbanLLM
 from .agent import Agent, AgentType
 from .image.image import Image
 from .ac.hub_actions import PositionUpdate
+from .utils import *
 
 class FuncAgent(Agent):
     """
@@ -63,23 +65,96 @@ class FuncAgent(Agent):
         - x (double)
         - y (double)
         - z (double)
-        - direction (double): 方向角 (heading angle)
+        - direction (double): 朝向-方向角 (heading angle)
         """
 
-    async def init_position_aoi(self, aoi_id:int):
+        self._posUpdate = PositionUpdate(self)
+
+    async def set_position_aoi(self, aoi_id:int):
         """
-        - Initialize the agent's position to the given AOI
-        - Set aoi_position, longlat_position and xy_position from the given AOI
+        - Set the agent's position to the given AOI
+
+        Args:
+        - aoi_id (int): AOI id
         """
         if aoi_id in self._simulator.map.aois:
             aoi = self._simulator.map.aois[aoi_id]
+            self.motion['position'] = {}
             self.motion['position']['aoi_position'] = {'aoi_id': aoi_id}
             self.motion['position']['longlat_position'] = {'longitude': aoi['shapely_lnglat'].centroid.coords[0][0], 'latitude': aoi['shapely_lnglat'].centroid.coords[0][1]}
             x, y = self._simulator.map.lnglat2xy(lng=self.motion['position']['longlat_position']['longitude'],
                                                  lat=self.motion['position']['longlat_position']['latitude'])
             self.motion['position']['xy_position'] = {'x': x, 'y': y}
-            pos = PositionUpdate(self)
-            await pos.Forward(longlat=[self.motion['position']['longlat_position']['longitude'], self.motion['position']['longlat_position']['latitude']])
+            await self._posUpdate.Forward(longlat=[self.motion['position']['longlat_position']['longitude'], self.motion['position']['longlat_position']['latitude']])
+        else:
+            print("Error: wrong aoi id")
+
+    async def set_position_lane(self, lane_id:int, s:float=0.0, direction:Union[float, str]='front'):
+        """
+        - Set the agent's position to the given lane
+
+        Args:
+        - lane_id (int): Lane id
+        - s (float): distance from the start point of the lane, default 0.0 (the start of the lane)
+        - direction (Union[float, str]): the agent's heading, default 'front'
+            - float: used directly as the heading angle (as computed by atan2)
+            - str: one of ['front', 'back'], the direction the agent walks in
+                - driving lanes are one-way, so only 'front' is valid
+                - walking lanes can be traversed both ways, so 'front' or 'back' are valid
+        """
+        if lane_id in self._simulator.map.lanes:
+            lane = self._simulator.map.lanes[lane_id]
+            if s > lane['length']:
+                print("Error: 's' too large")
+            self.motion['position'] = {}
+            self.motion['position']['lane_position'] = {'lane_id': lane_id, 's': s}
+            nodes = lane['center_line']['nodes']
+            x, y = get_xy_in_lane(nodes, s)
+            longlat = self._simulator.map.xy2lnglat(x=x, y=y)
+            self.motion['position']['longlat_position'] = {
+                'longitude': longlat[0],
+                'latitude': longlat[1]
+            }
+            self.motion['position']['xy_position'] = {
+                'x': x,
+                'y': y
+            }
+            if isinstance(direction, float):
+                self.motion['direction'] = direction
+            else:
+                # compute the heading angle from the lane geometry
+                direction_ = get_direction_by_s(nodes, s, direction)
+                self.motion['direction'] = direction_
+            await self._posUpdate.Forward(longlat=[self.motion['position']['longlat_position']['longitude'], self.motion['position']['longlat_position']['latitude']])
+        else:
+            print("Error: wrong lane id")
+
+    async def set_position_poi(self, poi_id:int):
+        """
+        - Set the agent's position to the given POI
+
+        Args:
+        - poi_id (int): Poi id
+        """
+        if poi_id in self._simulator.map.pois:
+            poi = self._simulator.map.pois[poi_id]
+            x = poi['position']['x']
+            y = poi['position']['y']
+            longlat = self._simulator.map.xy2lnglat(x=x, y=y)
+            aoi_id = poi['aoi_id']
+            self.motion['position'] = {}
+            self.motion['position']['aoi_position'] = {'aoi_id': aoi_id}
+            self.motion['position']['longlat_position'] = {
+                'longitude': longlat[0],
+                'latitude': longlat[1]
+            }
+            self.motion['position']['xy_position'] = {
+                'x': x,
+                'y': y
+            }
+            await self._posUpdate.Forward(longlat=[self.motion['position']['longlat_position']['longitude'], self.motion['position']['longlat_position']['latitude']])
+        else:
+            print("Error: wrong poi id")
 
     def Bind(self):
         """
@@ -13,83 +13,7 @@ from citystreetview import (
 )
 from .brainfc import BrainFunction
 from .static import POI_TYPE_DICT, LEVEL_ONE_PRE
-
-def point_on_line_given_distance(start_node, end_node, distance):
-    """
-    Given two points (start_point and end_point) defining a line, and a distance s to travel along the line,
-    return the coordinates of the point reached after traveling s units along the line, starting from start_point.
-
-    Args:
-        start_point (tuple): Tuple of (x, y) representing the starting point on the line.
-        end_point (tuple): Tuple of (x, y) representing the ending point on the line.
-        distance (float): Distance to travel along the line, starting from start_point.
-
-    Returns:
-        tuple: Tuple of (x, y) representing the new point reached after traveling s units along the line.
-    """
-
-    x1, y1 = start_node['x'], start_node['y']
-    x2, y2 = end_node['x'], end_node['y']
-
-    # Calculate the slope m and the y-intercept b of the line
-    if x1 == x2:
-        # Vertical line, distance is only along the y-axis
-        return (x1, y1 + distance if distance >= 0 else y1 - abs(distance))
-    else:
-        m = (y2 - y1) / (x2 - x1)
-        b = y1 - m * x1
-
-    # Calculate the direction vector (dx, dy) along the line
-    dx = (x2 - x1) / math.sqrt((x2 - x1)**2 + (y2 - y1)**2)
-    dy = (y2 - y1) / math.sqrt((x2 - x1)**2 + (y2 - y1)**2)
-
-    # Scale the direction vector by the given distance
-    scaled_dx = dx * distance
-    scaled_dy = dy * distance
-
-    # Calculate the new point's coordinates
-    x = x1 + scaled_dx
-    y = y1 + scaled_dy
-
-    return [x, y]
-
-def get_xy_in_lane(nodes, distance, direction:str='front'):
-    temp_sum = 0
-    remain_s = 0
-    if direction == 'front':
-        # walk in the lane direction
-        if distance == 0:
-            return [nodes[0]['x'], nodes[0]['y']]
-        key_index = 0  # first node
-        for i in range(1, len(nodes)):
-            x1, y1 = nodes[i-1]['x'], nodes[i-1]['y']
-            x2, y2 = nodes[i]['x'], nodes[i]['y']
-            temp_sum += math.sqrt((x2 - x1)**2 + (y2-y1)**2)
-            if temp_sum > distance:
-                remain_s = distance - (temp_sum - math.sqrt((x2 - x1)**2 + (y2-y1)**2))
-                break
-            key_index += 1
-        if remain_s < 0.5:
-            return [nodes[key_index]['x'], nodes[key_index]['y']]
-        longlat = point_on_line_given_distance(nodes[key_index], nodes[key_index+1], remain_s)
-        return longlat
-    else:
-        # walk against the lane direction
-        if distance == 0:
-            return [nodes[-1]['x'], nodes[-1]['y']]
-        key_index = len(nodes)-1  # last node
-        for i in range(len(nodes)-1, 0, -1):
-            x1, y1 = nodes[i]['x'], nodes[i]['y']
-            x2, y2 = nodes[i-1]['x'], nodes[i-1]['y']
-            temp_sum += math.sqrt((x2 - x1)**2 + (y2-y1)**2)
-            if temp_sum > distance:
-                remain_s = distance - (temp_sum - math.sqrt((x2 - x1)**2 + (y2-y1)**2))
-                break
-            key_index -= 1
-        if remain_s < 0.5:
-            return [nodes[key_index]['x'], nodes[key_index]['y']]
-        longlat = point_on_line_given_distance(nodes[key_index], nodes[key_index-1], remain_s)
-        return longlat
+from ..utils import point_on_line_given_distance, get_xy_in_lane
 
 class SencePlug:
     """
@@ -146,10 +70,10 @@ class Sence(BrainFunction):
         SencePlug Buffer: used to store the SencePlug content
         """
 
-        self.enable_streeview = False
+        self.enable_streeview = True
         """
-        街景感知功能接口, 默认为False
-        Interface of the streetview function, default: False
+        街景感知功能接口, 默认为True
+        Interface of the streetview function, default: True
         """
 
         self._lane_type_mapping = {1: 'driving', 2: 'walking'}
@@ -3,12 +3,16 @@
 from openai import OpenAI
 from http import HTTPStatus
 import dashscope
-from urllib.parse import urlparse, unquote
-from pathlib import PurePosixPath
 import requests
 from dashscope import ImageSynthesis
 from PIL import Image
 from io import BytesIO
+from typing import Union
+import base64
+
+def encode_image(image_path):
+    with open(image_path, "rb") as image_file:
+        return base64.b64encode(image_file.read()).decode('utf-8')
 
 class LLMConfig:
     """
@@ -79,19 +83,19 @@ class UrbanLLM:
             if response.status_code == HTTPStatus.OK:
                 return response.output.choices[0]['message']['content']
             else:
-                return "Error: {}".format(response.status_code)
+                return "Error: {}, {}".format(response.status_code, response.message)
         else:
             print("ERROR: Wrong Config")
             return "wrong config"
 
-    def img_understand(self, img_path:str, prompt:str=None) -> str:
+    def img_understand(self, img_path:Union[str, list[str]], prompt:str=None) -> str:
         """
         图像理解
         Image understanding
 
         Args:
-        - img_path: 目标图像的路径. The path of the selected image
-        - prompt: 理解提示词 - 例如理解方向. The understanding prompt, e.g. asking about directions
+        - img_path (Union[str, list[str]]): 目标图像的路径, 既可以是一个路径也可以是包含多张图片路径的list. A single image path or a list of image paths
+        - prompt (str): 理解提示词 - 例如理解方向. The understanding prompt, e.g. asking about directions
 
         Returns:
         - (str): the understanding content
@@ -99,22 +103,70 @@ class UrbanLLM:
         ppt = "如何理解这幅图像?"
         if prompt != None:
             ppt = prompt
-        dialog = [{
-            'role': 'user',
-            'content': [
-                {'image': 'file://' + img_path},
-                {'text': ppt}
-            ]
-        }]
-        response = dashscope.MultiModalConversation.call(
-            model=self.config.image_u['model'],
-            api_key=self.config.image_u['api_key'],
-            messages=dialog
-        )
-        if response.status_code == HTTPStatus.OK:
-            return response.output.choices[0]['message']['content']
+        if self.config.image_u['request_type'] == 'openai':
+            if 'api_base' in self.config.image_u.keys():
+                api_base = self.config.image_u['api_base']
+            else:
+                api_base = None
+            client = OpenAI(
+                api_key=self.config.text['api_key'],
+                base_url=api_base,
+            )
+            content = []
+            content.append({'type': 'text', 'text': ppt})
+            if isinstance(img_path, str):
+                base64_image = encode_image(img_path)
+                content.append({
+                    'type': 'image_url',
+                    'image_url': {
+                        'url': f"data:image/jpeg;base64,{base64_image}"
+                    }
+                })
+            elif isinstance(img_path, list) and all(isinstance(item, str) for item in img_path):
+                for item in img_path:
+                    base64_image = encode_image(item)
+                    content.append({
+                        'type': 'image_url',
+                        'image_url': {
+                            'url': f"data:image/jpeg;base64,{base64_image}"
+                        }
+                    })
+            response = client.chat.completions.create(
+                model=self.config.image_u['model'],
+                messages=[{
+                    'role': 'user',
+                    'content': content
+                }]
+            )
+            return response.choices[0].message.content
+        elif self.config.image_u['request_type'] == 'qwen':
+            content = []
+            if isinstance(img_path, str):
+                content.append({'image': 'file://' + img_path})
+                content.append({'text': ppt})
+            elif isinstance(img_path, list) and all(isinstance(item, str) for item in img_path):
+                for item in img_path:
+                    content.append({
+                        'image': 'file://' + item
+                    })
+                content.append({'text': ppt})
+
+            dialog = [{
+                'role': 'user',
+                'content': content
+            }]
+            response = dashscope.MultiModalConversation.call(
+                model=self.config.image_u['model'],
+                api_key=self.config.image_u['api_key'],
+                messages=dialog
+            )
+            if response.status_code == HTTPStatus.OK:
+                return response.output.choices[0]['message']['content']
+            else:
+                print(response.code)  # The error code.
+                return "Error"
         else:
-            print(response.code)  # The error code.
+            print("ERROR: wrong image understanding request_type; only 'openai' and 'qwen' are available")
             return "Error"
 
     def img_generate(self, prompt:str, size:str='512*512', quantity:int = 1):
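With the widened signature, both call shapes go through the same method. A sketch assuming an already-constructed `UrbanLLM` instance and illustrative file paths:

```python
# Hedged sketch: the Union[str, list[str]] img_path in use. The llm instance
# and the image paths are assumptions.
from pycityagent.urbanllm import UrbanLLM

def describe_views(llm: UrbanLLM) -> None:
    # Single image: img_path is a plain str.
    print(llm.img_understand('/tmp/street.jpg', prompt='Describe the scene.'))
    # Several images: img_path is a list[str]; the openai backend base64-encodes
    # each file into a data: URL, the qwen backend passes file:// parts.
    print(llm.img_understand(['/tmp/front.jpg', '/tmp/back.jpg'],
                             prompt='Compare the two views.'))
```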
@@ -0,0 +1,148 @@
+import math
+
+def get_angle(x, y):
+    """Return the angle of vector (x, y) in degrees."""
+    return math.atan2(y, x)*180/math.pi
+
+def point_on_line_given_distance(start_node, end_node, distance):
+    """
+    Given two points (start_node and end_node) defining a line, and a distance to travel along the line,
+    return the coordinates of the point reached after traveling that distance along the line, starting from start_node.
+
+    Args:
+        start_node (dict): Dict with 'x' and 'y' keys, the starting point on the line.
+        end_node (dict): Dict with 'x' and 'y' keys, the ending point on the line.
+        distance (float): Distance to travel along the line, starting from start_node.
+
+    Returns:
+        list: [x, y] of the new point reached after traveling along the line.
+    """
+
+    x1, y1 = start_node['x'], start_node['y']
+    x2, y2 = end_node['x'], end_node['y']
+
+    if x1 == x2:
+        # Vertical line: the distance is only along the y-axis (signed)
+        return [x1, y1 + distance]
+
+    # Calculate the unit direction vector (dx, dy) along the line
+    dx = (x2 - x1) / math.sqrt((x2 - x1)**2 + (y2 - y1)**2)
+    dy = (y2 - y1) / math.sqrt((x2 - x1)**2 + (y2 - y1)**2)
+
+    # Scale the direction vector by the given distance
+    scaled_dx = dx * distance
+    scaled_dy = dy * distance
+
+    # Calculate the new point's coordinates
+    x = x1 + scaled_dx
+    y = y1 + scaled_dy
+
+    return [x, y]
+
+def get_xy_in_lane(nodes, distance, direction:str='front'):
+    """Return [x, y] at the given distance along the polyline `nodes`, walked from the front or the back."""
+    temp_sum = 0
+    remain_s = 0
+    if direction == 'front':
+        # walk in the lane direction
+        if distance == 0:
+            return [nodes[0]['x'], nodes[0]['y']]
+        key_index = 0  # first node
+        for i in range(1, len(nodes)):
+            x1, y1 = nodes[i-1]['x'], nodes[i-1]['y']
+            x2, y2 = nodes[i]['x'], nodes[i]['y']
+            temp_sum += math.sqrt((x2 - x1)**2 + (y2-y1)**2)
+            if temp_sum > distance:
+                remain_s = distance - (temp_sum - math.sqrt((x2 - x1)**2 + (y2-y1)**2))
+                break
+            key_index += 1
+        if remain_s < 0.5:
+            return [nodes[key_index]['x'], nodes[key_index]['y']]
+        return point_on_line_given_distance(nodes[key_index], nodes[key_index+1], remain_s)
+    else:
+        # walk against the lane direction
+        if distance == 0:
+            return [nodes[-1]['x'], nodes[-1]['y']]
+        key_index = len(nodes)-1  # last node
+        for i in range(len(nodes)-1, 0, -1):
+            x1, y1 = nodes[i]['x'], nodes[i]['y']
+            x2, y2 = nodes[i-1]['x'], nodes[i-1]['y']
+            temp_sum += math.sqrt((x2 - x1)**2 + (y2-y1)**2)
+            if temp_sum > distance:
+                remain_s = distance - (temp_sum - math.sqrt((x2 - x1)**2 + (y2-y1)**2))
+                break
+            key_index -= 1
+        if remain_s < 0.5:
+            return [nodes[key_index]['x'], nodes[key_index]['y']]
+        return point_on_line_given_distance(nodes[key_index], nodes[key_index-1], remain_s)
+
+def get_direction_by_s(nodes, distance, direction:str='front'):
+    """Return the heading angle in degrees at the given distance along the polyline `nodes`."""
+    temp_sum = 0
+    if direction == 'front':
+        # walk in the lane direction
+        if distance == 0:
+            # at the start: heading of the first segment
+            return get_angle(nodes[1]['x']-nodes[0]['x'], nodes[1]['y']-nodes[0]['y'])
+        key_index = 0  # first node
+        for i in range(1, len(nodes)):
+            x1, y1 = nodes[i-1]['x'], nodes[i-1]['y']
+            x2, y2 = nodes[i]['x'], nodes[i]['y']
+            temp_sum += math.sqrt((x2 - x1)**2 + (y2-y1)**2)
+            if temp_sum > distance:
+                break
+            key_index += 1
+        if key_index == len(nodes)-1:
+            # endpoint: heading of the last segment
+            x = nodes[key_index]['x']-nodes[key_index-1]['x']
+            y = nodes[key_index]['y']-nodes[key_index-1]['y']
+            return get_angle(x, y)
+        else:
+            # intermediate node: heading of the following segment
+            x = nodes[key_index+1]['x'] - nodes[key_index]['x']
+            y = nodes[key_index+1]['y'] - nodes[key_index]['y']
+            return get_angle(x, y)
+    elif direction == 'back':
+        # walk against the lane direction
+        if distance == 0:
+            # at the reversed start: heading of the last segment, reversed
+            return get_angle(nodes[-2]['x']-nodes[-1]['x'], nodes[-2]['y']-nodes[-1]['y'])
+        key_index = len(nodes)-1  # last node
+        for i in range(len(nodes)-1, 0, -1):
+            x1, y1 = nodes[i]['x'], nodes[i]['y']
+            x2, y2 = nodes[i-1]['x'], nodes[i-1]['y']
+            temp_sum += math.sqrt((x2 - x1)**2 + (y2-y1)**2)
+            if temp_sum > distance:
+                break
+            key_index -= 1
+        if key_index == 0:
+            x = nodes[key_index]['x'] - nodes[key_index+1]['x']
+            y = nodes[key_index]['y'] - nodes[key_index+1]['y']
+            return get_angle(x, y)
+        else:
+            x = nodes[key_index-1]['x'] - nodes[key_index]['x']
+            y = nodes[key_index-1]['y'] - nodes[key_index]['y']
+            return get_angle(x, y)
+    else:
+        # invalid direction value: fall back to 'front'
+        print("Warning: wrong direction, falling back to 'front'")
+        return get_direction_by_s(nodes, distance, 'front')
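To make the polyline helpers concrete, a toy example on an L-shaped center line, 10 m east then 10 m north; the geometry is illustrative and the expected values follow directly from the code above:

```python
# Toy sketch of the new pycityagent.utils helpers; the node list is illustrative.
from pycityagent.utils import get_xy_in_lane, get_direction_by_s

nodes = [{'x': 0, 'y': 0}, {'x': 10, 'y': 0}, {'x': 10, 'y': 10}]

print(get_xy_in_lane(nodes, 5))              # [5.0, 0.0] - 5 m along the first segment
print(get_xy_in_lane(nodes, 15))             # [10, 5] - 5 m up the second segment
print(get_direction_by_s(nodes, 5))          # 0.0 - heading east
print(get_direction_by_s(nodes, 15))         # 90.0 - heading north
print(get_direction_by_s(nodes, 5, 'back'))  # -90.0 - heading south, walking the lane in reverse
```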
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: pycityagent
-Version: 1.1.8
+Version: 1.1.10
 Summary: LLM-based城市模拟器agent构建库
 Author-email: Yuwei Yan <pinkgranite86@gmail.com>
 License: MIT License
@@ -79,9 +79,10 @@ llm_request:
   model: xxx
   (api_base): xxx (this is an optional config; if you use openai and want to use your own backend LLM model, defaults to "https://api.openai.com/v1")
 img_understand_request:
-  request_type: qwen
+  request_type: openai / qwen
   api_key: xxx
-  model: xxx
+  model: xxx ('gpt-4-turbo' if you use openai)
+  (api_base): same as text_request
 img_generate_request:
   request_type: qwen
   api_key: xxx
@@ -110,11 +111,19 @@ apphub_request:
 
 #### LLM_REQUEST
 - The whole CityAgent is driven by LLMs; there are currently three groups of config items: **text_request**, **img_understand_request** and **img_generate_request**
-- By now, we support [**qwen**](https://tongyi.aliyun.com/) and [**openai**](https://openai.com/)
-- `Notice: Our experiments are mainly conducted with qwen. If you prefer openai, you may encounter rough edges; feel free to open an issue.`
-- Get your **api_key** and choose your **model**
-- If you want to use your own backend models, set **api_base** (only available when using **openai**)
-  - default value: "https://api.openai.com/v1"
+- **text_request**
+  - By now, we support [**qwen**](https://tongyi.aliyun.com/) and [**openai**](https://openai.com/)
+  - `Notice: Our experiments are mainly conducted with qwen. If you prefer openai, you may encounter rough edges; feel free to open an issue.`
+  - Get your **api_key** and choose your **model**
+  - If you want to use your own backend models, set **api_base** (only available when using **openai**)
+    - default value: "https://api.openai.com/v1"
+- **img_understand_request**
+  - By now, we support **qwen** and **openai**
+  - If you choose **openai**, the **model** has to be '**gpt-4-turbo**'
+  - If you want to use your own backend models, set **api_base** (only available when using **openai**)
+    - default value: "https://api.openai.com/v1"
+- **img_generate_request**
+  - By now, only **qwen** is supported
 
 #### CITYSIM_REQUEST
 - Most of the configuration options in this part are fixed, such as **simulator.server**, **map_request.mongo_coll** and **route_request.server**
@@ -6,6 +6,7 @@ pycityagent/agent.py
 pycityagent/agent_citizen.py
 pycityagent/agent_func.py
 pycityagent/simulator.py
+pycityagent/utils.py
 pycityagent.egg-info/PKG-INFO
 pycityagent.egg-info/SOURCES.txt
 pycityagent.egg-info/dependency_links.txt