pycityagent 1.1.8__tar.gz → 1.1.9__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (58)
  1. {pycityagent-1.1.8 → pycityagent-1.1.9}/PKG-INFO +17 -8
  2. {pycityagent-1.1.8 → pycityagent-1.1.9}/README.md +16 -7
  3. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/sence.py +3 -3
  4. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/urbanllm/urbanllm.py +72 -20
  5. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent.egg-info/PKG-INFO +17 -8
  6. {pycityagent-1.1.8 → pycityagent-1.1.9}/LICENSE +0 -0
  7. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/__init__.py +0 -0
  8. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/ac/__init__.py +0 -0
  9. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/ac/ac.py +0 -0
  10. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/ac/action.py +0 -0
  11. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/ac/action_stream.py +0 -0
  12. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/ac/citizen_actions/controled.py +0 -0
  13. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/ac/citizen_actions/converse.py +0 -0
  14. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/ac/citizen_actions/idle.py +0 -0
  15. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/ac/citizen_actions/shop.py +0 -0
  16. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/ac/citizen_actions/trip.py +0 -0
  17. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/ac/hub_actions.py +0 -0
  18. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/ac/sim_actions.py +0 -0
  19. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/agent.py +0 -0
  20. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/agent_citizen.py +0 -0
  21. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/agent_func.py +0 -0
  22. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/__init__.py +0 -0
  23. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/brain.py +0 -0
  24. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/brainfc.py +0 -0
  25. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/memory.py +0 -0
  26. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/persistence/__init__.py +0 -0
  27. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/persistence/social.py +0 -0
  28. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/persistence/spatial.py +0 -0
  29. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/reason/__init__.py +0 -0
  30. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/reason/shop.py +0 -0
  31. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/reason/social.py +0 -0
  32. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/reason/trip.py +0 -0
  33. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/reason/user.py +0 -0
  34. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/retrive/__init__.py +0 -0
  35. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/retrive/social.py +0 -0
  36. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/scheduler.py +0 -0
  37. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/static.py +0 -0
  38. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/cc/__init__.py +0 -0
  39. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/cc/cc.py +0 -0
  40. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/cc/conve.py +0 -0
  41. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/cc/idle.py +0 -0
  42. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/cc/shop.py +0 -0
  43. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/cc/trip.py +0 -0
  44. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/cc/user.py +0 -0
  45. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/hubconnector/__init__.py +0 -0
  46. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/hubconnector/hubconnector.py +0 -0
  47. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/image/__init__.py +0 -0
  48. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/image/image.py +0 -0
  49. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/simulator.py +0 -0
  50. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/st/__init__.py +0 -0
  51. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/st/st.py +0 -0
  52. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/urbanllm/__init__.py +0 -0
  53. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent.egg-info/SOURCES.txt +0 -0
  54. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent.egg-info/dependency_links.txt +0 -0
  55. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent.egg-info/requires.txt +0 -0
  56. {pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent.egg-info/top_level.txt +0 -0
  57. {pycityagent-1.1.8 → pycityagent-1.1.9}/pyproject.toml +0 -0
  58. {pycityagent-1.1.8 → pycityagent-1.1.9}/setup.cfg +0 -0
{pycityagent-1.1.8 → pycityagent-1.1.9}/PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: pycityagent
-Version: 1.1.8
+Version: 1.1.9
 Summary: LLM-based城市模拟器agent构建库
 Author-email: Yuwei Yan <pinkgranite86@gmail.com>
 License: MIT License
@@ -79,9 +79,10 @@ llm_request:
     model: xxx
     (api_base): xxx (this is an optional config, if you use opanai and want to use your own backend LLM model, default to "https://api.openai.com/v1")
   img_understand_request:
-    request_type: qwen
+    request_type: openai / qwen
     api_key: xxx
-    model: xxx
+    model: xxx ('gpt-4-turbo' if you use openai)
+    (api_base): same as text_request
   img_generate_request:
     request_type: qwen
     api_key: xxx
@@ -110,11 +111,19 @@ apphub_request:
 
 #### LLM_REQUEST
 - As you can see, the whole CityAgent is based on the LLM, by now, there are three different parts of config items: **text_request**, **img_understand_request** and **img_generate_request**
-- By now, we support [**qwen**](https://tongyi.aliyun.com/) and [**openai**](https://openai.com/)
-- `Notice: Our environments are basically conducted with qwen. If you prefer to use openai, then you may encounter hardships. AND fell free to issue us.`
-- Get your **api_key** and chooce your **model**
-- If you want to use your backend models, set the **api_base** (only available when using **openai**)
-  - default value: "https://api.openai.com/v1"
+- **text_request**
+  - By now, we support [**qwen**](https://tongyi.aliyun.com/) and [**openai**](https://openai.com/)
+  - `Notice: Our environments are basically conducted with qwen. If you prefer to use openai, then you may encounter hardships. AND fell free to issue us.`
+  - Get your **api_key** and chooce your **model**
+  - If you want to use your backend models, set the **api_base** (only available when using **openai**)
+    - default value: "https://api.openai.com/v1"
+- **img_understand_request**
+  - By now, we support **qwen** and **openai**
+  - If choose **openai**, then the **model** has to be '**gpt-4-turbo**'
+  - If you want to use your backend models, set the **api_base** (only available when using **openai**)
+    - default value: "https://api.openai.com/v1"
+- **img_generate_request**
+  - By now, only [**qwen**] is supported
 
 #### CITYSIM_REQUEST
 - Most of the configuration options in this part are determined, such as **simulator.server**, **map_request.mongo_coll**, **route_request.server**
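For reference, the updated `img_understand_request` section described above could look like this in the config file (a sketch only; key names follow the README snippet quoted in the diff, and the `api_key` value is a placeholder):

```yaml
llm_request:
  img_understand_request:
    request_type: openai        # or: qwen
    api_key: sk-xxxx            # placeholder
    model: gpt-4-turbo          # must be 'gpt-4-turbo' when request_type is openai
    api_base: https://api.openai.com/v1   # optional, openai only (same as text_request)
```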
{pycityagent-1.1.8 → pycityagent-1.1.9}/README.md

@@ -32,9 +32,10 @@ llm_request:
     model: xxx
     (api_base): xxx (this is an optional config, if you use opanai and want to use your own backend LLM model, default to "https://api.openai.com/v1")
   img_understand_request:
-    request_type: qwen
+    request_type: openai / qwen
     api_key: xxx
-    model: xxx
+    model: xxx ('gpt-4-turbo' if you use openai)
+    (api_base): same as text_request
   img_generate_request:
     request_type: qwen
     api_key: xxx
@@ -63,11 +64,19 @@ apphub_request:
 
 #### LLM_REQUEST
 - As you can see, the whole CityAgent is based on the LLM, by now, there are three different parts of config items: **text_request**, **img_understand_request** and **img_generate_request**
-- By now, we support [**qwen**](https://tongyi.aliyun.com/) and [**openai**](https://openai.com/)
-- `Notice: Our environments are basically conducted with qwen. If you prefer to use openai, then you may encounter hardships. AND fell free to issue us.`
-- Get your **api_key** and chooce your **model**
-- If you want to use your backend models, set the **api_base** (only available when using **openai**)
-  - default value: "https://api.openai.com/v1"
+- **text_request**
+  - By now, we support [**qwen**](https://tongyi.aliyun.com/) and [**openai**](https://openai.com/)
+  - `Notice: Our environments are basically conducted with qwen. If you prefer to use openai, then you may encounter hardships. AND fell free to issue us.`
+  - Get your **api_key** and chooce your **model**
+  - If you want to use your backend models, set the **api_base** (only available when using **openai**)
+    - default value: "https://api.openai.com/v1"
+- **img_understand_request**
+  - By now, we support **qwen** and **openai**
+  - If choose **openai**, then the **model** has to be '**gpt-4-turbo**'
+  - If you want to use your backend models, set the **api_base** (only available when using **openai**)
+    - default value: "https://api.openai.com/v1"
+- **img_generate_request**
+  - By now, only [**qwen**] is supported
 
 #### CITYSIM_REQUEST
 - Most of the configuration options in this part are determined, such as **simulator.server**, **map_request.mongo_coll**, **route_request.server**
{pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/brain/sence.py

@@ -146,10 +146,10 @@ class Sence(BrainFunction):
         SencePlug Buffer: used to store those sence plug content
         """
 
-        self.enable_streeview = False
+        self.enable_streeview = True
         """
-        街景感知功能接口, 默认为False
-        Interface of streetview function, defualt: False
+        街景感知功能接口, 默认为True
+        Interface of streetview function, defualt: True
         """
 
         self._lane_type_mapping = {1: 'driving', 2: 'walking'}
{pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent/urbanllm/urbanllm.py

@@ -3,12 +3,16 @@
 from openai import OpenAI
 from http import HTTPStatus
 import dashscope
-from urllib.parse import urlparse, unquote
-from pathlib import PurePosixPath
 import requests
 from dashscope import ImageSynthesis
 from PIL import Image
 from io import BytesIO
+from typing import Union
+import base64
+
+def encode_image(image_path):
+    with open(image_path, "rb") as image_file:
+        return base64.b64encode(image_file.read()).decode('utf-8')
 
 class LLMConfig:
     """
@@ -84,14 +88,14 @@ class UrbanLLM:
             print("ERROR: Wrong Config")
             return "wrong config"
 
-    def img_understand(self, img_path:str, prompt:str=None) -> str:
+    def img_understand(self, img_path:Union[str, list[str]], prompt:str=None) -> str:
         """
         图像理解
         Image understanding
 
         Args:
-        - img_path: 目标图像的路径. The path of selected Image
-        - prompt: 理解提示词 - 例如理解方向. The understanding prompts
+        - img_path (Union[str, list[str]]): 目标图像的路径, 既可以是一个路径也可以是包含多张图片路径的list. The path of selected Image
+        - prompt (str): 理解提示词 - 例如理解方向. The understanding prompts
 
         Returns:
         - (str): the understanding content
@@ -99,22 +103,70 @@
         ppt = "如何理解这幅图像?"
         if prompt != None:
             ppt = prompt
-        dialog = [{
-            'role': 'user',
-            'content': [
-                {'image': 'file://' + img_path},
-                {'text': ppt}
-            ]
-        }]
-        response = dashscope.MultiModalConversation.call(
-            model=self.config.image_u['model'],
-            api_key=self.config.image_u['api_key'],
-            messages=dialog
-        )
-        if response.status_code == HTTPStatus.OK:
-            return response.output.choices[0]['message']['content']
+        if self.config.image_u['request_type'] == 'openai':
+            if 'api_base' in self.config.image_u.keys():
+                api_base = self.config.image_u['api_base']
+            else:
+                api_base = None
+            client = OpenAI(
+                api_key=self.config.text['api_key'],
+                base_url=api_base,
+            )
+            content = []
+            content.append({'type': 'text', 'text': ppt})
+            if isinstance(img_path, str):
+                base64_image = encode_image(img_path)
+                content.append({
+                    'type': 'image_url',
+                    'image_url': {
+                        'url': f"data:image/jpeg;base64,{base64_image}"
+                    }
+                })
+            elif isinstance(img_path, list) and all(isinstance(item, str) for item in img_path):
+                for item in img_path:
+                    base64_image = encode_image(item)
+                    content.append({
+                        'type': 'image_url',
+                        'image_url': {
+                            'url': f"data:image/jpeg;base64,{base64_image}"
+                        }
+                    })
+            response = client.chat.completions.create(
+                model=self.config.image_u['model'],
+                messages=[{
+                    'role': 'user',
+                    'content': content
+                }]
+            )
+            return response.choices[0].message.content
+        elif self.config.image_u['request_type'] == 'qwen':
+            content = []
+            if isinstance(img_path, str):
+                content.append({'image': 'file://' + img_path})
+                content.append({'text': ppt})
+            elif isinstance(img_path, list) and all(isinstance(item, str) for item in img_path):
+                for item in img_path:
+                    content.append({
+                        'image': 'file://' + item
+                    })
+                content.append({'text': ppt})
+
+            dialog = [{
+                'role': 'user',
+                'content': content
+            }]
+            response = dashscope.MultiModalConversation.call(
+                model=self.config.image_u['model'],
+                api_key=self.config.image_u['api_key'],
+                messages=dialog
+            )
+            if response.status_code == HTTPStatus.OK:
+                return response.output.choices[0]['message']['content']
+            else:
+                print(response.code) # The error code.
+                return "Error"
         else:
-            print(response.code) # The error code.
+            print("ERROR: wrong image understanding type, only 'openai' and 'openai' is available")
             return "Error"
 
     def img_generate(self, prompt:str, size:str='512*512', quantity:int = 1):
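The refactored `img_understand` above builds an OpenAI-style multimodal message when `request_type` is `'openai'`, base64-encoding one or many image files into `image_url` data URLs. A minimal self-contained sketch of just that payload construction (the helper `build_openai_content` is hypothetical, introduced here for illustration; only `encode_image` appears in the diff):

```python
import base64

def encode_image(image_path):
    # Read an image file and return its contents base64-encoded as a UTF-8 string
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')

def build_openai_content(img_path, prompt="如何理解这幅图像?"):
    """Mirror the 'openai' branch of img_understand: one text part,
    then one image_url part per image path (sketch, not the packaged code)."""
    content = [{'type': 'text', 'text': prompt}]
    paths = [img_path] if isinstance(img_path, str) else list(img_path)
    for p in paths:
        content.append({
            'type': 'image_url',
            'image_url': {'url': f"data:image/jpeg;base64,{encode_image(p)}"}
        })
    return content
```

The resulting list would be passed as `messages=[{'role': 'user', 'content': content}]` to `client.chat.completions.create(...)`, as in the diff.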
{pycityagent-1.1.8 → pycityagent-1.1.9}/pycityagent.egg-info/PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: pycityagent
-Version: 1.1.8
+Version: 1.1.9
 Summary: LLM-based城市模拟器agent构建库
 Author-email: Yuwei Yan <pinkgranite86@gmail.com>
 License: MIT License
@@ -79,9 +79,10 @@ llm_request:
     model: xxx
     (api_base): xxx (this is an optional config, if you use opanai and want to use your own backend LLM model, default to "https://api.openai.com/v1")
   img_understand_request:
-    request_type: qwen
+    request_type: openai / qwen
     api_key: xxx
-    model: xxx
+    model: xxx ('gpt-4-turbo' if you use openai)
+    (api_base): same as text_request
   img_generate_request:
     request_type: qwen
     api_key: xxx
@@ -110,11 +111,19 @@ apphub_request:
 
 #### LLM_REQUEST
 - As you can see, the whole CityAgent is based on the LLM, by now, there are three different parts of config items: **text_request**, **img_understand_request** and **img_generate_request**
-- By now, we support [**qwen**](https://tongyi.aliyun.com/) and [**openai**](https://openai.com/)
-- `Notice: Our environments are basically conducted with qwen. If you prefer to use openai, then you may encounter hardships. AND fell free to issue us.`
-- Get your **api_key** and chooce your **model**
-- If you want to use your backend models, set the **api_base** (only available when using **openai**)
-  - default value: "https://api.openai.com/v1"
+- **text_request**
+  - By now, we support [**qwen**](https://tongyi.aliyun.com/) and [**openai**](https://openai.com/)
+  - `Notice: Our environments are basically conducted with qwen. If you prefer to use openai, then you may encounter hardships. AND fell free to issue us.`
+  - Get your **api_key** and chooce your **model**
+  - If you want to use your backend models, set the **api_base** (only available when using **openai**)
+    - default value: "https://api.openai.com/v1"
+- **img_understand_request**
+  - By now, we support **qwen** and **openai**
+  - If choose **openai**, then the **model** has to be '**gpt-4-turbo**'
+  - If you want to use your backend models, set the **api_base** (only available when using **openai**)
+    - default value: "https://api.openai.com/v1"
+- **img_generate_request**
+  - By now, only [**qwen**] is supported
 
 #### CITYSIM_REQUEST
 - Most of the configuration options in this part are determined, such as **simulator.server**, **map_request.mongo_coll**, **route_request.server**