ag2 0.6.0b1__tar.gz → 0.6.1__tar.gz
This diff shows the content of package versions publicly released to one of the supported registries. It is provided for informational purposes only and reflects the packages as they appear in those registries.
Potentially problematic release: this version of ag2 might be problematic.
- {ag2-0.6.0b1/ag2.egg-info → ag2-0.6.1}/PKG-INFO +19 -19
- {ag2-0.6.0b1 → ag2-0.6.1}/README.md +18 -18
- {ag2-0.6.0b1 → ag2-0.6.1/ag2.egg-info}/PKG-INFO +19 -19
- ag2-0.6.1/ag2.egg-info/requires.txt +106 -0
- {ag2-0.6.0b1 → ag2-0.6.1}/pyproject.toml +2 -0
- {ag2-0.6.0b1 → ag2-0.6.1}/setup.py +11 -2
- {ag2-0.6.0b1 → ag2-0.6.1}/test/test_code_utils.py +3 -3
- {ag2-0.6.0b1 → ag2-0.6.1}/test/test_function_utils.py +6 -7
- {ag2-0.6.0b1 → ag2-0.6.1}/test/test_logging.py +1 -1
- {ag2-0.6.0b1 → ag2-0.6.1}/test/test_pydantic.py +4 -5
- ag2-0.6.0b1/ag2.egg-info/requires.txt +0 -106
- {ag2-0.6.0b1 → ag2-0.6.1}/LICENSE +0 -0
- {ag2-0.6.0b1 → ag2-0.6.1}/NOTICE.md +0 -0
- {ag2-0.6.0b1 → ag2-0.6.1}/ag2.egg-info/SOURCES.txt +0 -0
- {ag2-0.6.0b1 → ag2-0.6.1}/ag2.egg-info/dependency_links.txt +0 -0
- {ag2-0.6.0b1 → ag2-0.6.1}/ag2.egg-info/top_level.txt +0 -0
- {ag2-0.6.0b1 → ag2-0.6.1}/setup.cfg +0 -0
- {ag2-0.6.0b1 → ag2-0.6.1}/setup_ag2.py +0 -0
- {ag2-0.6.0b1 → ag2-0.6.1}/test/test_browser_utils.py +0 -0
- {ag2-0.6.0b1 → ag2-0.6.1}/test/test_graph_utils.py +0 -0
- {ag2-0.6.0b1 → ag2-0.6.1}/test/test_notebook.py +0 -0
- {ag2-0.6.0b1 → ag2-0.6.1}/test/test_retrieve_utils.py +0 -0
- {ag2-0.6.0b1 → ag2-0.6.1}/test/test_token_count.py +0 -0
{ag2-0.6.0b1/ag2.egg-info → ag2-0.6.1}/PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: ag2
-Version: 0.6.
+Version: 0.6.1
 Summary: Alias package for pyautogen
 Home-page: https://github.com/ag2ai/ag2
 Author: Chi Wang & Qingyun Wu
@@ -95,11 +95,11 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
 
 :tada: May 11, 2024: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation](https://openreview.net/pdf?id=uAjxFFing2) received the best paper award at the [ICLR 2024 LLM Agents Workshop](https://llmagents.github.io/).
 
-<!-- :tada: Apr 26, 2024: [AutoGen.NET](https://
+<!-- :tada: Apr 26, 2024: [AutoGen.NET](https://docs.ag2.ai/ag2-for-net/) is available for .NET developers! -->
 
 :tada: Apr 17, 2024: Andrew Ng cited AutoGen in [The Batch newsletter](https://www.deeplearning.ai/the-batch/issue-245/) and [What's next for AI agentic workflows](https://youtu.be/sal78ACtGTc?si=JduUzN_1kDnMq0vF) at Sequoia Capital's AI Ascent (Mar 26).
 
-:tada: Mar 3, 2024: What's new in AutoGen? 📰[Blog](https://
+:tada: Mar 3, 2024: What's new in AutoGen? 📰[Blog](https://docs.ag2.ai/blog/2024-03-03-AutoGen-Update); 📺[Youtube](https://www.youtube.com/watch?v=j_mtwQiaLGU).
 
 <!-- :tada: Mar 1, 2024: the first AutoGen multi-agent experiment on the challenging [GAIA](https://huggingface.co/spaces/gaia-benchmark/leaderboard) benchmark achieved the No. 1 accuracy in all the three levels. -->
 
@@ -107,9 +107,9 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
 
 :tada: Dec 31, 2023: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](https://arxiv.org/abs/2308.08155) is selected by [TheSequence: My Five Favorite AI Papers of 2023](https://thesequence.substack.com/p/my-five-favorite-ai-papers-of-2023).
 
-<!-- :fire: Nov 24: pyautogen [v0.2](https://github.com/ag2ai/ag2/releases/tag/v0.2.0) is released with many updates and new features compared to v0.1.1. It switches to using openai-python v1. Please read the [migration guide](https://
+<!-- :fire: Nov 24: pyautogen [v0.2](https://github.com/ag2ai/ag2/releases/tag/v0.2.0) is released with many updates and new features compared to v0.1.1. It switches to using openai-python v1. Please read the [migration guide](https://docs.ag2.ai/docs/installation/Installation). -->
 
-<!-- :fire: Nov 11: OpenAI's Assistants are available in AutoGen and interoperatable with other AutoGen agents! Checkout our [blogpost](https://
+<!-- :fire: Nov 11: OpenAI's Assistants are available in AutoGen and interoperatable with other AutoGen agents! Checkout our [blogpost](https://docs.ag2.ai/blog/2023-11-13-OAI-assistants) for details and examples. -->
 
 :tada: Nov 8, 2023: AutoGen is selected into [Open100: Top 100 Open Source achievements](https://www.benchcouncil.org/evaluation/opencs/annual.html) 35 days after spinoff from [FLAML](https://github.com/microsoft/FLAML).
 
@@ -126,7 +126,7 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
 <!--
 :fire: FLAML is highlighted in OpenAI's [cookbook](https://github.com/openai/openai-cookbook#related-resources-from-around-the-web).
 
-:fire: [autogen](https://
+:fire: [autogen](https://docs.ag2.ai/) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673).
 
 :fire: FLAML supports Code-First AutoML & Tuning – Private Preview in [Microsoft Fabric Data Science](https://learn.microsoft.com/en-us/fabric/data-science/). -->
 
@@ -169,11 +169,11 @@ The easiest way to start playing is
 </a>
 </p>
 
-## [Installation](https://
+## [Installation](https://docs.ag2.ai/docs/installation/Installation)
 
 ### Option 1. Install and Run AG2 in Docker
 
-Find detailed instructions for users [here](https://
+Find detailed instructions for users [here](https://docs.ag2.ai/docs/installation/Docker#step-1-install-docker), and for developers [here](https://docs.ag2.ai/docs/contributor-guide/docker).
 
 ### Option 2. Install AG2 Locally
 
@@ -190,13 +190,13 @@ Minimal dependencies are installed without extra options. You can install extra
 pip install "autogen[blendsearch]"
 ``` -->
 
-Find more options in [Installation](https://
+Find more options in [Installation](https://docs.ag2.ai/docs/Installation#option-2-install-autogen-locally-using-virtual-environment).
 
 <!-- Each of the [`notebook examples`](https://github.com/ag2ai/ag2/tree/main/notebook) may require a specific option to be installed. -->
 
-Even if you are installing and running AG2 locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://
+Even if you are installing and running AG2 locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://docs.ag2.ai/docs/FAQ#if-you-want-to-run-code-execution-in-docker) in docker. Find more instructions and how to change the default behaviour [here](https://docs.ag2.ai/docs/FAQ#if-you-want-to-run-code-execution-locally).
 
-For LLM inference configurations, check the [FAQs](https://
+For LLM inference configurations, check the [FAQs](https://docs.ag2.ai/docs/FAQ#set-your-api-endpoints).
 
 <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
 <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -206,7 +206,7 @@ For LLM inference configurations, check the [FAQs](https://ag2ai.github.io/ag2/d
 
 ## Multi-Agent Conversation Framework
 
-AG2 enables the next-gen LLM applications with a generic [multi-agent conversation](https://
+AG2 enables the next-gen LLM applications with a generic [multi-agent conversation](https://docs.ag2.ai/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
 By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.
 
 Features of this use case include:
@@ -220,7 +220,7 @@ For [example](https://github.com/ag2ai/ag2/blob/main/test/twoagent.py),
 ```python
 from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
 # Load LLM inference endpoints from an env variable or a file
-# See https://
+# See https://docs.ag2.ai/docs/FAQ#set-your-api-endpoints
 # and OAI_CONFIG_LIST_sample
 config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
 # You can also set config_list directly as a list, for example, config_list = [{'model': 'gpt-4o', 'api_key': '<your OpenAI API key here>'},]
@@ -243,7 +243,7 @@ The figure below shows an example conversation flow with AG2.
 
 
 Alternatively, the [sample code](https://github.com/ag2ai/build-with-ag2/blob/main/samples/simple_chat.py) here allows a user to chat with an AG2 agent in ChatGPT style.
-Please find more [code examples](https://
+Please find more [code examples](https://docs.ag2.ai/docs/Examples#automated-multi-agent-chat) for this feature.
 
 <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
 <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -253,7 +253,7 @@ Please find more [code examples](https://ag2ai.github.io/ag2/docs/Examples#autom
 
 ## Enhanced LLM Inferences
 
-AG2 also helps maximize the utility out of the expensive LLMs such as gpt-4o. It offers [enhanced LLM inference](https://
+AG2 also helps maximize the utility out of the expensive LLMs such as gpt-4o. It offers [enhanced LLM inference](https://docs.ag2.ai/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference and templating.
 
 <!-- For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.
 
@@ -272,7 +272,7 @@ config, analysis = autogen.Completion.tune(
 response = autogen.Completion.create(context=test_instance, **config)
 ```
 
-Please find more [code examples](https://
+Please find more [code examples](https://docs.ag2.ai/docs/Examples#tune-gpt-models) for this feature. -->
 
 <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
 <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -282,15 +282,15 @@ Please find more [code examples](https://ag2ai.github.io/ag2/docs/Examples#tune-
 
 ## Documentation
 
-You can find detailed documentation about AG2 [here](https://
+You can find detailed documentation about AG2 [here](https://docs.ag2.ai/).
 
 In addition, you can find:
 
-- [Research](https://
+- [Research](https://docs.ag2.ai/docs/Research), [blogposts](https://docs.ag2.ai/blog) around AG2, and [Transparency FAQs](https://github.com/ag2ai/ag2/blob/main/TRANSPARENCY_FAQS.md)
 
 - [Discord](https://discord.gg/pAbnFJrkgZ)
 
-- [Contributing guide](https://
+- [Contributing guide](https://docs.ag2.ai/docs/contributor-guide/contributing)
 
 <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
 <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
{ag2-0.6.0b1 → ag2-0.6.1}/README.md

@@ -43,11 +43,11 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
 
 :tada: May 11, 2024: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation](https://openreview.net/pdf?id=uAjxFFing2) received the best paper award at the [ICLR 2024 LLM Agents Workshop](https://llmagents.github.io/).
 
-<!-- :tada: Apr 26, 2024: [AutoGen.NET](https://
+<!-- :tada: Apr 26, 2024: [AutoGen.NET](https://docs.ag2.ai/ag2-for-net/) is available for .NET developers! -->
 
 :tada: Apr 17, 2024: Andrew Ng cited AutoGen in [The Batch newsletter](https://www.deeplearning.ai/the-batch/issue-245/) and [What's next for AI agentic workflows](https://youtu.be/sal78ACtGTc?si=JduUzN_1kDnMq0vF) at Sequoia Capital's AI Ascent (Mar 26).
 
-:tada: Mar 3, 2024: What's new in AutoGen? 📰[Blog](https://
+:tada: Mar 3, 2024: What's new in AutoGen? 📰[Blog](https://docs.ag2.ai/blog/2024-03-03-AutoGen-Update); 📺[Youtube](https://www.youtube.com/watch?v=j_mtwQiaLGU).
 
 <!-- :tada: Mar 1, 2024: the first AutoGen multi-agent experiment on the challenging [GAIA](https://huggingface.co/spaces/gaia-benchmark/leaderboard) benchmark achieved the No. 1 accuracy in all the three levels. -->
 
@@ -55,9 +55,9 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
 
 :tada: Dec 31, 2023: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](https://arxiv.org/abs/2308.08155) is selected by [TheSequence: My Five Favorite AI Papers of 2023](https://thesequence.substack.com/p/my-five-favorite-ai-papers-of-2023).
 
-<!-- :fire: Nov 24: pyautogen [v0.2](https://github.com/ag2ai/ag2/releases/tag/v0.2.0) is released with many updates and new features compared to v0.1.1. It switches to using openai-python v1. Please read the [migration guide](https://
+<!-- :fire: Nov 24: pyautogen [v0.2](https://github.com/ag2ai/ag2/releases/tag/v0.2.0) is released with many updates and new features compared to v0.1.1. It switches to using openai-python v1. Please read the [migration guide](https://docs.ag2.ai/docs/installation/Installation). -->
 
-<!-- :fire: Nov 11: OpenAI's Assistants are available in AutoGen and interoperatable with other AutoGen agents! Checkout our [blogpost](https://
+<!-- :fire: Nov 11: OpenAI's Assistants are available in AutoGen and interoperatable with other AutoGen agents! Checkout our [blogpost](https://docs.ag2.ai/blog/2023-11-13-OAI-assistants) for details and examples. -->
 
 :tada: Nov 8, 2023: AutoGen is selected into [Open100: Top 100 Open Source achievements](https://www.benchcouncil.org/evaluation/opencs/annual.html) 35 days after spinoff from [FLAML](https://github.com/microsoft/FLAML).
 
@@ -74,7 +74,7 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
 <!--
 :fire: FLAML is highlighted in OpenAI's [cookbook](https://github.com/openai/openai-cookbook#related-resources-from-around-the-web).
 
-:fire: [autogen](https://
+:fire: [autogen](https://docs.ag2.ai/) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673).
 
 :fire: FLAML supports Code-First AutoML & Tuning – Private Preview in [Microsoft Fabric Data Science](https://learn.microsoft.com/en-us/fabric/data-science/). -->
 
@@ -117,11 +117,11 @@ The easiest way to start playing is
 </a>
 </p>
 
-## [Installation](https://
+## [Installation](https://docs.ag2.ai/docs/installation/Installation)
 
 ### Option 1. Install and Run AG2 in Docker
 
-Find detailed instructions for users [here](https://
+Find detailed instructions for users [here](https://docs.ag2.ai/docs/installation/Docker#step-1-install-docker), and for developers [here](https://docs.ag2.ai/docs/contributor-guide/docker).
 
 ### Option 2. Install AG2 Locally
 
@@ -138,13 +138,13 @@ Minimal dependencies are installed without extra options. You can install extra
 pip install "autogen[blendsearch]"
 ``` -->
 
-Find more options in [Installation](https://
+Find more options in [Installation](https://docs.ag2.ai/docs/Installation#option-2-install-autogen-locally-using-virtual-environment).
 
 <!-- Each of the [`notebook examples`](https://github.com/ag2ai/ag2/tree/main/notebook) may require a specific option to be installed. -->
 
-Even if you are installing and running AG2 locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://
+Even if you are installing and running AG2 locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://docs.ag2.ai/docs/FAQ#if-you-want-to-run-code-execution-in-docker) in docker. Find more instructions and how to change the default behaviour [here](https://docs.ag2.ai/docs/FAQ#if-you-want-to-run-code-execution-locally).
 
-For LLM inference configurations, check the [FAQs](https://
+For LLM inference configurations, check the [FAQs](https://docs.ag2.ai/docs/FAQ#set-your-api-endpoints).
 
 <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
 <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -154,7 +154,7 @@ For LLM inference configurations, check the [FAQs](https://ag2ai.github.io/ag2/d
 
 ## Multi-Agent Conversation Framework
 
-AG2 enables the next-gen LLM applications with a generic [multi-agent conversation](https://
+AG2 enables the next-gen LLM applications with a generic [multi-agent conversation](https://docs.ag2.ai/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
 By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.
 
 Features of this use case include:
@@ -168,7 +168,7 @@ For [example](https://github.com/ag2ai/ag2/blob/main/test/twoagent.py),
 ```python
 from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
 # Load LLM inference endpoints from an env variable or a file
-# See https://
+# See https://docs.ag2.ai/docs/FAQ#set-your-api-endpoints
 # and OAI_CONFIG_LIST_sample
 config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
 # You can also set config_list directly as a list, for example, config_list = [{'model': 'gpt-4o', 'api_key': '<your OpenAI API key here>'},]
@@ -191,7 +191,7 @@ The figure below shows an example conversation flow with AG2.
 
 
 Alternatively, the [sample code](https://github.com/ag2ai/build-with-ag2/blob/main/samples/simple_chat.py) here allows a user to chat with an AG2 agent in ChatGPT style.
-Please find more [code examples](https://
+Please find more [code examples](https://docs.ag2.ai/docs/Examples#automated-multi-agent-chat) for this feature.
 
 <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
 <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -201,7 +201,7 @@ Please find more [code examples](https://ag2ai.github.io/ag2/docs/Examples#autom
 
 ## Enhanced LLM Inferences
 
-AG2 also helps maximize the utility out of the expensive LLMs such as gpt-4o. It offers [enhanced LLM inference](https://
+AG2 also helps maximize the utility out of the expensive LLMs such as gpt-4o. It offers [enhanced LLM inference](https://docs.ag2.ai/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference and templating.
 
 <!-- For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.
 
@@ -220,7 +220,7 @@ config, analysis = autogen.Completion.tune(
 response = autogen.Completion.create(context=test_instance, **config)
 ```
 
-Please find more [code examples](https://
+Please find more [code examples](https://docs.ag2.ai/docs/Examples#tune-gpt-models) for this feature. -->
 
 <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
 <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -230,15 +230,15 @@ Please find more [code examples](https://ag2ai.github.io/ag2/docs/Examples#tune-
 
 ## Documentation
 
-You can find detailed documentation about AG2 [here](https://
+You can find detailed documentation about AG2 [here](https://docs.ag2.ai/).
 
 In addition, you can find:
 
-- [Research](https://
+- [Research](https://docs.ag2.ai/docs/Research), [blogposts](https://docs.ag2.ai/blog) around AG2, and [Transparency FAQs](https://github.com/ag2ai/ag2/blob/main/TRANSPARENCY_FAQS.md)
 
 - [Discord](https://discord.gg/pAbnFJrkgZ)
 
-- [Contributing guide](https://
+- [Contributing guide](https://docs.ag2.ai/docs/contributor-guide/contributing)
 
 <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
 <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -1,6 +1,6 @@
|
|
|
1
1
|
Metadata-Version: 2.1
|
|
2
2
|
Name: ag2
|
|
3
|
-
Version: 0.6.
|
|
3
|
+
Version: 0.6.1
|
|
4
4
|
Summary: Alias package for pyautogen
|
|
5
5
|
Home-page: https://github.com/ag2ai/ag2
|
|
6
6
|
Author: Chi Wang & Qingyun Wu
|
|
@@ -95,11 +95,11 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
|
|
|
95
95
|
|
|
96
96
|
:tada: May 11, 2024: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation](https://openreview.net/pdf?id=uAjxFFing2) received the best paper award at the [ICLR 2024 LLM Agents Workshop](https://llmagents.github.io/).
|
|
97
97
|
|
|
98
|
-
<!-- :tada: Apr 26, 2024: [AutoGen.NET](https://
|
|
98
|
+
<!-- :tada: Apr 26, 2024: [AutoGen.NET](https://docs.ag2.ai/ag2-for-net/) is available for .NET developers! -->
|
|
99
99
|
|
|
100
100
|
:tada: Apr 17, 2024: Andrew Ng cited AutoGen in [The Batch newsletter](https://www.deeplearning.ai/the-batch/issue-245/) and [What's next for AI agentic workflows](https://youtu.be/sal78ACtGTc?si=JduUzN_1kDnMq0vF) at Sequoia Capital's AI Ascent (Mar 26).
|
|
101
101
|
|
|
102
|
-
:tada: Mar 3, 2024: What's new in AutoGen? 📰[Blog](https://
|
|
102
|
+
:tada: Mar 3, 2024: What's new in AutoGen? 📰[Blog](https://docs.ag2.ai/blog/2024-03-03-AutoGen-Update); 📺[Youtube](https://www.youtube.com/watch?v=j_mtwQiaLGU).
|
|
103
103
|
|
|
104
104
|
<!-- :tada: Mar 1, 2024: the first AutoGen multi-agent experiment on the challenging [GAIA](https://huggingface.co/spaces/gaia-benchmark/leaderboard) benchmark achieved the No. 1 accuracy in all the three levels. -->
|
|
105
105
|
|
|
@@ -107,9 +107,9 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
|
|
|
107
107
|
|
|
108
108
|
:tada: Dec 31, 2023: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](https://arxiv.org/abs/2308.08155) is selected by [TheSequence: My Five Favorite AI Papers of 2023](https://thesequence.substack.com/p/my-five-favorite-ai-papers-of-2023).
|
|
109
109
|
|
|
110
|
-
<!-- :fire: Nov 24: pyautogen [v0.2](https://github.com/ag2ai/ag2/releases/tag/v0.2.0) is released with many updates and new features compared to v0.1.1. It switches to using openai-python v1. Please read the [migration guide](https://
|
|
110
|
+
<!-- :fire: Nov 24: pyautogen [v0.2](https://github.com/ag2ai/ag2/releases/tag/v0.2.0) is released with many updates and new features compared to v0.1.1. It switches to using openai-python v1. Please read the [migration guide](https://docs.ag2.ai/docs/installation/Installation). -->
|
|
111
111
|
|
|
112
|
-
<!-- :fire: Nov 11: OpenAI's Assistants are available in AutoGen and interoperatable with other AutoGen agents! Checkout our [blogpost](https://
|
|
112
|
+
<!-- :fire: Nov 11: OpenAI's Assistants are available in AutoGen and interoperatable with other AutoGen agents! Checkout our [blogpost](https://docs.ag2.ai/blog/2023-11-13-OAI-assistants) for details and examples. -->
|
|
113
113
|
|
|
114
114
|
:tada: Nov 8, 2023: AutoGen is selected into [Open100: Top 100 Open Source achievements](https://www.benchcouncil.org/evaluation/opencs/annual.html) 35 days after spinoff from [FLAML](https://github.com/microsoft/FLAML).
|
|
115
115
|
|
|
@@ -126,7 +126,7 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
|
|
|
126
126
|
<!--
|
|
127
127
|
:fire: FLAML is highlighted in OpenAI's [cookbook](https://github.com/openai/openai-cookbook#related-resources-from-around-the-web).
|
|
128
128
|
|
|
129
|
-
:fire: [autogen](https://
|
|
129
|
+
:fire: [autogen](https://docs.ag2.ai/) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673).
|
|
130
130
|
|
|
131
131
|
:fire: FLAML supports Code-First AutoML & Tuning – Private Preview in [Microsoft Fabric Data Science](https://learn.microsoft.com/en-us/fabric/data-science/). -->
|
|
132
132
|
|
|
@@ -169,11 +169,11 @@ The easiest way to start playing is
|
|
|
169
169
|
</a>
|
|
170
170
|
</p>
|
|
171
171
|
|
|
172
|
-
## [Installation](https://
|
|
172
|
+
## [Installation](https://docs.ag2.ai/docs/installation/Installation)
|
|
173
173
|
|
|
174
174
|
### Option 1. Install and Run AG2 in Docker
|
|
175
175
|
|
|
176
|
-
Find detailed instructions for users [here](https://
|
|
176
|
+
Find detailed instructions for users [here](https://docs.ag2.ai/docs/installation/Docker#step-1-install-docker), and for developers [here](https://docs.ag2.ai/docs/contributor-guide/docker).
|
|
177
177
|
|
|
178
178
|
### Option 2. Install AG2 Locally
|
|
179
179
|
|
|
@@ -190,13 +190,13 @@ Minimal dependencies are installed without extra options. You can install extra
|
|
|
190
190
|
pip install "autogen[blendsearch]"
|
|
191
191
|
``` -->
|
|
192
192
|
|
|
193
|
-
Find more options in [Installation](https://
|
|
193
|
+
Find more options in [Installation](https://docs.ag2.ai/docs/Installation#option-2-install-autogen-locally-using-virtual-environment).
|
|
194
194
|
|
|
195
195
|
<!-- Each of the [`notebook examples`](https://github.com/ag2ai/ag2/tree/main/notebook) may require a specific option to be installed. -->
|
|
196
196
|
|
|
197
|
-
Even if you are installing and running AG2 locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://
|
|
197
|
+
Even if you are installing and running AG2 locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://docs.ag2.ai/docs/FAQ#if-you-want-to-run-code-execution-in-docker) in docker. Find more instructions and how to change the default behaviour [here](https://docs.ag2.ai/docs/FAQ#if-you-want-to-run-code-execution-locally).
|
|
198
198
|
|
|
199
|
-
For LLM inference configurations, check the [FAQs](https://
|
|
199
|
+
For LLM inference configurations, check the [FAQs](https://docs.ag2.ai/docs/FAQ#set-your-api-endpoints).
|
|
200
200
|
|
|
201
201
|
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
|
|
202
202
|
<a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
|
|
@@ -206,7 +206,7 @@ For LLM inference configurations, check the [FAQs](https://ag2ai.github.io/ag2/d
|
|
|
206
206
|
|
|
207
207
|
## Multi-Agent Conversation Framework
|
|
208
208
|
|
|
209
|
-
AG2 enables the next-gen LLM applications with a generic [multi-agent conversation](https://
|
|
209
|
+
AG2 enables the next-gen LLM applications with a generic [multi-agent conversation](https://docs.ag2.ai/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
|
|
210
210
|
By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.
|
|
211
211
|
|
|
212
212
|
Features of this use case include:
|
|
@@ -220,7 +220,7 @@ For [example](https://github.com/ag2ai/ag2/blob/main/test/twoagent.py),
|
|
|
220
220
|
```python
|
|
221
221
|
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
|
|
222
222
|
# Load LLM inference endpoints from an env variable or a file
|
|
223
|
-
# See https://
|
|
223
|
+
# See https://docs.ag2.ai/docs/FAQ#set-your-api-endpoints
|
|
224
224
|
# and OAI_CONFIG_LIST_sample
|
|
225
225
|
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
|
|
226
226
|
# You can also set config_list directly as a list, for example, config_list = [{'model': 'gpt-4o', 'api_key': '<your OpenAI API key here>'},]
|
|
@@ -243,7 +243,7 @@ The figure below shows an example conversation flow with AG2.
|
|
|
243
243
|
|
|
244
244
|
|
|
245
245
|
Alternatively, the [sample code](https://github.com/ag2ai/build-with-ag2/blob/main/samples/simple_chat.py) here allows a user to chat with an AG2 agent in ChatGPT style.
|
|
246
|
-
Please find more [code examples](https://
|
|
246
|
+
Please find more [code examples](https://docs.ag2.ai/docs/Examples#automated-multi-agent-chat) for this feature.
|
|
247
247
|
|
|
248
248
|
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
|
|
249
249
|
<a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
|
|
@@ -253,7 +253,7 @@ Please find more [code examples](https://ag2ai.github.io/ag2/docs/Examples#autom
 
 ## Enhanced LLM Inferences
 
-AG2 also helps maximize the utility out of the expensive LLMs such as gpt-4o. It offers [enhanced LLM inference](https://
+AG2 also helps maximize the utility out of the expensive LLMs such as gpt-4o. It offers [enhanced LLM inference](https://docs.ag2.ai/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference and templating.
 
 <!-- For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.
 
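The README line above names caching among the enhanced-inference features. The idea can be pictured as memoizing completions keyed by the request parameters; this is a toy sketch of our own, not AG2's cache implementation (`fake_llm` is a hypothetical stand-in for an LLM call):

```python
import json

_cache: dict = {}
calls = 0

def fake_llm(prompt: str, model: str) -> str:
    """Stand-in for an expensive LLM call; counts invocations."""
    global calls
    calls += 1
    return f"reply to {prompt!r} from {model}"

def cached_completion(**params) -> str:
    """Serve repeated identical requests from a dict instead of the LLM."""
    key = json.dumps(params, sort_keys=True)  # stable key over request params
    if key not in _cache:
        _cache[key] = fake_llm(**params)
    return _cache[key]

a = cached_completion(prompt="hi", model="gpt-4o")
b = cached_completion(prompt="hi", model="gpt-4o")  # served from the cache
print(calls)  # -> 1
```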
@@ -272,7 +272,7 @@ config, analysis = autogen.Completion.tune(
 response = autogen.Completion.create(context=test_instance, **config)
 ```
 
-Please find more [code examples](https://
+Please find more [code examples](https://docs.ag2.ai/docs/Examples#tune-gpt-models) for this feature. -->
 
 <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
 <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -282,15 +282,15 @@ Please find more [code examples](https://ag2ai.github.io/ag2/docs/Examples#tune-
 
 ## Documentation
 
-You can find detailed documentation about AG2 [here](https://
+You can find detailed documentation about AG2 [here](https://docs.ag2.ai/).
 
 In addition, you can find:
 
-- [Research](https://
+- [Research](https://docs.ag2.ai/docs/Research), [blogposts](https://docs.ag2.ai/blog) around AG2, and [Transparency FAQs](https://github.com/ag2ai/ag2/blob/main/TRANSPARENCY_FAQS.md)
 
 - [Discord](https://discord.gg/pAbnFJrkgZ)
 
-- [Contributing guide](https://
+- [Contributing guide](https://docs.ag2.ai/docs/contributor-guide/contributing)
 
 <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
 <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -0,0 +1,106 @@
+pyautogen==0.6.1
+
+[anthropic]
+pyautogen[anthropic]==0.6.1
+
+[autobuild]
+pyautogen[autobuild]==0.6.1
+
+[bedrock]
+pyautogen[bedrock]==0.6.1
+
+[blendsearch]
+pyautogen[blendsearch]==0.6.1
+
+[captainagent]
+pyautogen[captainagent]==0.6.1
+
+[cerebras]
+pyautogen[cerebras]==0.6.1
+
+[cohere]
+pyautogen[cohere]==0.6.1
+
+[cosmosdb]
+pyautogen[cosmosdb]==0.6.1
+
+[gemini]
+pyautogen[gemini]==0.6.1
+
+[graph]
+pyautogen[graph]==0.6.1
+
+[graph-rag-falkor-db]
+pyautogen[graph-rag-falkor-db]==0.6.1
+
+[groq]
+pyautogen[groq]==0.6.1
+
+[interop]
+pyautogen[interop]==0.6.1
+
+[interop-crewai]
+pyautogen[interop-crewai]==0.6.1
+
+[interop-langchain]
+pyautogen[interop-langchain]==0.6.1
+
+[interop-pydantic-ai]
+pyautogen[interop-pydantic-ai]==0.6.1
+
+[jupyter-executor]
+pyautogen[jupyter-executor]==0.6.1
+
+[lmm]
+pyautogen[lmm]==0.6.1
+
+[long-context]
+pyautogen[long-context]==0.6.1
+
+[mathchat]
+pyautogen[mathchat]==0.6.1
+
+[mistral]
+pyautogen[mistral]==0.6.1
+
+[neo4j]
+pyautogen[neo4j]==0.6.1
+
+[ollama]
+pyautogen[ollama]==0.6.1
+
+[redis]
+pyautogen[redis]==0.6.1
+
+[retrievechat]
+pyautogen[retrievechat]==0.6.1
+
+[retrievechat-mongodb]
+pyautogen[retrievechat-mongodb]==0.6.1
+
+[retrievechat-pgvector]
+pyautogen[retrievechat-pgvector]==0.6.1
+
+[retrievechat-qdrant]
+pyautogen[retrievechat-qdrant]==0.6.1
+
+[teachable]
+pyautogen[teachable]==0.6.1
+
+[test]
+pyautogen[test]==0.6.1
+
+[together]
+pyautogen[together]==0.6.1
+
+[twilio]
+pyautogen[twilio]==0.6.1
+
+[types]
+pyautogen[types]==0.6.1
+
+[websockets]
+pyautogen[websockets]==0.6.1
+
+[websurfer]
+pyautogen[websurfer]==0.6.1
@@ -62,11 +62,13 @@ files = [
     "autogen/io",
     "autogen/tools",
     "autogen/interop",
+    "autogen/agentchat/realtime_agent",
     "test/test_pydantic.py",
     "test/test_function_utils.py",
     "test/io",
     "test/tools",
     "test/interop",
+    "test/agentchat/realtime_agent",
 ]
 exclude = [
     "autogen/math_utils\\.py",
@@ -25,7 +25,7 @@ __version__ = version["__version__"]
 current_os = platform.system()
 
 install_requires = [
-    "openai>=1.
+    "openai>=1.58",
     "diskcache",
     "termcolor",
     "flaml",
@@ -51,6 +51,7 @@ test = [
     "pytest-asyncio",
     "pytest>=8,<9",
     "pandas",
+    "fastapi>=0.115.0,<1",
 ]
 
 jupyter_executor = [
@@ -82,6 +83,7 @@ neo4j = [
     "llama-index-core==0.12.5",
 ]
 
+# used for agentchat_realtime_swarm notebook and realtime agent twilio demo
 twilio = ["fastapi>=0.115.0,<1", "uvicorn>=0.30.6,<1", "twilio>=9.3.2"]
 
 interop_crewai = ["crewai[tools]>=0.86,<1; python_version>='3.10' and python_version<'3.13'"]
@@ -119,7 +121,14 @@ extra_require = {
     "teachable": ["chromadb"],
     "lmm": ["replicate", "pillow"],
     "graph": ["networkx", "matplotlib"],
-    "gemini": [
+    "gemini": [
+        "google-generativeai>=0.5,<1",
+        "google-cloud-aiplatform",
+        "google-auth",
+        "pillow",
+        "pydantic",
+        "jsonschema",
+    ],
     "together": ["together>=1.2"],
     "websurfer": ["beautifulsoup4", "markdownify", "pdfminer.six", "pathvalidate"],
     "redis": ["redis"],
@@ -240,7 +240,7 @@ print(f"Text: {text}")
     codeblocks = extract_code(
         """
 Example:
-```
+```python
 def scrape(url):
     import requests
     from bs4 import BeautifulSoup
@@ -251,7 +251,7 @@ def scrape(url):
     return title, text
 ```
 Test:
-```
+```python
 url = "https://en.wikipedia.org/wiki/Web_scraping"
 title, text = scrape(url)
 print(f"Title: {title}")
@@ -285,7 +285,7 @@ Example:
     codeblocks = extract_code(
         """
 Example:
-```
+```python
 def scrape(url):
     import requests
     from bs4 import BeautifulSoup
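The test changes in these hunks add an explicit `python` tag to the fenced blocks passed to `extract_code`. The tag matters because fenced-block extraction typically captures the language label alongside the body; a rough regex sketch of that behavior (our own illustration, not AG2's actual `extract_code`), with the fence built programmatically so the example is easy to quote:

```python
import re

# Fenced blocks look like "```lang\n...\n```".
FENCE = "`" * 3
CODE_BLOCK = re.compile(FENCE + r"(\w*)\n(.*?)" + FENCE, re.DOTALL)

def extract_blocks(text: str) -> list:
    """Return (language, code) pairs; untagged blocks get 'unknown'."""
    return [(m.group(1) or "unknown", m.group(2)) for m in CODE_BLOCK.finditer(text)]

doc = "Example:\n" + FENCE + "python\ndef scrape(url):\n    pass\n" + FENCE + "\n"
lang, code = extract_blocks(doc)[0]
print(lang)  # -> python
```

With no tag, the same sketch would report `unknown`, which is why the tests now tag the blocks explicitly.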
@@ -7,11 +7,10 @@
 import asyncio
 import inspect
 import unittest.mock
-from typing import Any, Dict, List, Literal, Optional, Tuple
+from typing import Annotated, Any, Dict, List, Literal, Optional, Tuple
 
 import pytest
 from pydantic import BaseModel, Field
-from typing_extensions import Annotated
 
 from autogen._pydantic import PYDANTIC_V1, model_dump
 from autogen.function_utils import (
@@ -40,7 +39,7 @@ def g(  # type: ignore[empty-body]
     b: int = 2,
     c: Annotated[float, "Parameter c"] = 0.1,
     *,
-    d:
+    d: dict[str, tuple[Optional[int], list[float]]],
 ) -> str:
     pass
 
@@ -50,7 +49,7 @@ async def a_g(  # type: ignore[empty-body]
     b: int = 2,
     c: Annotated[float, "Parameter c"] = 0.1,
     *,
-    d:
+    d: dict[str, tuple[Optional[int], list[float]]],
 ) -> str:
     pass
 
@@ -89,7 +88,7 @@ def test_get_parameter_json_schema() -> None:
         b: float
         c: str
 
-    expected:
+    expected: dict[str, Any] = {
         "description": "b",
         "properties": {"b": {"title": "B", "type": "number"}, "c": {"title": "C", "type": "string"}},
         "required": ["b", "c"],
@@ -367,7 +366,7 @@ def test_load_basemodels_if_needed_sync() -> None:
     def f(
         base: Annotated[Currency, "Base currency"],
         quote_currency: Annotated[CurrencySymbol, "Quote currency"] = "EUR",
-    ) ->
+    ) -> tuple[Currency, CurrencySymbol]:
         return base, quote_currency
 
     assert not inspect.iscoroutinefunction(f)
@@ -385,7 +384,7 @@ async def test_load_basemodels_if_needed_async() -> None:
     async def f(
         base: Annotated[Currency, "Base currency"],
         quote_currency: Annotated[CurrencySymbol, "Quote currency"] = "EUR",
-    ) ->
+    ) -> tuple[Currency, CurrencySymbol]:
         return base, quote_currency
 
     assert inspect.iscoroutinefunction(f)
@@ -4,10 +4,9 @@
 #
 # Portions derived from https://github.com/microsoft/autogen are under the MIT License.
 # SPDX-License-Identifier: MIT
-from typing import Dict, List, Optional, Tuple, Union
+from typing import Annotated, Dict, List, Optional, Tuple, Union
 
 from pydantic import BaseModel, Field
-from typing_extensions import Annotated
 
 from autogen._pydantic import model_dump, model_dump_json, type2schema
 
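Both test modules now import `Annotated` from `typing` instead of `typing_extensions`, which is safe on Python 3.9+ where `typing.Annotated` is part of the standard library. A quick check of what the stdlib `Annotated` carries (the `convert` function is our own example, not from the tests):

```python
from typing import Annotated, get_args, get_type_hints

def convert(amount: Annotated[float, "Amount in EUR"]) -> float:
    return amount

# include_extras=True preserves the Annotated metadata on the resolved hint.
hints = get_type_hints(convert, include_extras=True)
args = get_args(hints["amount"])
print(args)  # -> (<class 'float'>, 'Amount in EUR')
```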
@@ -19,14 +18,14 @@ def test_type2schema() -> None:
     assert type2schema(bool) == {"type": "boolean"}
     assert type2schema(None) == {"type": "null"}
     assert type2schema(Optional[int]) == {"anyOf": [{"type": "integer"}, {"type": "null"}]}
-    assert type2schema(
-    assert type2schema(
+    assert type2schema(list[int]) == {"items": {"type": "integer"}, "type": "array"}
+    assert type2schema(tuple[int, float, str]) == {
         "maxItems": 3,
         "minItems": 3,
         "prefixItems": [{"type": "integer"}, {"type": "number"}, {"type": "string"}],
         "type": "array",
     }
-    assert type2schema(
+    assert type2schema(dict[str, int]) == {"additionalProperties": {"type": "integer"}, "type": "object"}
     assert type2schema(Annotated[str, "some text"]) == {"type": "string"}
     assert type2schema(Union[int, float]) == {"anyOf": [{"type": "integer"}, {"type": "number"}]}
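The updated assertions use builtin generics (`list[int]`, `dict[str, int]`, Python 3.9+) in place of the `typing` aliases. At runtime both spellings decompose to the same origin and arguments, which is what lets a schema converter treat them identically; a stdlib sketch of that decomposition (our own `to_schema`, not AG2's `type2schema`):

```python
from typing import Dict, List, get_args, get_origin

# Builtin generics and typing aliases normalize to the same origin/args.
assert get_origin(list[int]) is get_origin(List[int]) is list
assert get_args(dict[str, int]) == get_args(Dict[str, int]) == (str, int)

BASIC = {int: "integer", float: "number", str: "string", bool: "boolean"}

def to_schema(tp) -> dict:
    """Tiny JSON-schema mapper for a few container shapes."""
    origin = get_origin(tp)
    if origin is list:
        (item,) = get_args(tp)
        return {"items": to_schema(item), "type": "array"}
    if origin is dict:
        _, value = get_args(tp)
        return {"additionalProperties": to_schema(value), "type": "object"}
    return {"type": BASIC[tp]}

print(to_schema(list[int]))  # -> {'items': {'type': 'integer'}, 'type': 'array'}
```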
@@ -1,106 +0,0 @@
-pyautogen==0.6.0b1
-
-[anthropic]
-pyautogen[anthropic]==0.6.0b1
-
-[autobuild]
-pyautogen[autobuild]==0.6.0b1
-
-[bedrock]
-pyautogen[bedrock]==0.6.0b1
-
-[blendsearch]
-pyautogen[blendsearch]==0.6.0b1
-
-[captainagent]
-pyautogen[captainagent]==0.6.0b1
-
-[cerebras]
-pyautogen[cerebras]==0.6.0b1
-
-[cohere]
-pyautogen[cohere]==0.6.0b1
-
-[cosmosdb]
-pyautogen[cosmosdb]==0.6.0b1
-
-[gemini]
-pyautogen[gemini]==0.6.0b1
-
-[graph]
-pyautogen[graph]==0.6.0b1
-
-[graph-rag-falkor-db]
-pyautogen[graph-rag-falkor-db]==0.6.0b1
-
-[groq]
-pyautogen[groq]==0.6.0b1
-
-[interop]
-pyautogen[interop]==0.6.0b1
-
-[interop-crewai]
-pyautogen[interop-crewai]==0.6.0b1
-
-[interop-langchain]
-pyautogen[interop-langchain]==0.6.0b1
-
-[interop-pydantic-ai]
-pyautogen[interop-pydantic-ai]==0.6.0b1
-
-[jupyter-executor]
-pyautogen[jupyter-executor]==0.6.0b1
-
-[lmm]
-pyautogen[lmm]==0.6.0b1
-
-[long-context]
-pyautogen[long-context]==0.6.0b1
-
-[mathchat]
-pyautogen[mathchat]==0.6.0b1
-
-[mistral]
-pyautogen[mistral]==0.6.0b1
-
-[neo4j]
-pyautogen[neo4j]==0.6.0b1
-
-[ollama]
-pyautogen[ollama]==0.6.0b1
-
-[redis]
-pyautogen[redis]==0.6.0b1
-
-[retrievechat]
-pyautogen[retrievechat]==0.6.0b1
-
-[retrievechat-mongodb]
-pyautogen[retrievechat-mongodb]==0.6.0b1
-
-[retrievechat-pgvector]
-pyautogen[retrievechat-pgvector]==0.6.0b1
-
-[retrievechat-qdrant]
-pyautogen[retrievechat-qdrant]==0.6.0b1
-
-[teachable]
-pyautogen[teachable]==0.6.0b1
-
-[test]
-pyautogen[test]==0.6.0b1
-
-[together]
-pyautogen[together]==0.6.0b1
-
-[twilio]
-pyautogen[twilio]==0.6.0b1
-
-[types]
-pyautogen[types]==0.6.0b1
-
-[websockets]
-pyautogen[websockets]==0.6.0b1
-
-[websurfer]
-pyautogen[websurfer]==0.6.0b1