ag2 0.7.3.tar.gz → 0.7.4b1.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of ag2 might be problematic.

@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: ag2
-Version: 0.7.3
+Version: 0.7.4b1
 Summary: Alias package for pyautogen
 Home-page: https://github.com/ag2ai/ag2
 Author: Chi Wang & Qingyun Wu
@@ -49,6 +49,9 @@ Provides-Extra: groq
 Provides-Extra: cohere
 Provides-Extra: ollama
 Provides-Extra: bedrock
+Provides-Extra: commsagent-discord
+Provides-Extra: commsagent-slack
+Provides-Extra: commsagent-telegram
 Provides-Extra: test
 Provides-Extra: docs
 Provides-Extra: types
@@ -64,12 +67,22 @@ License-File: NOTICE.md
 [![Build](https://github.com/ag2ai/ag2/actions/workflows/python-package.yml/badge.svg)](https://github.com/ag2ai/ag2/actions/workflows/python-package.yml)
 ![Python Version](https://img.shields.io/badge/3.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue)
 [![Discord](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat)](https://discord.gg/pAbnFJrkgZ)
-[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40ag2ai)](https://x.com/Chi_Wang_)
+[![X](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40ag2oss)](https://x.com/ag2oss)
 
 <!-- [![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core) -->
 
 # [AG2](https://github.com/ag2ai/ag2)
 
+## Key Features
+- 🤖 Multi-Agent Framework - Build and orchestrate AI agent interactions
+- 🔧 Flexible Integration - Support for various LLMs (OpenAI, Anthropic, Gemini, etc.)
+- 🛠 Tool Usage - Agents can use external tools and execute code
+- 👥 Human-in-the-Loop - Seamless human participation when needed
+- 🔄 Rich Orchestration Patterns - Agents can be organized in any form you like
+- 🎯 Future-Oriented - Designed for solving difficult problems and harnessing latest and future technology
+
+[📚 Documentation](https://docs.ag2.ai/) | [💡 Examples](https://github.com/ag2ai/build-with-ag2) | [🤝 Contributing](https://docs.ag2.ai/docs/contributor-guide/contributing)
+
 [📚 Cite paper](#related-papers).
 
 <!-- <p align="center">
@@ -138,7 +151,11 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
 
 ## What is AG2
 
-AG2 (formerly AutoGen) is an open-source programming framework for building AI agents and facilitating cooperation among multiple agents to solve tasks. AG2 aims to streamline the development and research of agentic AI, much like PyTorch does for Deep Learning. It offers features such as agents capable of interacting with each other, facilitates the use of various large language models (LLMs) and tool use support, autonomous and human-in-the-loop workflows, and multi-agent conversation patterns.
+AG2 (formerly AutoGen) is an open-source AgentOS for building AI agents and facilitating cooperation among multiple agents to solve tasks. AG2 provides fundamental building blocks needed to create, deploy, and manage AI agents that can work together to solve complex problems.
+
+### Core Concepts
+- **Agents**: Stateful entities that can send messages, receive messages, and generate replies using underlying capabilities powered by LLMs, non-LLM tools, or human inputs. Depending on the underlying capability, an agent may reason, plan, execute tasks or involve other agents before generating a reply.
+- **Conversations**: Structured communication patterns between agents.
 
 **Open Source Statement**: The project welcomes contributions from developers and organizations worldwide. Our goal is to foster a collaborative and inclusive community where diverse perspectives and expertise can drive innovation and enhance the project's capabilities. Whether you are an individual contributor or represent an organization, we invite you to join us in shaping the future of this project. Together, we can build something truly remarkable.
 
@@ -202,7 +219,7 @@ Find more options in [Installation](https://docs.ag2.ai/docs/Installation#option
 
 Even if you are installing and running AG2 locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://docs.ag2.ai/docs/FAQ#if-you-want-to-run-code-execution-in-docker) in docker. Find more instructions and how to change the default behaviour [here](https://docs.ag2.ai/docs/FAQ#if-you-want-to-run-code-execution-locally).
 
-For LLM inference configurations, check the [FAQs](https://docs.ag2.ai/docs/FAQ#set-your-api-endpoints).
+For LLM inference configurations, check the [FAQs](https://docs.ag2.ai/docs/user-guide/advanced-concepts/llm-configuration-deep-dive#llm-configuration).
 
 <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
 <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -226,7 +243,7 @@ For [example](https://github.com/ag2ai/ag2/blob/main/test/twoagent.py),
 ```python
 from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
 # Load LLM inference endpoints from an env variable or a file
-# See https://docs.ag2.ai/docs/FAQ#set-your-api-endpoints
+# See https://docs.ag2.ai/docs/user-guide/advanced-concepts/llm-configuration-deep-dive#llm-configuration
 # and OAI_CONFIG_LIST_sample
 config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
 # You can also set config_list directly as a list, for example, config_list = [{'model': 'gpt-4o', 'api_key': '<your OpenAI API key here>'},]
@@ -5,12 +5,22 @@
 [![Build](https://github.com/ag2ai/ag2/actions/workflows/python-package.yml/badge.svg)](https://github.com/ag2ai/ag2/actions/workflows/python-package.yml)
 ![Python Version](https://img.shields.io/badge/3.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue)
 [![Discord](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat)](https://discord.gg/pAbnFJrkgZ)
-[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40ag2ai)](https://x.com/Chi_Wang_)
+[![X](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40ag2oss)](https://x.com/ag2oss)
 
 <!-- [![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core) -->
 
 # [AG2](https://github.com/ag2ai/ag2)
 
+## Key Features
+- 🤖 Multi-Agent Framework - Build and orchestrate AI agent interactions
+- 🔧 Flexible Integration - Support for various LLMs (OpenAI, Anthropic, Gemini, etc.)
+- 🛠 Tool Usage - Agents can use external tools and execute code
+- 👥 Human-in-the-Loop - Seamless human participation when needed
+- 🔄 Rich Orchestration Patterns - Agents can be organized in any form you like
+- 🎯 Future-Oriented - Designed for solving difficult problems and harnessing latest and future technology
+
+[📚 Documentation](https://docs.ag2.ai/) | [💡 Examples](https://github.com/ag2ai/build-with-ag2) | [🤝 Contributing](https://docs.ag2.ai/docs/contributor-guide/contributing)
+
 [📚 Cite paper](#related-papers).
 
 <!-- <p align="center">
@@ -79,7 +89,11 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
 
 ## What is AG2
 
-AG2 (formerly AutoGen) is an open-source programming framework for building AI agents and facilitating cooperation among multiple agents to solve tasks. AG2 aims to streamline the development and research of agentic AI, much like PyTorch does for Deep Learning. It offers features such as agents capable of interacting with each other, facilitates the use of various large language models (LLMs) and tool use support, autonomous and human-in-the-loop workflows, and multi-agent conversation patterns.
+AG2 (formerly AutoGen) is an open-source AgentOS for building AI agents and facilitating cooperation among multiple agents to solve tasks. AG2 provides fundamental building blocks needed to create, deploy, and manage AI agents that can work together to solve complex problems.
+
+### Core Concepts
+- **Agents**: Stateful entities that can send messages, receive messages, and generate replies using underlying capabilities powered by LLMs, non-LLM tools, or human inputs. Depending on the underlying capability, an agent may reason, plan, execute tasks or involve other agents before generating a reply.
+- **Conversations**: Structured communication patterns between agents.
 
 **Open Source Statement**: The project welcomes contributions from developers and organizations worldwide. Our goal is to foster a collaborative and inclusive community where diverse perspectives and expertise can drive innovation and enhance the project's capabilities. Whether you are an individual contributor or represent an organization, we invite you to join us in shaping the future of this project. Together, we can build something truly remarkable.
 
@@ -143,7 +157,7 @@ Find more options in [Installation](https://docs.ag2.ai/docs/Installation#option
 
 Even if you are installing and running AG2 locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://docs.ag2.ai/docs/FAQ#if-you-want-to-run-code-execution-in-docker) in docker. Find more instructions and how to change the default behaviour [here](https://docs.ag2.ai/docs/FAQ#if-you-want-to-run-code-execution-locally).
 
-For LLM inference configurations, check the [FAQs](https://docs.ag2.ai/docs/FAQ#set-your-api-endpoints).
+For LLM inference configurations, check the [FAQs](https://docs.ag2.ai/docs/user-guide/advanced-concepts/llm-configuration-deep-dive#llm-configuration).
 
 <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
 <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -167,7 +181,7 @@ For [example](https://github.com/ag2ai/ag2/blob/main/test/twoagent.py),
 ```python
 from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
 # Load LLM inference endpoints from an env variable or a file
-# See https://docs.ag2.ai/docs/FAQ#set-your-api-endpoints
+# See https://docs.ag2.ai/docs/user-guide/advanced-concepts/llm-configuration-deep-dive#llm-configuration
 # and OAI_CONFIG_LIST_sample
 config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
 # You can also set config_list directly as a list, for example, config_list = [{'model': 'gpt-4o', 'api_key': '<your OpenAI API key here>'},]
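The `config_list_from_json` snippet in the README diff above reads its configuration from the `OAI_CONFIG_LIST` environment variable or a file of that name. A stdlib-only illustration of the JSON shape involved, using the placeholder entry from the README itself (the API key is not real):

```python
import json
import os

# The OAI_CONFIG_LIST value is a JSON array of per-model configuration dicts.
os.environ["OAI_CONFIG_LIST"] = json.dumps([
    {"model": "gpt-4o", "api_key": "<your OpenAI API key here>"},
])

# config_list_from_json would parse the same structure; plain json.loads shows the shape.
config_list = json.loads(os.environ["OAI_CONFIG_LIST"])
print(config_list[0]["model"])  # -> gpt-4o
```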
@@ -16,6 +16,5 @@ test/test_import.py
 test/test_import_utils.py
 test/test_logging.py
 test/test_notebook.py
-test/test_pydantic.py
 test/test_retrieve_utils.py
 test/test_token_count.py
@@ -0,0 +1,136 @@
+pyautogen==0.7.4b1
+
+[anthropic]
+pyautogen[anthropic]==0.7.4b1
+
+[autobuild]
+pyautogen[autobuild]==0.7.4b1
+
+[bedrock]
+pyautogen[bedrock]==0.7.4b1
+
+[blendsearch]
+pyautogen[blendsearch]==0.7.4b1
+
+[browser-use]
+pyautogen[browser-use]==0.7.4b1
+
+[captainagent]
+pyautogen[captainagent]==0.7.4b1
+
+[cerebras]
+pyautogen[cerebras]==0.7.4b1
+
+[cohere]
+pyautogen[cohere]==0.7.4b1
+
+[commsagent-discord]
+pyautogen[commsagent-discord]==0.7.4b1
+
+[commsagent-slack]
+pyautogen[commsagent-slack]==0.7.4b1
+
+[commsagent-telegram]
+pyautogen[commsagent-telegram]==0.7.4b1
+
+[cosmosdb]
+pyautogen[cosmosdb]==0.7.4b1
+
+[crawl4ai]
+pyautogen[crawl4ai]==0.7.4b1
+
+[dev]
+pyautogen[dev]==0.7.4b1
+
+[docs]
+pyautogen[docs]==0.7.4b1
+
+[flaml]
+pyautogen[flaml]==0.7.4b1
+
+[gemini]
+pyautogen[gemini]==0.7.4b1
+
+[graph]
+pyautogen[graph]==0.7.4b1
+
+[graph-rag-falkor-db]
+pyautogen[graph-rag-falkor-db]==0.7.4b1
+
+[groq]
+pyautogen[groq]==0.7.4b1
+
+[interop]
+pyautogen[interop]==0.7.4b1
+
+[interop-crewai]
+pyautogen[interop-crewai]==0.7.4b1
+
+[interop-langchain]
+pyautogen[interop-langchain]==0.7.4b1
+
+[interop-pydantic-ai]
+pyautogen[interop-pydantic-ai]==0.7.4b1
+
+[jupyter-executor]
+pyautogen[jupyter-executor]==0.7.4b1
+
+[lint]
+pyautogen[lint]==0.7.4b1
+
+[lmm]
+pyautogen[lmm]==0.7.4b1
+
+[long-context]
+pyautogen[long-context]==0.7.4b1
+
+[mathchat]
+pyautogen[mathchat]==0.7.4b1
+
+[mistral]
+pyautogen[mistral]==0.7.4b1
+
+[neo4j]
+pyautogen[neo4j]==0.7.4b1
+
+[ollama]
+pyautogen[ollama]==0.7.4b1
+
+[rag]
+pyautogen[rag]==0.7.4b1
+
+[redis]
+pyautogen[redis]==0.7.4b1
+
+[retrievechat]
+pyautogen[retrievechat]==0.7.4b1
+
+[retrievechat-mongodb]
+pyautogen[retrievechat-mongodb]==0.7.4b1
+
+[retrievechat-pgvector]
+pyautogen[retrievechat-pgvector]==0.7.4b1
+
+[retrievechat-qdrant]
+pyautogen[retrievechat-qdrant]==0.7.4b1
+
+[teachable]
+pyautogen[teachable]==0.7.4b1
+
+[test]
+pyautogen[test]==0.7.4b1
+
+[together]
+pyautogen[together]==0.7.4b1
+
+[twilio]
+pyautogen[twilio]==0.7.4b1
+
+[types]
+pyautogen[types]==0.7.4b1
+
+[websockets]
+pyautogen[websockets]==0.7.4b1
+
+[websurfer]
+pyautogen[websurfer]==0.7.4b1
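Every entry above pins the exact pre-release `0.7.4b1`. A small sketch (stdlib `re`, deliberately simplified relative to the full PEP 440 grammar) of how that version string decomposes; the regex here is an illustration, not the canonical PEP 440 pattern:

```python
import re

# "0.7.4b1" is a PEP 440 pre-release: release segment 0.7.4, phase "b" (beta),
# pre-release number 1. Installers skip such versions unless explicitly requested.
m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)(?:(a|b|rc)(\d+))?", "0.7.4b1")
release = tuple(int(g) for g in m.group(1, 2, 3))
phase, number = m.group(4), int(m.group(5))
print(release, phase, number)  # -> (0, 7, 4) b 1
```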
@@ -11,8 +11,7 @@ name = "pyautogen"
 description = "A programming framework for agentic AI"
 readme = "README.md"
 authors = [
-{name = "Chi Wang", email = "support@ag2.ai"},
-{name = "Qingyun Wu", email = "support@ag2.ai"},
+{name = "Chi Wang & Qingyun Wu", email = "support@ag2.ai"}
 ]
 
 keywords = [
@@ -86,8 +85,8 @@ jupyter-executor = [
 ]
 
 retrievechat = [
-"protobuf==4.25.3",
-"chromadb==0.5.3",
+"protobuf==5.29.3",
+"chromadb==0.6.3",
 "sentence_transformers",
 "pypdf",
 "ipython",
@@ -114,7 +113,7 @@ retrievechat-qdrant = [
 ]
 
 graph-rag-falkor-db = [
-"graphrag_sdk==0.3.3",
+"graphrag_sdk==0.6.0",
 "falkordb>=1.0.10"
 ]
 
@@ -135,10 +134,10 @@ browser-use = [
 
 neo4j = [
 "docx2txt==0.8",
-"llama-index==0.12.11",
-"llama-index-graph-stores-neo4j==0.4.5",
-"llama-index-core==0.12.11",
-"llama-index-readers-web==0.3.4",
+"llama-index==0.12.16",
+"llama-index-graph-stores-neo4j==0.4.6",
+"llama-index-core==0.12.16",
+"llama-index-readers-web==0.3.5",
 ]
 
 # used for agentchat_realtime_swarm notebook and realtime agent twilio demo
@@ -155,7 +154,7 @@ interop-crewai = [
 "litellm<1.57.5; sys_platform=='win32'",
 ]
 interop-langchain = ["langchain-community>=0.3.12,<1"]
-interop-pydantic-ai = ["pydantic-ai==0.0.13"]
+interop-pydantic-ai = ["pydantic-ai==0.0.22"]
 interop =[
 "pyautogen[interop-crewai, interop-langchain, interop-pydantic-ai]",
 ]
@@ -164,7 +163,7 @@ interop =[
 autobuild = ["chromadb", "sentence-transformers", "huggingface-hub", "pysqlite3-binary"]
 
 blendsearch = ["flaml[blendsearch]"]
-mathchat = ["sympy", "pydantic==1.10.9", "wolframalpha"]
+mathchat = ["sympy", "wolframalpha"]
 captainagent = ["pyautogen[autobuild]", "pandas"]
 teachable = ["chromadb"]
 lmm = ["replicate", "pillow"]
@@ -190,18 +189,22 @@ cohere = ["cohere>=5.5.8"]
 ollama = ["ollama>=0.4.5", "fix_busted_json>=0.0.18"]
 bedrock = ["boto3>=1.34.149"]
 
+commsagent-discord = ["discord.py>=2.4.0,<2.5"]
+commsagent-slack = ["slack_sdk>=3.33.0,<3.40"]
+commsagent-telegram = ["telethon>=1.38.1, <2"]
+
 ## dev dependencies
 
 # test dependencies
 test = [
 "ipykernel==6.29.5",
-"nbconvert==7.16.5",
+"nbconvert==7.16.6",
 "nbformat==5.10.4",
 "pytest-cov==6.0.0",
-"pytest-asyncio==0.25.2",
+"pytest-asyncio==0.25.3",
 "pytest==8.3.4",
 "pandas==2.2.3",
-"fastapi==0.115.6",
+"fastapi==0.115.8",
 ]
 
 # docs dependencies
@@ -214,13 +217,13 @@ docs = [
 ]
 
 types = [
-"mypy==1.14.1",
+"mypy==1.15.0",
 "pyautogen[test, jupyter-executor, interop]",
 ]
 
 lint = [
-"ruff==0.9.2",
-"codespell==2.3.0",
+"ruff==0.9.4",
+"codespell==2.4.1",
 "pyupgrade-directories==0.3.0",
 ]
 
@@ -229,7 +232,7 @@ dev = [
 "pyautogen[lint,test,types,docs]",
 "pre-commit==4.1.0",
 "detect-secrets==1.5.0",
-"uv==0.5.21",
+"uv==0.5.29",
 ]
 
 
@@ -369,7 +372,6 @@ files = [
 "autogen/exception_utils.py",
 "autogen/coding",
 "autogen/oai/openai_utils.py",
-"autogen/_pydantic.py",
 "autogen/io",
 "autogen/tools",
 "autogen/interop",
@@ -378,7 +380,6 @@ files = [
 "autogen/import_utils.py",
 "autogen/agentchat/contrib/rag",
 "website/*.py",
-"test/test_pydantic.py",
 "test/io",
 "test/tools",
 "test/interop",
@@ -65,6 +65,9 @@ setuptools.setup(
 "cohere": ["pyautogen[cohere]==" + __version__],
 "ollama": ["pyautogen[ollama]==" + __version__],
 "bedrock": ["pyautogen[bedrock]==" + __version__],
+"commsagent-discord": ["pyautogen[commsagent-discord]==" + __version__],
+"commsagent-slack": ["pyautogen[commsagent-slack]==" + __version__],
+"commsagent-telegram": ["pyautogen[commsagent-telegram]==" + __version__],
 "test": ["pyautogen[test]==" + __version__],
 "docs": ["pyautogen[docs]==" + __version__],
 "types": ["pyautogen[types]==" + __version__],
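The setup.py hunk above extends a simple alias pattern: since ag2 is an alias package, every extra just forwards to the same extra of pyautogen at the package's own pinned version. A sketch of that pattern for the three new extras (the dict comprehension is illustrative; the real setup.py spells each entry out):

```python
# ag2's extras delegate to pyautogen's extras at an exact version pin.
__version__ = "0.7.4b1"

new_extras = ["commsagent-discord", "commsagent-slack", "commsagent-telegram"]
extras_require = {name: [f"pyautogen[{name}]==" + __version__] for name in new_extras}
print(extras_require["commsagent-slack"])  # -> ['pyautogen[commsagent-slack]==0.7.4b1']
```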
@@ -18,7 +18,7 @@ import requests
 from autogen.browser_utils import SimpleTextBrowser
 from autogen.import_utils import optional_import_block, skip_on_missing_imports
 
-BLOG_POST_URL = "https://docs.ag2.ai/blog/2023-04-21-LLM-tuning-math"
+BLOG_POST_URL = "https://docs.ag2.ai/docs/blog/2023-04-21-LLM-tuning-math"
 BLOG_POST_TITLE = "Does Model and Inference Parameter Matter in LLM Applications? - A Case Study for MATH - AG2"
 BLOG_POST_STRING = "Large language models (LLMs) are powerful tools that can generate natural language texts for various applications, such as chatbots, summarization, translation, and more. GPT-4 is currently the state of the art LLM in the world. Is model selection irrelevant? What about inference parameters?"
 
@@ -0,0 +1,74 @@
+# Copyright (c) 2023 - 2025, AG2ai, Inc., AG2ai open-source projects maintainers and core contributors
+#
+# SPDX-License-Identifier: Apache-2.0
+#
+# Portions derived from https://github.com/microsoft/autogen are under the MIT License.
+# SPDX-License-Identifier: MIT
+
+
+import os
+import subprocess
+
+import pytest
+
+from .conftest import Credentials, Secrets, credentials_all_llms, suppress_gemini_resource_exhausted
+
+
+@pytest.mark.parametrize("credentials_from_test_param", credentials_all_llms, indirect=True)
+@suppress_gemini_resource_exhausted
+def test_credentials_from_test_param_fixture(
+    credentials_from_test_param: Credentials,
+    request: pytest.FixtureRequest,
+) -> None:
+    # Get the parameter name request node
+    current_llm = request.node.callspec.id
+
+    assert current_llm is not None
+    assert isinstance(credentials_from_test_param, Credentials)
+
+    first_config = credentials_from_test_param.config_list[0]
+    if "gpt_4" in current_llm:
+        if "api_type" in first_config:
+            assert first_config["api_type"] == "openai"
+    elif "gemini" in current_llm:
+        assert first_config["api_type"] == "google"
+    elif "anthropic" in current_llm:
+        assert first_config["api_type"] == "anthropic"
+    else:
+        assert False, f"Unknown LLM fixture: {current_llm}"
+
+
+class TestSecrets:
+    def test_sanitize_secrets(self):
+        Secrets.add_secret("mysecret")
+        data = "This contains mysecret and ysecre and somemysecreand should be sanitized."
+        sanitized = Secrets.sanitize_secrets(data)
+        assert sanitized == "This contains ***** and ***** and some*****and should be sanitized."
+
+    @pytest.mark.skipif(
+        not os.getenv("RUN_SANITIZATION_TEST"),
+        reason="Skipping sensitive tests. Set RUN_SANITIZATION_TEST=1 to run.",
+    )
+    def test_raise_exception_with_secret(self):
+        Secrets.add_secret("mysecret")
+        raise Exception("This is a test exception. mysecret exposed!!!")
+
+    def test_sensitive_output_is_sanitized(self):
+        # Run pytest for the sensitive tests and capture the output
+        result = subprocess.run(
+            [
+                "pytest",
+                "-s",
+                "test/test_conftest.py::TestSecrets::test_raise_exception_with_secret",
+            ],
+            env={**os.environ, "RUN_SANITIZATION_TEST": "1"},
+            stdout=subprocess.PIPE,
+            stderr=subprocess.PIPE,
+            text=True,
+        )
+
+        # Combine stdout and stderr to search for secrets
+        output = result.stdout + result.stderr
+
+        assert "mysecret" not in output, "Secret exposed in test output!"
+        assert "*****" in output, "Sanitization is not working as expected!"
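The new `TestSecrets` cases above pin down an interesting property: sanitization masks not only the registered secret but also sufficiently long fragments of it ("ysecre", "mysecre" inside "somemysecreand"). A minimal sketch with that property — the real `Secrets` class lives in `test/conftest.py` and may be implemented differently; the fragment-length threshold of 5 is an assumption:

```python
class Secrets:
    _secrets: set = set()

    @classmethod
    def add_secret(cls, secret: str) -> None:
        cls._secrets.add(secret)

    @classmethod
    def sanitize_secrets(cls, text: str, min_len: int = 5) -> str:
        for secret in cls._secrets:
            # All fragments of the secret with length >= min_len, longest first,
            # so a full match is masked before its own sub-fragments are tried.
            fragments = {
                secret[i:j]
                for i in range(len(secret))
                for j in range(i + min_len, len(secret) + 1)
            }
            for fragment in sorted(fragments, key=len, reverse=True):
                text = text.replace(fragment, "*****")
        return text


Secrets.add_secret("mysecret")
masked = Secrets.sanitize_secrets(
    "This contains mysecret and ysecre and somemysecreand should be sanitized."
)
print(masked)  # -> This contains ***** and ***** and some*****and should be sanitized.
```

This reproduces exactly the string asserted in `test_sanitize_secrets`.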
@@ -295,9 +295,9 @@ def test_to_dict():
     assert result["foo_val"] == expected_foo_val_field
     assert result["o"] == expected_o_field
     assert len(result["agents"]) == 2
-    for agent in result["agents"]:
-        assert "autogen.ConversableAgent" in agent
-    assert "autogen.ConversableAgent" in result["first_agent"]
+    assert result["agents"][0] == "alice"
+    assert result["agents"][1] == "bob"
+    assert "alice" in result["first_agent"]
 
 
 @patch("logging.Logger.error")
@@ -1,127 +0,0 @@
-pyautogen==0.7.3
-
-[anthropic]
-pyautogen[anthropic]==0.7.3
-
-[autobuild]
-pyautogen[autobuild]==0.7.3
-
-[bedrock]
-pyautogen[bedrock]==0.7.3
-
-[blendsearch]
-pyautogen[blendsearch]==0.7.3
-
-[browser-use]
-pyautogen[browser-use]==0.7.3
-
-[captainagent]
-pyautogen[captainagent]==0.7.3
-
-[cerebras]
-pyautogen[cerebras]==0.7.3
-
-[cohere]
-pyautogen[cohere]==0.7.3
-
-[cosmosdb]
-pyautogen[cosmosdb]==0.7.3
-
-[crawl4ai]
-pyautogen[crawl4ai]==0.7.3
-
-[dev]
-pyautogen[dev]==0.7.3
-
-[docs]
-pyautogen[docs]==0.7.3
-
-[flaml]
-pyautogen[flaml]==0.7.3
-
-[gemini]
-pyautogen[gemini]==0.7.3
-
-[graph]
-pyautogen[graph]==0.7.3
-
-[graph-rag-falkor-db]
-pyautogen[graph-rag-falkor-db]==0.7.3
-
-[groq]
-pyautogen[groq]==0.7.3
-
-[interop]
-pyautogen[interop]==0.7.3
-
-[interop-crewai]
-pyautogen[interop-crewai]==0.7.3
-
-[interop-langchain]
-pyautogen[interop-langchain]==0.7.3
-
-[interop-pydantic-ai]
-pyautogen[interop-pydantic-ai]==0.7.3
-
-[jupyter-executor]
-pyautogen[jupyter-executor]==0.7.3
-
-[lint]
-pyautogen[lint]==0.7.3
-
-[lmm]
-pyautogen[lmm]==0.7.3
-
-[long-context]
-pyautogen[long-context]==0.7.3
-
-[mathchat]
-pyautogen[mathchat]==0.7.3
-
-[mistral]
-pyautogen[mistral]==0.7.3
-
-[neo4j]
-pyautogen[neo4j]==0.7.3
-
-[ollama]
-pyautogen[ollama]==0.7.3
-
-[rag]
-pyautogen[rag]==0.7.3
-
-[redis]
-pyautogen[redis]==0.7.3
-
-[retrievechat]
-pyautogen[retrievechat]==0.7.3
-
-[retrievechat-mongodb]
-pyautogen[retrievechat-mongodb]==0.7.3
-
-[retrievechat-pgvector]
-pyautogen[retrievechat-pgvector]==0.7.3
-
-[retrievechat-qdrant]
-pyautogen[retrievechat-qdrant]==0.7.3
-
-[teachable]
-pyautogen[teachable]==0.7.3
-
-[test]
-pyautogen[test]==0.7.3
-
-[together]
-pyautogen[together]==0.7.3
-
-[twilio]
-pyautogen[twilio]==0.7.3
-
-[types]
-pyautogen[types]==0.7.3
-
-[websockets]
-pyautogen[websockets]==0.7.3
-
-[websurfer]
-pyautogen[websurfer]==0.7.3
@@ -1,35 +0,0 @@
-# Copyright (c) 2023 - 2025, AG2ai, Inc., AG2ai open-source projects maintainers and core contributors
-#
-# SPDX-License-Identifier: Apache-2.0
-#
-# Portions derived from https://github.com/microsoft/autogen are under the MIT License.
-# SPDX-License-Identifier: MIT
-
-
-import pytest
-
-from .conftest import Credentials, credentials_all_llms, suppress_gemini_resource_exhausted
-
-
-@pytest.mark.parametrize("credentials_from_test_param", credentials_all_llms, indirect=True)
-@suppress_gemini_resource_exhausted
-def test_credentials_from_test_param_fixture(
-    credentials_from_test_param: Credentials,
-    request: pytest.FixtureRequest,
-) -> None:
-    # Get the parameter name request node
-    current_llm = request.node.callspec.id
-
-    assert current_llm is not None
-    assert isinstance(credentials_from_test_param, Credentials)
-
-    first_config = credentials_from_test_param.config_list[0]
-    if "gpt_4" in current_llm:
-        if "api_type" in first_config:
-            assert first_config["api_type"] == "openai"
-    elif "gemini" in current_llm:
-        assert first_config["api_type"] == "google"
-    elif "anthropic" in current_llm:
-        assert first_config["api_type"] == "anthropic"
-    else:
-        assert False, f"Unknown LLM fixture: {current_llm}"
@@ -1,46 +0,0 @@
-# Copyright (c) 2023 - 2025, AG2ai, Inc., AG2ai open-source projects maintainers and core contributors
-#
-# SPDX-License-Identifier: Apache-2.0
-#
-# Portions derived from https://github.com/microsoft/autogen are under the MIT License.
-# SPDX-License-Identifier: MIT
-from typing import Annotated, Optional, Union
-
-from pydantic import BaseModel
-
-from autogen._pydantic import model_dump, model_dump_json, type2schema
-
-
-def test_type2schema() -> None:
-    assert type2schema(str) == {"type": "string"}
-    assert type2schema(int) == {"type": "integer"}
-    assert type2schema(float) == {"type": "number"}
-    assert type2schema(bool) == {"type": "boolean"}
-    assert type2schema(None) == {"type": "null"}
-    assert type2schema(Optional[int]) == {"anyOf": [{"type": "integer"}, {"type": "null"}]}
-    assert type2schema(list[int]) == {"items": {"type": "integer"}, "type": "array"}
-    assert type2schema(tuple[int, float, str]) == {
-        "maxItems": 3,
-        "minItems": 3,
-        "prefixItems": [{"type": "integer"}, {"type": "number"}, {"type": "string"}],
-        "type": "array",
-    }
-    assert type2schema(dict[str, int]) == {"additionalProperties": {"type": "integer"}, "type": "object"}
-    assert type2schema(Annotated[str, "some text"]) == {"type": "string"}
-    assert type2schema(Union[int, float]) == {"anyOf": [{"type": "integer"}, {"type": "number"}]}
-
-
-def test_model_dump() -> None:
-    class A(BaseModel):
-        a: str
-        b: int = 2
-
-    assert model_dump(A(a="aaa")) == {"a": "aaa", "b": 2}
-
-
-def test_model_dump_json() -> None:
-    class A(BaseModel):
-        a: str
-        b: int = 2
-
-    assert model_dump_json(A(a="aaa")).replace(" ", "") == '{"a":"aaa","b":2}'
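The deleted `test_pydantic.py` above fixed the behavior of the removed `type2schema` helper (the module `autogen/_pydantic.py` is also dropped from the pyproject file list in this release). A stdlib-only sketch reproducing the schema mappings asserted by that test — this is not the original implementation, which relied on pydantic:

```python
from typing import Annotated, Optional, Union, get_args, get_origin

# Basic Python types and their JSON Schema type names, as the deleted test asserted.
BASIC = {str: "string", int: "integer", float: "number", bool: "boolean"}


def type2schema(t):
    if t is None or t is type(None):
        return {"type": "null"}
    if t in BASIC:
        return {"type": BASIC[t]}
    origin = get_origin(t)
    if origin is Annotated:  # drop the metadata, keep the underlying type
        return type2schema(get_args(t)[0])
    if origin is Union:  # also covers Optional[X] == Union[X, None]
        return {"anyOf": [type2schema(a) for a in get_args(t)]}
    if origin is list:
        return {"items": type2schema(get_args(t)[0]), "type": "array"}
    if origin is tuple:
        args = get_args(t)
        return {
            "maxItems": len(args),
            "minItems": len(args),
            "prefixItems": [type2schema(a) for a in args],
            "type": "array",
        }
    if origin is dict:
        _key, value = get_args(t)
        return {"additionalProperties": type2schema(value), "type": "object"}
    raise NotImplementedError(f"unsupported type: {t!r}")


print(type2schema(Optional[int]))  # -> {'anyOf': [{'type': 'integer'}, {'type': 'null'}]}
```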