ag2 0.7.3__tar.gz → 0.7.4b2__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of ag2 has been flagged as possibly problematic; see the registry's advisory for details.

@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: ag2
-Version: 0.7.3
+Version: 0.7.4b2
 Summary: Alias package for pyautogen
 Home-page: https://github.com/ag2ai/ag2
 Author: Chi Wang & Qingyun Wu
@@ -49,6 +49,9 @@ Provides-Extra: groq
 Provides-Extra: cohere
 Provides-Extra: ollama
 Provides-Extra: bedrock
+Provides-Extra: commsagent-discord
+Provides-Extra: commsagent-slack
+Provides-Extra: commsagent-telegram
 Provides-Extra: test
 Provides-Extra: docs
 Provides-Extra: types
@@ -62,14 +65,24 @@ License-File: NOTICE.md
 ![Pypi Downloads](https://img.shields.io/pypi/dm/pyautogen?label=PyPI%20downloads)
 [![PyPI version](https://badge.fury.io/py/autogen.svg)](https://badge.fury.io/py/autogen)
 [![Build](https://github.com/ag2ai/ag2/actions/workflows/python-package.yml/badge.svg)](https://github.com/ag2ai/ag2/actions/workflows/python-package.yml)
-![Python Version](https://img.shields.io/badge/3.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue)
+![Python Version](https://img.shields.io/pypi/pyversions/pyautogen?logoColor=blue)
 [![Discord](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat)](https://discord.gg/pAbnFJrkgZ)
-[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40ag2ai)](https://x.com/Chi_Wang_)
+[![X](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40ag2oss)](https://x.com/ag2oss)
 
 <!-- [![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core) -->
 
 # [AG2](https://github.com/ag2ai/ag2)
 
+## Key Features
+- 🤖 Multi-Agent Framework - Build and orchestrate AI agent interactions
+- 🔧 Flexible Integration - Support for various LLMs (OpenAI, Anthropic, Gemini, etc.)
+- 🛠 Tool Usage - Agents can use external tools and execute code
+- 👥 Human-in-the-Loop - Seamless human participation when needed
+- 🔄 Rich Orchestration Patterns - Agents can be organized in any form you like
+- 🎯 Future-Oriented - Designed for solving difficult problems and harnessing latest and future technology
+
+[📚 Documentation](https://docs.ag2.ai/) | [💡 Examples](https://github.com/ag2ai/build-with-ag2) | [🤝 Contributing](https://docs.ag2.ai/docs/contributor-guide/contributing)
+
 [📚 Cite paper](#related-papers).
 
 <!-- <p align="center">
@@ -138,7 +151,11 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
 
 ## What is AG2
 
-AG2 (formerly AutoGen) is an open-source programming framework for building AI agents and facilitating cooperation among multiple agents to solve tasks. AG2 aims to streamline the development and research of agentic AI, much like PyTorch does for Deep Learning. It offers features such as agents capable of interacting with each other, facilitates the use of various large language models (LLMs) and tool use support, autonomous and human-in-the-loop workflows, and multi-agent conversation patterns.
+AG2 (formerly AutoGen) is an open-source AgentOS for building AI agents and facilitating cooperation among multiple agents to solve tasks. AG2 provides fundamental building blocks needed to create, deploy, and manage AI agents that can work together to solve complex problems.
+
+### Core Concepts
+- **Agents**: Stateful entities that can send messages, receive messages, and generate replies using underlying capabilities powered by LLMs, non-LLM tools, or human inputs. Depending on the underlying capability, an agent may reason, plan, execute tasks or involve other agents before generating a reply.
+- **Conversations**: Structured communication patterns between agents.
 
 **Open Source Statement**: The project welcomes contributions from developers and organizations worldwide. Our goal is to foster a collaborative and inclusive community where diverse perspectives and expertise can drive innovation and enhance the project's capabilities. Whether you are an individual contributor or represent an organization, we invite you to join us in shaping the future of this project. Together, we can build something truly remarkable.
 
@@ -179,7 +196,7 @@ _NOTE_: OAI_CONFIG_LIST_sample lists gpt-4o as the default model. If you use a d
 
 ### Option 1. Install and Run AG2 in Docker
 
-Find detailed instructions for users [here](https://docs.ag2.ai/docs/installation/Docker#step-1-install-docker), and for developers [here](https://docs.ag2.ai/docs/contributor-guide/docker).
+Find detailed instructions for users [here](https://docs.ag2.ai/docs/installation/Docker#step-1-install-docker), and for developers [here](https://docs.ag2.ai/docs/contributor-guide/setup-development-environment).
 
 ### Option 2. Install AG2 Locally
 
@@ -202,7 +219,7 @@ Find more options in [Installation](https://docs.ag2.ai/docs/Installation#option
 
 Even if you are installing and running AG2 locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://docs.ag2.ai/docs/FAQ#if-you-want-to-run-code-execution-in-docker) in docker. Find more instructions and how to change the default behaviour [here](https://docs.ag2.ai/docs/FAQ#if-you-want-to-run-code-execution-locally).
 
-For LLM inference configurations, check the [FAQs](https://docs.ag2.ai/docs/FAQ#set-your-api-endpoints).
+For LLM inference configurations, check the [FAQs](https://docs.ag2.ai/docs/user-guide/advanced-concepts/llm-configuration-deep-dive#llm-configuration).
 
 <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
 <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -226,7 +243,7 @@ For [example](https://github.com/ag2ai/ag2/blob/main/test/twoagent.py),
 ```python
 from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
 # Load LLM inference endpoints from an env variable or a file
-# See https://docs.ag2.ai/docs/FAQ#set-your-api-endpoints
+# See https://docs.ag2.ai/docs/user-guide/advanced-concepts/llm-configuration-deep-dive#llm-configuration
 # and OAI_CONFIG_LIST_sample
 config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
 # You can also set config_list directly as a list, for example, config_list = [{'model': 'gpt-4o', 'api_key': '<your OpenAI API key here>'},]
@@ -258,7 +275,7 @@ Please find more [code examples](https://docs.ag2.ai/docs/Examples#automated-mul
 
 ## Enhanced LLM Inferences
 
-AG2 also helps maximize the utility out of the expensive LLMs such as gpt-4o. It offers [enhanced LLM inference](https://docs.ag2.ai/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference and templating.
+AG2 also helps maximize the utility out of the expensive LLMs such as gpt-4o. It offers enhanced LLM inference with powerful functionalities like caching, error handling, multi-config inference and templating.
 
 <!-- For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.
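The quickstart in the diff above loads LLM endpoints from an `OAI_CONFIG_LIST` file via `config_list_from_json`. As an illustration only (the file name and the `model`/`api_key` fields follow the README; the concrete values here are placeholders, not real credentials), that file is simply a JSON list of per-model config dicts, so its shape can be checked with the standard library alone:

```python
import json

# Hypothetical OAI_CONFIG_LIST contents (placeholder values).
sample = '[{"model": "gpt-4o", "api_key": "sk-PLACEHOLDER"}]'

# config_list_from_json ultimately yields a list of dicts of this shape.
config_list = json.loads(sample)

assert isinstance(config_list, list)
print(config_list[0]["model"])  # -> gpt-4o
```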
 
@@ -3,14 +3,24 @@
 ![Pypi Downloads](https://img.shields.io/pypi/dm/pyautogen?label=PyPI%20downloads)
 [![PyPI version](https://badge.fury.io/py/autogen.svg)](https://badge.fury.io/py/autogen)
 [![Build](https://github.com/ag2ai/ag2/actions/workflows/python-package.yml/badge.svg)](https://github.com/ag2ai/ag2/actions/workflows/python-package.yml)
-![Python Version](https://img.shields.io/badge/3.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue)
+![Python Version](https://img.shields.io/pypi/pyversions/pyautogen?logoColor=blue)
 [![Discord](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat)](https://discord.gg/pAbnFJrkgZ)
-[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40ag2ai)](https://x.com/Chi_Wang_)
+[![X](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40ag2oss)](https://x.com/ag2oss)
 
 <!-- [![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core) -->
 
 # [AG2](https://github.com/ag2ai/ag2)
 
+## Key Features
+- 🤖 Multi-Agent Framework - Build and orchestrate AI agent interactions
+- 🔧 Flexible Integration - Support for various LLMs (OpenAI, Anthropic, Gemini, etc.)
+- 🛠 Tool Usage - Agents can use external tools and execute code
+- 👥 Human-in-the-Loop - Seamless human participation when needed
+- 🔄 Rich Orchestration Patterns - Agents can be organized in any form you like
+- 🎯 Future-Oriented - Designed for solving difficult problems and harnessing latest and future technology
+
+[📚 Documentation](https://docs.ag2.ai/) | [💡 Examples](https://github.com/ag2ai/build-with-ag2) | [🤝 Contributing](https://docs.ag2.ai/docs/contributor-guide/contributing)
+
 [📚 Cite paper](#related-papers).
 
 <!-- <p align="center">
@@ -79,7 +89,11 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
 
 ## What is AG2
 
-AG2 (formerly AutoGen) is an open-source programming framework for building AI agents and facilitating cooperation among multiple agents to solve tasks. AG2 aims to streamline the development and research of agentic AI, much like PyTorch does for Deep Learning. It offers features such as agents capable of interacting with each other, facilitates the use of various large language models (LLMs) and tool use support, autonomous and human-in-the-loop workflows, and multi-agent conversation patterns.
+AG2 (formerly AutoGen) is an open-source AgentOS for building AI agents and facilitating cooperation among multiple agents to solve tasks. AG2 provides fundamental building blocks needed to create, deploy, and manage AI agents that can work together to solve complex problems.
+
+### Core Concepts
+- **Agents**: Stateful entities that can send messages, receive messages, and generate replies using underlying capabilities powered by LLMs, non-LLM tools, or human inputs. Depending on the underlying capability, an agent may reason, plan, execute tasks or involve other agents before generating a reply.
+- **Conversations**: Structured communication patterns between agents.
 
 **Open Source Statement**: The project welcomes contributions from developers and organizations worldwide. Our goal is to foster a collaborative and inclusive community where diverse perspectives and expertise can drive innovation and enhance the project's capabilities. Whether you are an individual contributor or represent an organization, we invite you to join us in shaping the future of this project. Together, we can build something truly remarkable.
 
@@ -120,7 +134,7 @@ _NOTE_: OAI_CONFIG_LIST_sample lists gpt-4o as the default model. If you use a d
 
 ### Option 1. Install and Run AG2 in Docker
 
-Find detailed instructions for users [here](https://docs.ag2.ai/docs/installation/Docker#step-1-install-docker), and for developers [here](https://docs.ag2.ai/docs/contributor-guide/docker).
+Find detailed instructions for users [here](https://docs.ag2.ai/docs/installation/Docker#step-1-install-docker), and for developers [here](https://docs.ag2.ai/docs/contributor-guide/setup-development-environment).
 
 ### Option 2. Install AG2 Locally
 
@@ -143,7 +157,7 @@ Find more options in [Installation](https://docs.ag2.ai/docs/Installation#option
 
 Even if you are installing and running AG2 locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://docs.ag2.ai/docs/FAQ#if-you-want-to-run-code-execution-in-docker) in docker. Find more instructions and how to change the default behaviour [here](https://docs.ag2.ai/docs/FAQ#if-you-want-to-run-code-execution-locally).
 
-For LLM inference configurations, check the [FAQs](https://docs.ag2.ai/docs/FAQ#set-your-api-endpoints).
+For LLM inference configurations, check the [FAQs](https://docs.ag2.ai/docs/user-guide/advanced-concepts/llm-configuration-deep-dive#llm-configuration).
 
 <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
 <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -167,7 +181,7 @@ For [example](https://github.com/ag2ai/ag2/blob/main/test/twoagent.py),
 ```python
 from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
 # Load LLM inference endpoints from an env variable or a file
-# See https://docs.ag2.ai/docs/FAQ#set-your-api-endpoints
+# See https://docs.ag2.ai/docs/user-guide/advanced-concepts/llm-configuration-deep-dive#llm-configuration
 # and OAI_CONFIG_LIST_sample
 config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
 # You can also set config_list directly as a list, for example, config_list = [{'model': 'gpt-4o', 'api_key': '<your OpenAI API key here>'},]
@@ -199,7 +213,7 @@ Please find more [code examples](https://docs.ag2.ai/docs/Examples#automated-mul
 
 ## Enhanced LLM Inferences
 
-AG2 also helps maximize the utility out of the expensive LLMs such as gpt-4o. It offers [enhanced LLM inference](https://docs.ag2.ai/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference and templating.
+AG2 also helps maximize the utility out of the expensive LLMs such as gpt-4o. It offers enhanced LLM inference with powerful functionalities like caching, error handling, multi-config inference and templating.
 
 <!-- For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.
 
 
@@ -16,6 +16,5 @@ test/test_import.py
 test/test_import_utils.py
 test/test_logging.py
 test/test_notebook.py
-test/test_pydantic.py
 test/test_retrieve_utils.py
 test/test_token_count.py
@@ -0,0 +1,136 @@
+pyautogen==0.7.4b2
+
+[anthropic]
+pyautogen[anthropic]==0.7.4b2
+
+[autobuild]
+pyautogen[autobuild]==0.7.4b2
+
+[bedrock]
+pyautogen[bedrock]==0.7.4b2
+
+[blendsearch]
+pyautogen[blendsearch]==0.7.4b2
+
+[browser-use]
+pyautogen[browser-use]==0.7.4b2
+
+[captainagent]
+pyautogen[captainagent]==0.7.4b2
+
+[cerebras]
+pyautogen[cerebras]==0.7.4b2
+
+[cohere]
+pyautogen[cohere]==0.7.4b2
+
+[commsagent-discord]
+pyautogen[commsagent-discord]==0.7.4b2
+
+[commsagent-slack]
+pyautogen[commsagent-slack]==0.7.4b2
+
+[commsagent-telegram]
+pyautogen[commsagent-telegram]==0.7.4b2
+
+[cosmosdb]
+pyautogen[cosmosdb]==0.7.4b2
+
+[crawl4ai]
+pyautogen[crawl4ai]==0.7.4b2
+
+[dev]
+pyautogen[dev]==0.7.4b2
+
+[docs]
+pyautogen[docs]==0.7.4b2
+
+[flaml]
+pyautogen[flaml]==0.7.4b2
+
+[gemini]
+pyautogen[gemini]==0.7.4b2
+
+[graph]
+pyautogen[graph]==0.7.4b2
+
+[graph-rag-falkor-db]
+pyautogen[graph-rag-falkor-db]==0.7.4b2
+
+[groq]
+pyautogen[groq]==0.7.4b2
+
+[interop]
+pyautogen[interop]==0.7.4b2
+
+[interop-crewai]
+pyautogen[interop-crewai]==0.7.4b2
+
+[interop-langchain]
+pyautogen[interop-langchain]==0.7.4b2
+
+[interop-pydantic-ai]
+pyautogen[interop-pydantic-ai]==0.7.4b2
+
+[jupyter-executor]
+pyautogen[jupyter-executor]==0.7.4b2
+
+[lint]
+pyautogen[lint]==0.7.4b2
+
+[lmm]
+pyautogen[lmm]==0.7.4b2
+
+[long-context]
+pyautogen[long-context]==0.7.4b2
+
+[mathchat]
+pyautogen[mathchat]==0.7.4b2
+
+[mistral]
+pyautogen[mistral]==0.7.4b2
+
+[neo4j]
+pyautogen[neo4j]==0.7.4b2
+
+[ollama]
+pyautogen[ollama]==0.7.4b2
+
+[rag]
+pyautogen[rag]==0.7.4b2
+
+[redis]
+pyautogen[redis]==0.7.4b2
+
+[retrievechat]
+pyautogen[retrievechat]==0.7.4b2
+
+[retrievechat-mongodb]
+pyautogen[retrievechat-mongodb]==0.7.4b2
+
+[retrievechat-pgvector]
+pyautogen[retrievechat-pgvector]==0.7.4b2
+
+[retrievechat-qdrant]
+pyautogen[retrievechat-qdrant]==0.7.4b2
+
+[teachable]
+pyautogen[teachable]==0.7.4b2
+
+[test]
+pyautogen[test]==0.7.4b2
+
+[together]
+pyautogen[together]==0.7.4b2
+
+[twilio]
+pyautogen[twilio]==0.7.4b2
+
+[types]
+pyautogen[types]==0.7.4b2
+
+[websockets]
+pyautogen[websockets]==0.7.4b2
+
+[websurfer]
+pyautogen[websurfer]==0.7.4b2
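The new file above uses the egg-info `requires.txt` format: an unconditional section followed by one `[extra]` section per optional feature. As a minimal sketch (my own helper, not part of the package), that format can be parsed into a mapping with a few lines of standard-library Python:

```python
def parse_requires(text):
    """Parse egg-info requires.txt into {extra_name: [requirements]}.

    The unconditional requirements go under the "" key; a line like
    "[anthropic]" opens a new extras section.
    """
    extras, current = {"": []}, ""
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]
            extras.setdefault(current, [])
        else:
            # Requirement lines like "pyautogen[anthropic]==0.7.4b2"
            # do not start with "[", so they land in the current section.
            extras[current].append(line)
    return extras

sample = "pyautogen==0.7.4b2\n\n[anthropic]\npyautogen[anthropic]==0.7.4b2\n"
print(parse_requires(sample))
# -> {'': ['pyautogen==0.7.4b2'], 'anthropic': ['pyautogen[anthropic]==0.7.4b2']}
```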
@@ -11,8 +11,7 @@ name = "pyautogen"
 description = "A programming framework for agentic AI"
 readme = "README.md"
 authors = [
-    {name = "Chi Wang", email = "support@ag2.ai"},
-    {name = "Qingyun Wu", email = "support@ag2.ai"},
+    {name = "Chi Wang & Qingyun Wu", email = "support@ag2.ai"}
 ]
 
 keywords = [
@@ -86,8 +85,8 @@ jupyter-executor = [
 ]
 
 retrievechat = [
-    "protobuf==4.25.3",
-    "chromadb==0.5.3",
+    "protobuf==5.29.3",
+    "chromadb==0.6.3",
     "sentence_transformers",
     "pypdf",
     "ipython",
@@ -114,14 +113,16 @@ retrievechat-qdrant = [
 ]
 
 graph-rag-falkor-db = [
-    "graphrag_sdk==0.3.3",
-    "falkordb>=1.0.10"
+    "graphrag_sdk==0.6.0",
+    "falkordb>=1.0.10",
 ]
 
 rag = [
     "docling>=2.15.1,<3",
     "selenium>=4.28.1,<5",
     "webdriver-manager==4.0.2",
+    "chromadb>=0.5,<1",
+    "llama-index>=0.12,<1",
 ]
 
 
@@ -135,10 +136,10 @@ browser-use = [
 
 neo4j = [
     "docx2txt==0.8",
-    "llama-index==0.12.11",
-    "llama-index-graph-stores-neo4j==0.4.5",
-    "llama-index-core==0.12.11",
-    "llama-index-readers-web==0.3.4",
+    "llama-index==0.12.16",
+    "llama-index-graph-stores-neo4j==0.4.6",
+    "llama-index-core==0.12.16",
+    "llama-index-readers-web==0.3.5",
 ]
 
 # used for agentchat_realtime_swarm notebook and realtime agent twilio demo
@@ -155,7 +156,7 @@ interop-crewai = [
     "litellm<1.57.5; sys_platform=='win32'",
 ]
 interop-langchain = ["langchain-community>=0.3.12,<1"]
-interop-pydantic-ai = ["pydantic-ai==0.0.13"]
+interop-pydantic-ai = ["pydantic-ai==0.0.23"]
 interop =[
     "pyautogen[interop-crewai, interop-langchain, interop-pydantic-ai]",
 ]
@@ -164,7 +165,7 @@ interop =[
 autobuild = ["chromadb", "sentence-transformers", "huggingface-hub", "pysqlite3-binary"]
 
 blendsearch = ["flaml[blendsearch]"]
-mathchat = ["sympy", "pydantic==1.10.9", "wolframalpha"]
+mathchat = ["sympy", "wolframalpha"]
 captainagent = ["pyautogen[autobuild]", "pandas"]
 teachable = ["chromadb"]
 lmm = ["replicate", "pillow"]
@@ -175,6 +176,7 @@ gemini = [
     "google-auth",
     "pillow",
     "jsonschema",
+    "jsonref"
 ]
 together = ["together>=1.2"]
 websurfer = ["beautifulsoup4", "markdownify", "pdfminer.six", "pathvalidate"]
@@ -190,18 +192,23 @@ cohere = ["cohere>=5.5.8"]
 ollama = ["ollama>=0.4.5", "fix_busted_json>=0.0.18"]
 bedrock = ["boto3>=1.34.149"]
 
+commsagent-discord = ["discord.py>=2.4.0,<2.5"]
+commsagent-slack = ["slack_sdk>=3.33.0,<3.40"]
+commsagent-telegram = ["telethon>=1.38.1, <2"]
+
 ## dev dependencies
 
 # test dependencies
 test = [
     "ipykernel==6.29.5",
-    "nbconvert==7.16.5",
+    "nbconvert==7.16.6",
     "nbformat==5.10.4",
     "pytest-cov==6.0.0",
-    "pytest-asyncio==0.25.2",
+    "pytest-asyncio==0.25.3",
     "pytest==8.3.4",
+    "mock==5.1.0",
     "pandas==2.2.3",
-    "fastapi==0.115.6",
+    "fastapi==0.115.8",
 ]
 
 # docs dependencies
@@ -214,13 +221,13 @@ docs = [
 ]
 
 types = [
-    "mypy==1.14.1",
+    "mypy==1.15.0",
     "pyautogen[test, jupyter-executor, interop]",
 ]
 
 lint = [
-    "ruff==0.9.2",
-    "codespell==2.3.0",
+    "ruff==0.9.5",
+    "codespell==2.4.1",
     "pyupgrade-directories==0.3.0",
 ]
 
@@ -229,7 +236,7 @@ dev = [
     "pyautogen[lint,test,types,docs]",
     "pre-commit==4.1.0",
     "detect-secrets==1.5.0",
-    "uv==0.5.21",
+    "uv==0.5.29",
 ]
 
 
@@ -260,7 +267,7 @@ exclude = ["test", "notebook"]
 
 
 [tool.pytest.ini_options]
-addopts = '--cov=. --cov-append --cov-branch --cov-report=xml -m "not conda"'
+addopts = '--cov=autogen --cov-append --cov-branch --cov-report=xml -m "not conda"'
 testpaths = [
     "test",
 ]
@@ -315,6 +322,8 @@ exclude = [
     "setup_*.py",
 ]
 
+preview = true
+
 [tool.ruff.lint]
 # Enable Pyflakes `E` and `F` codes by default.
 select = [
@@ -369,7 +378,6 @@ files = [
     "autogen/exception_utils.py",
     "autogen/coding",
     "autogen/oai/openai_utils.py",
-    "autogen/_pydantic.py",
     "autogen/io",
     "autogen/tools",
     "autogen/interop",
@@ -378,7 +386,6 @@ files = [
     "autogen/import_utils.py",
     "autogen/agentchat/contrib/rag",
     "website/*.py",
-    "test/test_pydantic.py",
     "test/io",
     "test/tools",
     "test/interop",
@@ -420,3 +427,8 @@ warn_unused_ignores = false
 disallow_incomplete_defs = true
 disallow_untyped_decorators = true
 disallow_any_unimported = true
+
+[tool.codespell]
+skip = "*.js,*.map,*.pdf,*.po,*.ts,*.json,*.svg,./notebook,./website/node_modules,.notebook/agentchat_microsoft_fabric.ipynb"
+quiet-level = 3
+ignore-words-list = "ans,linar,nam,tread,ot,assertIn,dependin,socio-economic,ege,leapYear,fO,bu,te,ROUGE,ser,doubleClick,CNa,wOh,Hart,Empress,Chage,mane,digitalize"
@@ -65,6 +65,9 @@ setuptools.setup(
     "cohere": ["pyautogen[cohere]==" + __version__],
     "ollama": ["pyautogen[ollama]==" + __version__],
     "bedrock": ["pyautogen[bedrock]==" + __version__],
+    "commsagent-discord": ["pyautogen[commsagent-discord]==" + __version__],
+    "commsagent-slack": ["pyautogen[commsagent-slack]==" + __version__],
+    "commsagent-telegram": ["pyautogen[commsagent-telegram]==" + __version__],
     "test": ["pyautogen[test]==" + __version__],
     "docs": ["pyautogen[docs]==" + __version__],
     "types": ["pyautogen[types]==" + __version__],
@@ -4,7 +4,7 @@
 #
 # Portions derived from https://github.com/microsoft/autogen are under the MIT License.
 # SPDX-License-Identifier: MIT
-#!/usr/bin/env python3 -m pytest
+# !/usr/bin/env python3 -m pytest
 
 import hashlib
 import math
@@ -18,7 +18,7 @@ import requests
 from autogen.browser_utils import SimpleTextBrowser
 from autogen.import_utils import optional_import_block, skip_on_missing_imports
 
-BLOG_POST_URL = "https://docs.ag2.ai/blog/2023-04-21-LLM-tuning-math"
+BLOG_POST_URL = "https://docs.ag2.ai/docs/blog/2023-04-21-LLM-tuning-math"
 BLOG_POST_TITLE = "Does Model and Inference Parameter Matter in LLM Applications? - A Case Study for MATH - AG2"
 BLOG_POST_STRING = "Large language models (LLMs) are powerful tools that can generate natural language texts for various applications, such as chatbots, summarization, translation, and more. GPT-4 is currently the state of the art LLM in the world. Is model selection irrelevant? What about inference parameters?"
 
@@ -4,7 +4,7 @@
  #
  # Portions derived from https://github.com/microsoft/autogen are under the MIT License.
  # SPDX-License-Identifier: MIT
- #!/usr/bin/env python3 -m pytest
+ # !/usr/bin/env python3 -m pytest

  import os
  import tempfile
@@ -0,0 +1,74 @@
+ # Copyright (c) 2023 - 2025, AG2ai, Inc., AG2ai open-source projects maintainers and core contributors
+ #
+ # SPDX-License-Identifier: Apache-2.0
+ #
+ # Portions derived from https://github.com/microsoft/autogen are under the MIT License.
+ # SPDX-License-Identifier: MIT
+
+
+ import os
+ import subprocess
+
+ import pytest
+
+ from .conftest import Credentials, Secrets, credentials_all_llms, suppress_gemini_resource_exhausted
+
+
+ @pytest.mark.parametrize("credentials_from_test_param", credentials_all_llms, indirect=True)
+ @suppress_gemini_resource_exhausted
+ def test_credentials_from_test_param_fixture(
+     credentials_from_test_param: Credentials,
+     request: pytest.FixtureRequest,
+ ) -> None:
+     # Get the parameter name from the request node
+     current_llm = request.node.callspec.id
+
+     assert current_llm is not None
+     assert isinstance(credentials_from_test_param, Credentials)
+
+     first_config = credentials_from_test_param.config_list[0]
+     if "gpt_4" in current_llm:
+         if "api_type" in first_config:
+             assert first_config["api_type"] == "openai"
+     elif "gemini" in current_llm:
+         assert first_config["api_type"] == "google"
+     elif "anthropic" in current_llm:
+         assert first_config["api_type"] == "anthropic"
+     else:
+         assert False, f"Unknown LLM fixture: {current_llm}"
+
+
+ class TestSecrets:
+     def test_sanitize_secrets(self):
+         Secrets.add_secret("mysecret")
+         data = "This contains mysecret and ysecre and somemysecreand should be sanitized."
+         sanitized = Secrets.sanitize_secrets(data)
+         assert sanitized == "This contains ***** and ***** and some*****and should be sanitized."
+
+     @pytest.mark.skipif(
+         not os.getenv("RUN_SANITIZATION_TEST"),
+         reason="Skipping sensitive tests. Set RUN_SANITIZATION_TEST=1 to run.",
+     )
+     def test_raise_exception_with_secret(self):
+         Secrets.add_secret("mysecret")
+         raise Exception("This is a test exception. mysecret exposed!!!")
+
+     def test_sensitive_output_is_sanitized(self):
+         # Run pytest for the sensitive test and capture the output
+         result = subprocess.run(
+             [
+                 "pytest",
+                 "-s",
+                 "test/test_conftest.py::TestSecrets::test_raise_exception_with_secret",
+             ],
+             env={**os.environ, "RUN_SANITIZATION_TEST": "1"},
+             stdout=subprocess.PIPE,
+             stderr=subprocess.PIPE,
+             text=True,
+         )
+
+         # Combine stdout and stderr to search for secrets
+         output = result.stdout + result.stderr
+
+         assert "mysecret" not in output, "Secret exposed in test output!"
+         assert "*****" in output, "Sanitization is not working as expected!"
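The `test_sanitize_secrets` assertion in this new file implies that `Secrets.sanitize_secrets` masks not only the full secret but also long substrings of it, so partially leaked fragments like `ysecre` and `mysecre` are caught too. The actual implementation lives in `test/conftest.py` and is not part of this diff; the following is a hypothetical sketch that reproduces the asserted behavior (the `min_len` threshold of 6 is an assumption inferred from the test data):

```python
def sanitize_secrets(data: str, secrets: set[str], min_len: int = 6) -> str:
    """Hypothetical reconstruction of Secrets.sanitize_secrets.

    Masks every occurrence of any substring (length >= min_len) of a
    registered secret; the real implementation in test/conftest.py is not
    shown in this diff.
    """
    for secret in secrets:
        # Replace longer fragments first so partial overlaps are still masked.
        for length in range(len(secret), min_len - 1, -1):
            for start in range(len(secret) - length + 1):
                data = data.replace(secret[start : start + length], "*****")
    return data


data = "This contains mysecret and ysecre and somemysecreand should be sanitized."
print(sanitize_secrets(data, {"mysecret"}))
# This contains ***** and ***** and some*****and should be sanitized.
```

Masking substrings (rather than only exact matches) is what makes `test_sensitive_output_is_sanitized` meaningful: a traceback may wrap or truncate the secret, and fragments must still be redacted.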
@@ -295,9 +295,9 @@ def test_to_dict():
  assert result["foo_val"] == expected_foo_val_field
  assert result["o"] == expected_o_field
  assert len(result["agents"]) == 2
- for agent in result["agents"]:
-     assert "autogen.ConversableAgent" in agent
- assert "autogen.ConversableAgent" in result["first_agent"]
+ assert result["agents"][0] == "alice"
+ assert result["agents"][1] == "bob"
+ assert "alice" in result["first_agent"]


  @patch("logging.Logger.error")
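The updated assertions suggest that `to_dict` now serializes agent references by their `name` attribute (`"alice"`, `"bob"`) rather than by a class path such as `"autogen.ConversableAgent"`. A hypothetical sketch of that serialization choice (the `Agent` class and `serialize_agents` helper here are illustrations, not AG2 APIs):

```python
class Agent:
    """Minimal stand-in for an agent with a name attribute."""

    def __init__(self, name: str):
        self.name = name


def serialize_agents(agents):
    # Hypothetical: represent each agent by its name, as the updated test
    # expects, instead of by a class path like "autogen.ConversableAgent".
    return [a.name for a in agents]


result = {"agents": serialize_agents([Agent("alice"), Agent("bob")])}
print(result["agents"])  # ['alice', 'bob']
```

Name-based serialization makes the dumped dict stable across refactors of the agent class hierarchy, which is presumably why the test was tightened from a substring check to exact names.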
@@ -4,7 +4,7 @@
  #
  # Portions derived from https://github.com/microsoft/autogen are under the MIT License.
  # SPDX-License-Identifier: MIT
- #!/usr/bin/env python3 -m pytest
+ # !/usr/bin/env python3 -m pytest

  import os
  import sys
@@ -4,7 +4,7 @@
  #
  # Portions derived from https://github.com/microsoft/autogen are under the MIT License.
  # SPDX-License-Identifier: MIT
- #!/usr/bin/env python3 -m pytest
+ # !/usr/bin/env python3 -m pytest

  """Unit test for retrieve_utils.py"""
@@ -4,7 +4,7 @@
  #
  # Portions derived from https://github.com/microsoft/autogen are under the MIT License.
  # SPDX-License-Identifier: MIT
- #!/usr/bin/env python3 -m pytest
+ # !/usr/bin/env python3 -m pytest

  import pytest
@@ -1,127 +0,0 @@
- pyautogen==0.7.3
-
- [anthropic]
- pyautogen[anthropic]==0.7.3
-
- [autobuild]
- pyautogen[autobuild]==0.7.3
-
- [bedrock]
- pyautogen[bedrock]==0.7.3
-
- [blendsearch]
- pyautogen[blendsearch]==0.7.3
-
- [browser-use]
- pyautogen[browser-use]==0.7.3
-
- [captainagent]
- pyautogen[captainagent]==0.7.3
-
- [cerebras]
- pyautogen[cerebras]==0.7.3
-
- [cohere]
- pyautogen[cohere]==0.7.3
-
- [cosmosdb]
- pyautogen[cosmosdb]==0.7.3
-
- [crawl4ai]
- pyautogen[crawl4ai]==0.7.3
-
- [dev]
- pyautogen[dev]==0.7.3
-
- [docs]
- pyautogen[docs]==0.7.3
-
- [flaml]
- pyautogen[flaml]==0.7.3
-
- [gemini]
- pyautogen[gemini]==0.7.3
-
- [graph]
- pyautogen[graph]==0.7.3
-
- [graph-rag-falkor-db]
- pyautogen[graph-rag-falkor-db]==0.7.3
-
- [groq]
- pyautogen[groq]==0.7.3
-
- [interop]
- pyautogen[interop]==0.7.3
-
- [interop-crewai]
- pyautogen[interop-crewai]==0.7.3
-
- [interop-langchain]
- pyautogen[interop-langchain]==0.7.3
-
- [interop-pydantic-ai]
- pyautogen[interop-pydantic-ai]==0.7.3
-
- [jupyter-executor]
- pyautogen[jupyter-executor]==0.7.3
-
- [lint]
- pyautogen[lint]==0.7.3
-
- [lmm]
- pyautogen[lmm]==0.7.3
-
- [long-context]
- pyautogen[long-context]==0.7.3
-
- [mathchat]
- pyautogen[mathchat]==0.7.3
-
- [mistral]
- pyautogen[mistral]==0.7.3
-
- [neo4j]
- pyautogen[neo4j]==0.7.3
-
- [ollama]
- pyautogen[ollama]==0.7.3
-
- [rag]
- pyautogen[rag]==0.7.3
-
- [redis]
- pyautogen[redis]==0.7.3
-
- [retrievechat]
- pyautogen[retrievechat]==0.7.3
-
- [retrievechat-mongodb]
- pyautogen[retrievechat-mongodb]==0.7.3
-
- [retrievechat-pgvector]
- pyautogen[retrievechat-pgvector]==0.7.3
-
- [retrievechat-qdrant]
- pyautogen[retrievechat-qdrant]==0.7.3
-
- [teachable]
- pyautogen[teachable]==0.7.3
-
- [test]
- pyautogen[test]==0.7.3
-
- [together]
- pyautogen[together]==0.7.3
-
- [twilio]
- pyautogen[twilio]==0.7.3
-
- [types]
- pyautogen[types]==0.7.3
-
- [websockets]
- pyautogen[websockets]==0.7.3
-
- [websurfer]
- pyautogen[websurfer]==0.7.3
@@ -1,35 +0,0 @@
- # Copyright (c) 2023 - 2025, AG2ai, Inc., AG2ai open-source projects maintainers and core contributors
- #
- # SPDX-License-Identifier: Apache-2.0
- #
- # Portions derived from https://github.com/microsoft/autogen are under the MIT License.
- # SPDX-License-Identifier: MIT
-
-
- import pytest
-
- from .conftest import Credentials, credentials_all_llms, suppress_gemini_resource_exhausted
-
-
- @pytest.mark.parametrize("credentials_from_test_param", credentials_all_llms, indirect=True)
- @suppress_gemini_resource_exhausted
- def test_credentials_from_test_param_fixture(
-     credentials_from_test_param: Credentials,
-     request: pytest.FixtureRequest,
- ) -> None:
-     # Get the parameter name request node
-     current_llm = request.node.callspec.id
-
-     assert current_llm is not None
-     assert isinstance(credentials_from_test_param, Credentials)
-
-     first_config = credentials_from_test_param.config_list[0]
-     if "gpt_4" in current_llm:
-         if "api_type" in first_config:
-             assert first_config["api_type"] == "openai"
-     elif "gemini" in current_llm:
-         assert first_config["api_type"] == "google"
-     elif "anthropic" in current_llm:
-         assert first_config["api_type"] == "anthropic"
-     else:
-         assert False, f"Unknown LLM fixture: {current_llm}"
@@ -1,46 +0,0 @@
- # Copyright (c) 2023 - 2025, AG2ai, Inc., AG2ai open-source projects maintainers and core contributors
- #
- # SPDX-License-Identifier: Apache-2.0
- #
- # Portions derived from https://github.com/microsoft/autogen are under the MIT License.
- # SPDX-License-Identifier: MIT
- from typing import Annotated, Optional, Union
-
- from pydantic import BaseModel
-
- from autogen._pydantic import model_dump, model_dump_json, type2schema
-
-
- def test_type2schema() -> None:
-     assert type2schema(str) == {"type": "string"}
-     assert type2schema(int) == {"type": "integer"}
-     assert type2schema(float) == {"type": "number"}
-     assert type2schema(bool) == {"type": "boolean"}
-     assert type2schema(None) == {"type": "null"}
-     assert type2schema(Optional[int]) == {"anyOf": [{"type": "integer"}, {"type": "null"}]}
-     assert type2schema(list[int]) == {"items": {"type": "integer"}, "type": "array"}
-     assert type2schema(tuple[int, float, str]) == {
-         "maxItems": 3,
-         "minItems": 3,
-         "prefixItems": [{"type": "integer"}, {"type": "number"}, {"type": "string"}],
-         "type": "array",
-     }
-     assert type2schema(dict[str, int]) == {"additionalProperties": {"type": "integer"}, "type": "object"}
-     assert type2schema(Annotated[str, "some text"]) == {"type": "string"}
-     assert type2schema(Union[int, float]) == {"anyOf": [{"type": "integer"}, {"type": "number"}]}
-
-
- def test_model_dump() -> None:
-     class A(BaseModel):
-         a: str
-         b: int = 2
-
-     assert model_dump(A(a="aaa")) == {"a": "aaa", "b": 2}
-
-
- def test_model_dump_json() -> None:
-     class A(BaseModel):
-         a: str
-         b: int = 2
-
-     assert model_dump_json(A(a="aaa")).replace(" ", "") == '{"a":"aaa","b":2}'