ag2 0.8.5a0__tar.gz → 0.8.6b0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of ag2 might be problematic.
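If a project depends on ag2, a runtime guard can detect whether the flagged release ended up installed. The sketch below is illustrative only (the `is_flagged_release` helper is not part of ag2; the version string comes from the diff on this page), using the standard-library `importlib.metadata`:

```python
from importlib.metadata import version, PackageNotFoundError

def is_flagged_release(pkg: str = "ag2", flagged: str = "0.8.6b0") -> bool:
    # True only if the package is installed AND its version matches the flagged release
    try:
        return version(pkg) == flagged
    except PackageNotFoundError:
        # Package not installed at all -> not affected
        return False
```

A caller could use this in a startup check and warn or abort before importing the package.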

@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: ag2
- Version: 0.8.5a0
+ Version: 0.8.6b0
  Summary: Alias package for pyautogen
  Home-page: https://github.com/ag2ai/ag2
  Author: Chi Wang & Qingyun Wu
@@ -32,7 +32,6 @@ Provides-Extra: wikipedia
  Provides-Extra: neo4j
  Provides-Extra: twilio
  Provides-Extra: mcp
- Provides-Extra: mcp-proxy-gen
  Provides-Extra: interop-crewai
  Provides-Extra: interop-langchain
  Provides-Extra: interop-pydantic-ai
@@ -134,7 +133,7 @@ The project is currently maintained by a [dynamic group of volunteers](MAINTAINE

  ## Getting started

- For a step-by-step walk through of AG2 concepts and code, see [Basic Concepts](https://docs.ag2.ai/docs/user-guide/basic-concepts) in our documentation.
+ For a step-by-step walk through of AG2 concepts and code, see [Basic Concepts](https://docs.ag2.ai/latest/docs/user-guide/basic-concepts/installing-ag2/) in our documentation.

  ### Installation

@@ -197,12 +196,22 @@ We have several agent concepts in AG2 to help you build your AI agents. We intro

  ### Conversable agent

- The conversable agent is the most used agent and is created for generating conversations among agents.
- It serves as a base class for all agents in AG2.
+ The [ConversableAgent](https://docs.ag2.ai/latest/docs/api-reference/autogen/ConversableAgent) is the fundamental building block of AG2, designed to enable seamless communication between AI entities. This core agent type handles message exchange and response generation, serving as the base class for all agents in the framework.
+
+ In the example below, we'll create a simple information validation workflow with two specialized agents that communicate with each other:
+
+ Note: Before running this code, make sure to set your `OPENAI_API_KEY` as an environment variable. This example uses `gpt-4o-mini`, but you can replace it with any other [model](https://docs.ag2.ai/latest/docs/user-guide/models/amazon-bedrock) supported by AG2.

  ```python
- from autogen import ConversableAgent
+ # 1. Import ConversableAgent class
+ from autogen import ConversableAgent, LLMConfig
+
+ # 2. Define our LLM configuration for OpenAI's GPT-4o mini
+ # uses the OPENAI_API_KEY environment variable
+ llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")

+
+ # 3. Create our LLM agent
  with llm_config:
      # Create an AI agent
      assistant = ConversableAgent(
@@ -216,7 +225,7 @@ with llm_config:
          system_message="You are a fact-checking assistant.",
      )

- # Start the conversation
+ # 4. Start the conversation
  assistant.initiate_chat(
      recipient=fact_checker,
      message="What is AG2?",
@@ -226,25 +235,34 @@ assistant.initiate_chat(

  ### Human in the loop

- Sometimes your wished workflow requires human input. Therefore you can enable the human in the loop feature.
+ Human oversight is crucial for many AI workflows, especially when dealing with critical decisions, creative tasks, or situations requiring expert judgment. AG2 makes integrating human feedback seamless through its human-in-the-loop functionality.
+ You can configure how and when human input is solicited using the `human_input_mode` parameter:

- If you set `human_input_mode` to `ALWAYS` on ConversableAgent you can give human input to the conversation.
+ - `ALWAYS`: Requires human input for every response
+ - `NEVER`: Operates autonomously without human involvement
+ - `TERMINATE`: Only requests human input to end conversations

- There are three modes for `human_input_mode`: `ALWAYS`, `NEVER`, `TERMINATE`.
+ For convenience, AG2 provides the specialized `UserProxyAgent` class that automatically sets `human_input_mode` to `ALWAYS` and supports code execution:

- We created a class which sets the `human_input_mode` to `ALWAYS` for you. Its called `UserProxyAgent`.
+ Note: Before running this code, make sure to set your `OPENAI_API_KEY` as an environment variable. This example uses `gpt-4o-mini`, but you can replace it with any other [model](https://docs.ag2.ai/latest/docs/user-guide/models/amazon-bedrock) supported by AG2.

  ```python
- from autogen import ConversableAgent
+ # 1. Import ConversableAgent and UserProxyAgent classes
+ from autogen import ConversableAgent, UserProxyAgent, LLMConfig
+
+ # 2. Define our LLM configuration for OpenAI's GPT-4o mini
+ # uses the OPENAI_API_KEY environment variable
+ llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")

- # Create an AI agent
+
+ # 3. Create our LLM agent
  with llm_config:
      assistant = ConversableAgent(
          name="assistant",
          system_message="You are a helpful assistant.",
      )

- # Create a human agent with manual input mode
+ # 4. Create a human agent with manual input mode
  human = ConversableAgent(
      name="human",
      human_input_mode="ALWAYS"
@@ -252,7 +270,7 @@ human = ConversableAgent(
  # or
  human = UserProxyAgent(name="human", code_execution_config={"work_dir": "coding", "use_docker": False})

- # Start the chat
+ # 5. Start the chat
  human.initiate_chat(
      recipient=assistant,
      message="Hello! What's 2 + 2?"
@@ -262,45 +280,106 @@ human.initiate_chat(

  ### Orchestrating multiple agents

- Users can define their own orchestration patterns using the flexible programming interface from AG2.
+ AG2 enables sophisticated multi-agent collaboration through flexible orchestration patterns, allowing you to create dynamic systems where specialized agents work together to solve complex problems.

- Additionally AG2 provides multiple built-in patterns to orchestrate multiple agents, such as `GroupChat` and `Swarm`.
+ The framework offers both custom orchestration and several built-in collaboration patterns including `GroupChat` and `Swarm`.

- Both concepts are used to orchestrate multiple agents to solve a task.
+ Here's how to implement a collaborative team for curriculum development using GroupChat:

- The group chat works like a chat where each registered agent can participate in the conversation.
+ Note: Before running this code, make sure to set your `OPENAI_API_KEY` as an environment variable. This example uses `gpt-4o-mini`, but you can replace it with any other [model](https://docs.ag2.ai/latest/docs/user-guide/models/amazon-bedrock) supported by AG2.

  ```python
- from autogen import ConversableAgent, GroupChat, GroupChatManager
+ from autogen import ConversableAgent, GroupChat, GroupChatManager, LLMConfig
+
+ # Put your key in the OPENAI_API_KEY environment variable
+ llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")
+
+ planner_message = """You are a classroom lesson agent.
+ Given a topic, write a lesson plan for a fourth grade class.
+ Use the following format:
+ <title>Lesson plan title</title>
+ <learning_objectives>Key learning objectives</learning_objectives>
+ <script>How to introduce the topic to the kids</script>
+ """
+
+ reviewer_message = """You are a classroom lesson reviewer.
+ You compare the lesson plan to the fourth grade curriculum and provide a maximum of 3 recommended changes.
+ Provide only one round of reviews to a lesson plan.
+ """
+
+ # 1. Add a separate 'description' for our planner and reviewer agents
+ planner_description = "Creates or revises lesson plans."

- # Create AI agents
- teacher = ConversableAgent(name="teacher", system_message="You suggest lesson topics.")
- planner = ConversableAgent(name="planner", system_message="You create lesson plans.")
- reviewer = ConversableAgent(name="reviewer", system_message="You review lesson plans.")
+ reviewer_description = """Provides one round of reviews to a lesson plan
+ for the lesson_planner to revise."""

- # Create GroupChat
- groupchat = GroupChat(agents=[teacher, planner, reviewer], speaker_selection_method="auto")
+ with llm_config:
+     lesson_planner = ConversableAgent(
+         name="planner_agent",
+         system_message=planner_message,
+         description=planner_description,
+     )
+
+     lesson_reviewer = ConversableAgent(
+         name="reviewer_agent",
+         system_message=reviewer_message,
+         description=reviewer_description,
+     )
+
+ # 2. The teacher's system message can also be used as a description, so we don't define it
+ teacher_message = """You are a classroom teacher.
+ You decide topics for lessons and work with a lesson planner.
+ and reviewer to create and finalise lesson plans.
+ When you are happy with a lesson plan, output "DONE!".
+ """

- # Create the GroupChatManager, it will manage the conversation and uses an LLM to select the next agent
- manager = GroupChatManager(name="manager", groupchat=groupchat)
+ with llm_config:
+     teacher = ConversableAgent(
+         name="teacher_agent",
+         system_message=teacher_message,
+         # 3. Our teacher can end the conversation by saying DONE!
+         is_termination_msg=lambda x: "DONE!" in (x.get("content", "") or "").upper(),
+     )
+
+ # 4. Create the GroupChat with agents and selection method
+ groupchat = GroupChat(
+     agents=[teacher, lesson_planner, lesson_reviewer],
+     speaker_selection_method="auto",
+     messages=[],
+ )
+
+ # 5. Our GroupChatManager will manage the conversation and uses an LLM to select the next agent
+ manager = GroupChatManager(
+     name="group_manager",
+     groupchat=groupchat,
+     llm_config=llm_config,
+ )

- # Start the conversation
- teacher.initiate_chat(manager, "Create a lesson on photosynthesis.")
+ # 6. Initiate the chat with the GroupChatManager as the recipient
+ teacher.initiate_chat(
+     recipient=manager,
+     message="Today, let's introduce our kids to the solar system."
+ )
  ```

- The swarm requires a more rigid structure and the flow needs to be defined with hand-off, post-tool, and post-work transitions from an agent to another agent.
+ When executed, this code creates a collaborative system where the teacher initiates the conversation, and the lesson planner and reviewer agents work together to create and refine a lesson plan. The GroupChatManager orchestrates the conversation, selecting the next agent to respond based on the context of the discussion.

- Read more about it in the [documentation](https://docs.ag2.ai/docs/user-guide/advanced-concepts/conversation-patterns-deep-dive)
+ For workflows requiring more structured processes, explore the Swarm pattern in the detailed [documentation](https://docs.ag2.ai/latest/docs/user-guide/advanced-concepts/conversation-patterns-deep-dive).

  ### Tools

  Agents gain significant utility through tools as they provide access to external data, APIs, and functionality.

+ Note: Before running this code, make sure to set your `OPENAI_API_KEY` as an environment variable. This example uses `gpt-4o-mini`, but you can replace it with any other [model](https://docs.ag2.ai/latest/docs/user-guide/models/amazon-bedrock) supported by AG2.
+
  ```python
  from datetime import datetime
  from typing import Annotated

- from autogen import ConversableAgent, register_function
+ from autogen import ConversableAgent, register_function, LLMConfig
+
+ # Put your key in the OPENAI_API_KEY environment variable
+ llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")

  # 1. Our tool, returns the day of the week for a given date
  def get_weekday(date_string: Annotated[str, "Format: YYYY-MM-DD"]) -> str:
@@ -309,10 +388,10 @@ def get_weekday(date_string: Annotated[str, "Format: YYYY-MM-DD"]) -> str:

  # 2. Agent for determining whether to run the tool
  with llm_config:
- date_agent = ConversableAgent(
- name="date_agent",
- system_message="You get the day of the week for a given date.",
- )
+     date_agent = ConversableAgent(
+         name="date_agent",
+         system_message="You get the day of the week for a given date.",
+     )

  # 3. And an agent for executing the tool
  executor_agent = ConversableAgent(
@@ -332,8 +411,10 @@ register_function(
  chat_result = executor_agent.initiate_chat(
      recipient=date_agent,
      message="I was born on the 25th of March 1995, what day was it?",
-     max_turns=1,
+     max_turns=2,
  )
+
+ print(chat_result.chat_history[-1]["content"])
  ```

  ### Advanced agentic design patterns
302
+ <script>How to introduce the topic to the kids</script>
303
+ """
304
+
305
+ reviewer_message = """You are a classroom lesson reviewer.
306
+ You compare the lesson plan to the fourth grade curriculum and provide a maximum of 3 recommended changes.
307
+ Provide only one round of reviews to a lesson plan.
308
+ """
309
+
310
+ # 1. Add a separate 'description' for our planner and reviewer agents
311
+ planner_description = "Creates or revises lesson plans."
275
312
 
276
- # Create AI agents
277
- teacher = ConversableAgent(name="teacher", system_message="You suggest lesson topics.")
278
- planner = ConversableAgent(name="planner", system_message="You create lesson plans.")
279
- reviewer = ConversableAgent(name="reviewer", system_message="You review lesson plans.")
313
+ reviewer_description = """Provides one round of reviews to a lesson plan
314
+ for the lesson_planner to revise."""
280
315
 
281
- # Create GroupChat
282
- groupchat = GroupChat(agents=[teacher, planner, reviewer], speaker_selection_method="auto")
316
+ with llm_config:
317
+ lesson_planner = ConversableAgent(
318
+ name="planner_agent",
319
+ system_message=planner_message,
320
+ description=planner_description,
321
+ )
322
+
323
+ lesson_reviewer = ConversableAgent(
324
+ name="reviewer_agent",
325
+ system_message=reviewer_message,
326
+ description=reviewer_description,
327
+ )
328
+
329
+ # 2. The teacher's system message can also be used as a description, so we don't define it
330
+ teacher_message = """You are a classroom teacher.
331
+ You decide topics for lessons and work with a lesson planner
332
+ and reviewer to create and finalise lesson plans.
333
+ When you are happy with a lesson plan, output "DONE!".
334
+ """
283
335
 
284
- # Create the GroupChatManager, it will manage the conversation and uses an LLM to select the next agent
285
- manager = GroupChatManager(name="manager", groupchat=groupchat)
336
+ with llm_config:
337
+ teacher = ConversableAgent(
338
+ name="teacher_agent",
339
+ system_message=teacher_message,
340
+ # 3. Our teacher can end the conversation by saying DONE!
341
+ is_termination_msg=lambda x: "DONE!" in (x.get("content", "") or "").upper(),
342
+ )
343
+
344
+ # 4. Create the GroupChat with agents and selection method
345
+ groupchat = GroupChat(
346
+ agents=[teacher, lesson_planner, lesson_reviewer],
347
+ speaker_selection_method="auto",
348
+ messages=[],
349
+ )
350
+
351
+ # 5. Our GroupChatManager will manage the conversation and uses an LLM to select the next agent
352
+ manager = GroupChatManager(
353
+ name="group_manager",
354
+ groupchat=groupchat,
355
+ llm_config=llm_config,
356
+ )
286
357
 
287
- # Start the conversation
288
- teacher.initiate_chat(manager, "Create a lesson on photosynthesis.")
358
+ # 6. Initiate the chat with the GroupChatManager as the recipient
359
+ teacher.initiate_chat(
360
+ recipient=manager,
361
+ message="Today, let's introduce our kids to the solar system."
362
+ )
289
363
  ```
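The `is_termination_msg` predicate used for the teacher agent is ordinary Python and can be checked in isolation; it is case-insensitive and tolerates messages with no content:

```python
# The same predicate passed to the teacher agent above
is_termination_msg = lambda x: "DONE!" in (x.get("content", "") or "").upper()

print(is_termination_msg({"content": "Looks good to me. Done!"}))  # True
print(is_termination_msg({"content": None}))                       # False
print(is_termination_msg({}))                                      # False
```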
290
364
 
291
- The swarm requires a more rigid structure and the flow needs to be defined with hand-off, post-tool, and post-work transitions from an agent to another agent.
365
+ When executed, this code creates a collaborative system where the teacher initiates the conversation, and the lesson planner and reviewer agents work together to create and refine a lesson plan. The GroupChatManager orchestrates the conversation, selecting the next agent to respond based on the context of the discussion.
292
366
 
293
- Read more about it in the [documentation](https://docs.ag2.ai/docs/user-guide/advanced-concepts/conversation-patterns-deep-dive)
367
+ For workflows requiring more structured processes, explore the Swarm pattern in the detailed [documentation](https://docs.ag2.ai/latest/docs/user-guide/advanced-concepts/conversation-patterns-deep-dive).
294
368
 
295
369
  ### Tools
296
370
 
297
371
  Agents gain significant utility through tools as they provide access to external data, APIs, and functionality.
298
372
 
373
+ Note: Before running this code, make sure to set your `OPENAI_API_KEY` as an environment variable. This example uses `gpt-4o-mini`, but you can replace it with any other [model](https://docs.ag2.ai/latest/docs/user-guide/models/amazon-bedrock) supported by AG2.
374
+
299
375
  ```python
300
376
  from datetime import datetime
301
377
  from typing import Annotated
302
378
 
303
- from autogen import ConversableAgent, register_function
379
+ from autogen import ConversableAgent, register_function, LLMConfig
380
+
381
+ # Put your key in the OPENAI_API_KEY environment variable
382
+ llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")
304
383
 
305
384
  # 1. Our tool, returns the day of the week for a given date
306
385
  def get_weekday(date_string: Annotated[str, "Format: YYYY-MM-DD"]) -> str:
@@ -309,10 +388,10 @@ def get_weekday(date_string: Annotated[str, "Format: YYYY-MM-DD"]) -> str:
309
388
 
310
389
  # 2. Agent for determining whether to run the tool
311
390
  with llm_config:
312
- date_agent = ConversableAgent(
313
- name="date_agent",
314
- system_message="You get the day of the week for a given date.",
315
- )
391
+ date_agent = ConversableAgent(
392
+ name="date_agent",
393
+ system_message="You get the day of the week for a given date.",
394
+ )
316
395
 
317
396
  # 3. And an agent for executing the tool
318
397
  executor_agent = ConversableAgent(
@@ -332,8 +411,10 @@ register_function(
332
411
  chat_result = executor_agent.initiate_chat(
333
412
  recipient=date_agent,
334
413
  message="I was born on the 25th of March 1995, what day was it?",
335
- max_turns=1,
414
+ max_turns=2,
336
415
  )
416
+
417
+ print(chat_result.chat_history[-1]["content"])
337
418
  ```
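Because the tool is plain Python, you can sanity-check it directly, without any agent or LLM involved. This sketch assumes the function body parses the ISO date with `datetime.strptime` and formats the weekday with `strftime`, as in the full example:

```python
from datetime import datetime

def get_weekday(date_string: str) -> str:
    # Same logic as the registered tool: parse YYYY-MM-DD, return weekday name
    return datetime.strptime(date_string, "%Y-%m-%d").strftime("%A")

print(get_weekday("1995-03-25"))  # Saturday
```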
338
419
 
339
420
  ### Advanced agentic design patterns
@@ -0,0 +1,166 @@
1
+ pyautogen==0.8.6beta0
2
+
3
+ [anthropic]
4
+ pyautogen[anthropic]==0.8.6beta0
5
+
6
+ [autobuild]
7
+ pyautogen[autobuild]==0.8.6beta0
8
+
9
+ [bedrock]
10
+ pyautogen[bedrock]==0.8.6beta0
11
+
12
+ [blendsearch]
13
+ pyautogen[blendsearch]==0.8.6beta0
14
+
15
+ [browser-use]
16
+ pyautogen[browser-use]==0.8.6beta0
17
+
18
+ [captainagent]
19
+ pyautogen[captainagent]==0.8.6beta0
20
+
21
+ [cerebras]
22
+ pyautogen[cerebras]==0.8.6beta0
23
+
24
+ [cohere]
25
+ pyautogen[cohere]==0.8.6beta0
26
+
27
+ [commsagent-discord]
28
+ pyautogen[commsagent-discord]==0.8.6beta0
29
+
30
+ [commsagent-slack]
31
+ pyautogen[commsagent-slack]==0.8.6beta0
32
+
33
+ [commsagent-telegram]
34
+ pyautogen[commsagent-telegram]==0.8.6beta0
35
+
36
+ [cosmosdb]
37
+ pyautogen[cosmosdb]==0.8.6beta0
38
+
39
+ [crawl4ai]
40
+ pyautogen[crawl4ai]==0.8.6beta0
41
+
42
+ [deepseek]
43
+ pyautogen[deepseek]==0.8.6beta0
44
+
45
+ [dev]
46
+ pyautogen[dev]==0.8.6beta0
47
+
48
+ [docs]
49
+ pyautogen[docs]==0.8.6beta0
50
+
51
+ [flaml]
52
+ pyautogen[flaml]==0.8.6beta0
53
+
54
+ [gemini]
55
+ pyautogen[gemini]==0.8.6beta0
56
+
57
+ [gemini-realtime]
58
+ pyautogen[gemini-realtime]==0.8.6beta0
59
+
60
+ [google-api]
61
+ pyautogen[google-api]==0.8.6beta0
62
+
63
+ [google-client]
64
+ pyautogen[google-client]==0.8.6beta0
65
+
66
+ [google-search]
67
+ pyautogen[google-search]==0.8.6beta0
68
+
69
+ [graph]
70
+ pyautogen[graph]==0.8.6beta0
71
+
72
+ [graph-rag-falkor-db]
73
+ pyautogen[graph-rag-falkor-db]==0.8.6beta0
74
+
75
+ [groq]
76
+ pyautogen[groq]==0.8.6beta0
77
+
78
+ [interop]
79
+ pyautogen[interop]==0.8.6beta0
80
+
81
+ [interop-crewai]
82
+ pyautogen[interop-crewai]==0.8.6beta0
83
+
84
+ [interop-langchain]
85
+ pyautogen[interop-langchain]==0.8.6beta0
86
+
87
+ [interop-pydantic-ai]
88
+ pyautogen[interop-pydantic-ai]==0.8.6beta0
89
+
90
+ [jupyter-executor]
91
+ pyautogen[jupyter-executor]==0.8.6beta0
92
+
93
+ [lint]
94
+ pyautogen[lint]==0.8.6beta0
95
+
96
+ [lmm]
97
+ pyautogen[lmm]==0.8.6beta0
98
+
99
+ [long-context]
100
+ pyautogen[long-context]==0.8.6beta0
101
+
102
+ [mathchat]
103
+ pyautogen[mathchat]==0.8.6beta0
104
+
105
+ [mcp]
106
+ pyautogen[mcp]==0.8.6beta0
107
+
108
+ [mistral]
109
+ pyautogen[mistral]==0.8.6beta0
110
+
111
+ [neo4j]
112
+ pyautogen[neo4j]==0.8.6beta0
113
+
114
+ [ollama]
115
+ pyautogen[ollama]==0.8.6beta0
116
+
117
+ [openai]
118
+ pyautogen[openai]==0.8.6beta0
119
+
120
+ [openai-realtime]
121
+ pyautogen[openai-realtime]==0.8.6beta0
122
+
123
+ [rag]
124
+ pyautogen[rag]==0.8.6beta0
125
+
126
+ [redis]
127
+ pyautogen[redis]==0.8.6beta0
128
+
129
+ [retrievechat]
130
+ pyautogen[retrievechat]==0.8.6beta0
131
+
132
+ [retrievechat-couchbase]
133
+ pyautogen[retrievechat-couchbase]==0.8.6beta0
134
+
135
+ [retrievechat-mongodb]
136
+ pyautogen[retrievechat-mongodb]==0.8.6beta0
137
+
138
+ [retrievechat-pgvector]
139
+ pyautogen[retrievechat-pgvector]==0.8.6beta0
140
+
141
+ [retrievechat-qdrant]
142
+ pyautogen[retrievechat-qdrant]==0.8.6beta0
143
+
144
+ [teachable]
145
+ pyautogen[teachable]==0.8.6beta0
146
+
147
+ [test]
148
+ pyautogen[test]==0.8.6beta0
149
+
150
+ [together]
151
+ pyautogen[together]==0.8.6beta0
152
+
153
+ [twilio]
154
+ pyautogen[twilio]==0.8.6beta0
155
+
156
+ [types]
157
+ pyautogen[types]==0.8.6beta0
158
+
159
+ [websockets]
160
+ pyautogen[websockets]==0.8.6beta0
161
+
162
+ [websurfer]
163
+ pyautogen[websurfer]==0.8.6beta0
164
+
165
+ [wikipedia]
166
+ pyautogen[wikipedia]==0.8.6beta0
@@ -96,7 +96,10 @@ jupyter-executor = [
96
96
  retrievechat = [
97
97
  "protobuf==5.29.3",
98
98
  "chromadb==0.6.3",
99
- "sentence_transformers",
99
+ # ToDo: wait for sentence_transformers to integrate new version of transformers
100
+ "sentence_transformers<=4.0.2",
101
+ # transformers version 4.51.0 is not integrated with sentence_transformers and throws error
102
+ "transformers<4.51.0",
100
103
  "pypdf",
101
104
  "ipython",
102
105
  "beautifulsoup4",
@@ -179,7 +182,7 @@ neo4j = [
179
182
  "docx2txt==0.9",
180
183
  "llama-index>=0.12,<1",
181
184
  "llama-index-graph-stores-neo4j==0.4.6",
182
- "llama-index-readers-web==0.3.8",
185
+ "llama-index-readers-web==0.3.9",
183
186
  ]
184
187
 
185
188
  # used for agentchat_realtime_swarm notebook and realtime agent twilio demo
@@ -195,13 +198,6 @@ mcp = [
195
198
  "mcp>=1.4.0,<1.6; python_version>='3.10'"
196
199
  ]
197
200
 
198
- mcp-proxy-gen = [
199
- "fastapi-code-generator==0.5.2",
200
- "fastapi>=0.112,<1",
201
- "requests", # do not pin it
202
- "typer",
203
- ]
204
-
205
201
  interop-crewai = [
206
202
  "crewai[tools]>=0.76,<1; python_version>='3.10' and python_version<'3.13'",
207
203
  "weaviate-client>=4,<5; python_version>='3.10' and python_version<'3.13'",
@@ -212,7 +208,14 @@ interop =[
212
208
  "pyautogen[interop-crewai, interop-langchain, interop-pydantic-ai]"
213
209
  ]
214
210
 
215
- autobuild = ["chromadb", "sentence-transformers", "huggingface-hub"]
211
+ autobuild = [
212
+ "chromadb",
213
+ # ToDo: wait for sentence_transformers to integrate new version of transformers
214
+ "sentence_transformers<=4.0.2",
215
+ # transformers version 4.51.0 is not integrated with sentence_transformers and throws error
216
+ "transformers<4.51.0",
217
+ "huggingface-hub"
218
+ ]
216
219
 
217
220
  blendsearch = ["flaml[blendsearch]"]
218
221
  mathchat = ["sympy", "wolframalpha"]
@@ -260,7 +263,7 @@ test = [
260
263
  "ipykernel==6.29.5",
261
264
  "nbconvert==7.16.6",
262
265
  "nbformat==5.10.4",
263
- "pytest-cov==6.0.0",
266
+ "pytest-cov==6.1.1",
264
267
  "pytest-asyncio==0.26.0",
265
268
  "pytest==8.3.5",
266
269
  "mock==5.2.0",
@@ -270,23 +273,24 @@ test = [
270
273
  ]
271
274
 
272
275
  docs = [
273
- "mkdocs-material==9.6.10",
274
- "mkdocstrings[python]==0.29.0",
276
+ "mkdocs-material==9.6.11",
277
+ "mkdocstrings[python]==0.29.1",
275
278
  "mkdocs-literate-nav==0.6.2",
276
279
  "mdx-include==1.4.2",
277
- # currently problematic and cannot be upgraded
280
+ # ToDo: currently problematic and cannot be upgraded
278
281
  "mkdocs-git-revision-date-localized-plugin==1.3.0",
279
282
  "mike==2.1.3",
280
283
  "typer==0.15.2",
281
284
  "mkdocs-minify-plugin==0.8.0",
282
285
  "mkdocs-macros-plugin==1.3.7", # includes with variables
283
286
  "mkdocs-glightbox==0.4.0", # img zoom
287
+ "mkdocs-redirects==1.2.2", # required for handling redirects natively
284
288
  "pillow", # required for mkdocs-glightbo
285
289
  "cairosvg", # required for mkdocs-glightbo
286
290
  "pdoc3==0.11.6",
287
291
  "jinja2==3.1.6",
288
292
  "pyyaml==6.0.2",
289
- "termcolor==2.5.0",
293
+ "termcolor==3.0.1",
290
294
  "nbclient==0.10.2",
291
295
  ]
292
296
 
@@ -296,7 +300,7 @@ types = [
296
300
  ]
297
301
 
298
302
  lint = [
299
- "ruff==0.11.2",
303
+ "ruff==0.11.4",
300
304
  "codespell==2.4.1",
301
305
  "pyupgrade-directories==0.3.0",
302
306
  ]
@@ -306,7 +310,7 @@ dev = [
306
310
  "pyautogen[lint,test,types,docs]",
307
311
  "pre-commit==4.2.0",
308
312
  "detect-secrets==1.5.0",
309
- "uv==0.6.11",
313
+ "uv==0.6.12",
310
314
  ]
311
315
 
312
316
 
@@ -317,9 +321,6 @@ Tracker = "https://github.com/ag2ai/ag2/issues"
317
321
  Source = "https://github.com/ag2ai/ag2"
318
322
  Discord = "https://discord.gg/pAbnFJrkgZ"
319
323
 
320
- [project.scripts]
321
- mcp_proxy = "autogen.mcp.__main__:app"
322
-
323
324
  [tool.hatch.version]
324
325
  path = "autogen/version.py"
325
326
 
@@ -329,13 +330,7 @@ exclude = ["/test", "/notebook"]
329
330
 
330
331
  [tool.hatch.build.targets.wheel]
331
332
  packages = ["autogen"]
332
- only-include = [
333
- "autogen",
334
- # not sure about this, probably is not needed
335
- "autogen/agentchat/contrib/captainagent/tools",
336
- # need for generation of MCP Servers from OpenAPI specification
337
- "templates",
338
- ]
333
+ only-include = ["autogen", "autogen/agentchat/contrib/captainagent/tools"]
339
334
 
340
335
  [tool.hatch.build.targets.sdist]
341
336
  exclude = ["test", "notebook"]
@@ -48,7 +48,6 @@ setuptools.setup(
48
48
  "neo4j": ["pyautogen[neo4j]==" + __version__],
49
49
  "twilio": ["pyautogen[twilio]==" + __version__],
50
50
  "mcp": ["pyautogen[mcp]==" + __version__],
51
- "mcp-proxy-gen": ["pyautogen[mcp-proxy-gen]==" + __version__],
52
51
  "interop-crewai": ["pyautogen[interop-crewai]==" + __version__],
53
52
  "interop-langchain": ["pyautogen[interop-langchain]==" + __version__],
54
53
  "interop-pydantic-ai": ["pyautogen[interop-pydantic-ai]==" + __version__],
@@ -1,169 +0,0 @@
1
- pyautogen==0.8.5alpha
2
-
3
- [anthropic]
4
- pyautogen[anthropic]==0.8.5alpha
5
-
6
- [autobuild]
7
- pyautogen[autobuild]==0.8.5alpha
8
-
9
- [bedrock]
10
- pyautogen[bedrock]==0.8.5alpha
11
-
12
- [blendsearch]
13
- pyautogen[blendsearch]==0.8.5alpha
14
-
15
- [browser-use]
16
- pyautogen[browser-use]==0.8.5alpha
17
-
18
- [captainagent]
19
- pyautogen[captainagent]==0.8.5alpha
20
-
21
- [cerebras]
22
- pyautogen[cerebras]==0.8.5alpha
23
-
24
- [cohere]
25
- pyautogen[cohere]==0.8.5alpha
26
-
27
- [commsagent-discord]
28
- pyautogen[commsagent-discord]==0.8.5alpha
29
-
30
- [commsagent-slack]
31
- pyautogen[commsagent-slack]==0.8.5alpha
32
-
33
- [commsagent-telegram]
34
- pyautogen[commsagent-telegram]==0.8.5alpha
35
-
36
- [cosmosdb]
37
- pyautogen[cosmosdb]==0.8.5alpha
38
-
39
- [crawl4ai]
40
- pyautogen[crawl4ai]==0.8.5alpha
41
-
42
- [deepseek]
43
- pyautogen[deepseek]==0.8.5alpha
44
-
45
- [dev]
46
- pyautogen[dev]==0.8.5alpha
47
-
48
- [docs]
49
- pyautogen[docs]==0.8.5alpha
50
-
51
- [flaml]
52
- pyautogen[flaml]==0.8.5alpha
53
-
54
- [gemini]
55
- pyautogen[gemini]==0.8.5alpha
56
-
57
- [gemini-realtime]
58
- pyautogen[gemini-realtime]==0.8.5alpha
59
-
60
- [google-api]
61
- pyautogen[google-api]==0.8.5alpha
62
-
63
- [google-client]
64
- pyautogen[google-client]==0.8.5alpha
65
-
66
- [google-search]
67
- pyautogen[google-search]==0.8.5alpha
68
-
69
- [graph]
70
- pyautogen[graph]==0.8.5alpha
71
-
72
- [graph-rag-falkor-db]
73
- pyautogen[graph-rag-falkor-db]==0.8.5alpha
74
-
75
- [groq]
76
- pyautogen[groq]==0.8.5alpha
77
-
78
- [interop]
79
- pyautogen[interop]==0.8.5alpha
80
-
81
- [interop-crewai]
82
- pyautogen[interop-crewai]==0.8.5alpha
83
-
84
- [interop-langchain]
85
- pyautogen[interop-langchain]==0.8.5alpha
86
-
87
- [interop-pydantic-ai]
88
- pyautogen[interop-pydantic-ai]==0.8.5alpha
89
-
90
- [jupyter-executor]
91
- pyautogen[jupyter-executor]==0.8.5alpha
92
-
93
- [lint]
94
- pyautogen[lint]==0.8.5alpha
95
-
96
- [lmm]
97
- pyautogen[lmm]==0.8.5alpha
98
-
99
- [long-context]
100
- pyautogen[long-context]==0.8.5alpha
101
-
102
- [mathchat]
103
- pyautogen[mathchat]==0.8.5alpha
104
-
105
- [mcp]
106
- pyautogen[mcp]==0.8.5alpha
107
-
108
- [mcp-proxy-gen]
109
- pyautogen[mcp-proxy-gen]==0.8.5alpha
110
-
111
- [mistral]
112
- pyautogen[mistral]==0.8.5alpha
113
-
114
- [neo4j]
115
- pyautogen[neo4j]==0.8.5alpha
116
-
117
- [ollama]
118
- pyautogen[ollama]==0.8.5alpha
119
-
120
- [openai]
121
- pyautogen[openai]==0.8.5alpha
122
-
123
- [openai-realtime]
124
- pyautogen[openai-realtime]==0.8.5alpha
125
-
126
- [rag]
127
- pyautogen[rag]==0.8.5alpha
128
-
129
- [redis]
130
- pyautogen[redis]==0.8.5alpha
131
-
132
- [retrievechat]
133
- pyautogen[retrievechat]==0.8.5alpha
134
-
135
- [retrievechat-couchbase]
136
- pyautogen[retrievechat-couchbase]==0.8.5alpha
137
-
138
- [retrievechat-mongodb]
139
- pyautogen[retrievechat-mongodb]==0.8.5alpha
140
-
141
- [retrievechat-pgvector]
142
- pyautogen[retrievechat-pgvector]==0.8.5alpha
143
-
144
- [retrievechat-qdrant]
145
- pyautogen[retrievechat-qdrant]==0.8.5alpha
146
-
147
- [teachable]
148
- pyautogen[teachable]==0.8.5alpha
149
-
150
- [test]
151
- pyautogen[test]==0.8.5alpha
152
-
153
- [together]
154
- pyautogen[together]==0.8.5alpha
155
-
156
- [twilio]
157
- pyautogen[twilio]==0.8.5alpha
158
-
159
- [types]
160
- pyautogen[types]==0.8.5alpha
161
-
162
- [websockets]
163
- pyautogen[websockets]==0.8.5alpha
164
-
165
- [websurfer]
166
- pyautogen[websurfer]==0.8.5alpha
167
-
168
- [wikipedia]
169
- pyautogen[wikipedia]==0.8.5alpha
File without changes