ag2 0.8.6a2__py3-none-any.whl → 0.8.7__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release.
This version of ag2 might be problematic.
- {ag2-0.8.6a2.dist-info → ag2-0.8.7.dist-info}/METADATA +177 -97
- ag2-0.8.7.dist-info/RECORD +6 -0
- ag2-0.8.6a2.dist-info/RECORD +0 -6
- {ag2-0.8.6a2.dist-info → ag2-0.8.7.dist-info}/LICENSE +0 -0
- {ag2-0.8.6a2.dist-info → ag2-0.8.7.dist-info}/NOTICE.md +0 -0
- {ag2-0.8.6a2.dist-info → ag2-0.8.7.dist-info}/WHEEL +0 -0
- {ag2-0.8.6a2.dist-info → ag2-0.8.7.dist-info}/top_level.txt +0 -0
{ag2-0.8.6a2.dist-info → ag2-0.8.7.dist-info}/METADATA

````diff
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: ag2
-Version: 0.8.6a2
+Version: 0.8.7
 Summary: Alias package for pyautogen
 Home-page: https://github.com/ag2ai/ag2
 Author: Chi Wang & Qingyun Wu
@@ -14,119 +14,117 @@ Requires-Python: >=3.9,<3.14
 Description-Content-Type: text/markdown
 License-File: LICENSE
 License-File: NOTICE.md
-Requires-Dist: pyautogen==0.8.6alpha2
+Requires-Dist: pyautogen==0.8.7
 Provides-Extra: anthropic
-Requires-Dist: pyautogen[anthropic]==0.8.6alpha2; extra == "anthropic"
+Requires-Dist: pyautogen[anthropic]==0.8.7; extra == "anthropic"
 Provides-Extra: autobuild
-Requires-Dist: pyautogen[autobuild]==0.8.6alpha2; extra == "autobuild"
+Requires-Dist: pyautogen[autobuild]==0.8.7; extra == "autobuild"
 Provides-Extra: bedrock
-Requires-Dist: pyautogen[bedrock]==0.8.6alpha2; extra == "bedrock"
+Requires-Dist: pyautogen[bedrock]==0.8.7; extra == "bedrock"
 Provides-Extra: blendsearch
-Requires-Dist: pyautogen[blendsearch]==0.8.6alpha2; extra == "blendsearch"
+Requires-Dist: pyautogen[blendsearch]==0.8.7; extra == "blendsearch"
 Provides-Extra: browser-use
-Requires-Dist: pyautogen[browser-use]==0.8.6alpha2; extra == "browser-use"
+Requires-Dist: pyautogen[browser-use]==0.8.7; extra == "browser-use"
 Provides-Extra: captainagent
-Requires-Dist: pyautogen[captainagent]==0.8.6alpha2; extra == "captainagent"
+Requires-Dist: pyautogen[captainagent]==0.8.7; extra == "captainagent"
 Provides-Extra: cerebras
-Requires-Dist: pyautogen[cerebras]==0.8.6alpha2; extra == "cerebras"
+Requires-Dist: pyautogen[cerebras]==0.8.7; extra == "cerebras"
 Provides-Extra: cohere
-Requires-Dist: pyautogen[cohere]==0.8.6alpha2; extra == "cohere"
+Requires-Dist: pyautogen[cohere]==0.8.7; extra == "cohere"
 Provides-Extra: commsagent-discord
-Requires-Dist: pyautogen[commsagent-discord]==0.8.6alpha2; extra == "commsagent-discord"
+Requires-Dist: pyautogen[commsagent-discord]==0.8.7; extra == "commsagent-discord"
 Provides-Extra: commsagent-slack
-Requires-Dist: pyautogen[commsagent-slack]==0.8.6alpha2; extra == "commsagent-slack"
+Requires-Dist: pyautogen[commsagent-slack]==0.8.7; extra == "commsagent-slack"
 Provides-Extra: commsagent-telegram
-Requires-Dist: pyautogen[commsagent-telegram]==0.8.6alpha2; extra == "commsagent-telegram"
+Requires-Dist: pyautogen[commsagent-telegram]==0.8.7; extra == "commsagent-telegram"
 Provides-Extra: cosmosdb
-Requires-Dist: pyautogen[cosmosdb]==0.8.6alpha2; extra == "cosmosdb"
+Requires-Dist: pyautogen[cosmosdb]==0.8.7; extra == "cosmosdb"
 Provides-Extra: crawl4ai
-Requires-Dist: pyautogen[crawl4ai]==0.8.6alpha2; extra == "crawl4ai"
+Requires-Dist: pyautogen[crawl4ai]==0.8.7; extra == "crawl4ai"
 Provides-Extra: deepseek
-Requires-Dist: pyautogen[deepseek]==0.8.6alpha2; extra == "deepseek"
+Requires-Dist: pyautogen[deepseek]==0.8.7; extra == "deepseek"
 Provides-Extra: dev
-Requires-Dist: pyautogen[dev]==0.8.6alpha2; extra == "dev"
+Requires-Dist: pyautogen[dev]==0.8.7; extra == "dev"
 Provides-Extra: docs
-Requires-Dist: pyautogen[docs]==0.8.6alpha2; extra == "docs"
+Requires-Dist: pyautogen[docs]==0.8.7; extra == "docs"
 Provides-Extra: flaml
-Requires-Dist: pyautogen[flaml]==0.8.6alpha2; extra == "flaml"
+Requires-Dist: pyautogen[flaml]==0.8.7; extra == "flaml"
 Provides-Extra: gemini
-Requires-Dist: pyautogen[gemini]==0.8.6alpha2; extra == "gemini"
+Requires-Dist: pyautogen[gemini]==0.8.7; extra == "gemini"
 Provides-Extra: gemini-realtime
-Requires-Dist: pyautogen[gemini-realtime]==0.8.6alpha2; extra == "gemini-realtime"
+Requires-Dist: pyautogen[gemini-realtime]==0.8.7; extra == "gemini-realtime"
 Provides-Extra: google-api
-Requires-Dist: pyautogen[google-api]==0.8.6alpha2; extra == "google-api"
+Requires-Dist: pyautogen[google-api]==0.8.7; extra == "google-api"
 Provides-Extra: google-client
-Requires-Dist: pyautogen[google-client]==0.8.6alpha2; extra == "google-client"
+Requires-Dist: pyautogen[google-client]==0.8.7; extra == "google-client"
 Provides-Extra: google-search
-Requires-Dist: pyautogen[google-search]==0.8.6alpha2; extra == "google-search"
+Requires-Dist: pyautogen[google-search]==0.8.7; extra == "google-search"
 Provides-Extra: graph
-Requires-Dist: pyautogen[graph]==0.8.6alpha2; extra == "graph"
+Requires-Dist: pyautogen[graph]==0.8.7; extra == "graph"
 Provides-Extra: graph-rag-falkor-db
-Requires-Dist: pyautogen[graph-rag-falkor-db]==0.8.6alpha2; extra == "graph-rag-falkor-db"
+Requires-Dist: pyautogen[graph-rag-falkor-db]==0.8.7; extra == "graph-rag-falkor-db"
 Provides-Extra: groq
-Requires-Dist: pyautogen[groq]==0.8.6alpha2; extra == "groq"
+Requires-Dist: pyautogen[groq]==0.8.7; extra == "groq"
 Provides-Extra: interop
-Requires-Dist: pyautogen[interop]==0.8.6alpha2; extra == "interop"
+Requires-Dist: pyautogen[interop]==0.8.7; extra == "interop"
 Provides-Extra: interop-crewai
-Requires-Dist: pyautogen[interop-crewai]==0.8.6alpha2; extra == "interop-crewai"
+Requires-Dist: pyautogen[interop-crewai]==0.8.7; extra == "interop-crewai"
 Provides-Extra: interop-langchain
-Requires-Dist: pyautogen[interop-langchain]==0.8.6alpha2; extra == "interop-langchain"
+Requires-Dist: pyautogen[interop-langchain]==0.8.7; extra == "interop-langchain"
 Provides-Extra: interop-pydantic-ai
-Requires-Dist: pyautogen[interop-pydantic-ai]==0.8.6alpha2; extra == "interop-pydantic-ai"
+Requires-Dist: pyautogen[interop-pydantic-ai]==0.8.7; extra == "interop-pydantic-ai"
 Provides-Extra: jupyter-executor
-Requires-Dist: pyautogen[jupyter-executor]==0.8.6alpha2; extra == "jupyter-executor"
+Requires-Dist: pyautogen[jupyter-executor]==0.8.7; extra == "jupyter-executor"
 Provides-Extra: lint
-Requires-Dist: pyautogen[lint]==0.8.6alpha2; extra == "lint"
+Requires-Dist: pyautogen[lint]==0.8.7; extra == "lint"
 Provides-Extra: lmm
-Requires-Dist: pyautogen[lmm]==0.8.6alpha2; extra == "lmm"
+Requires-Dist: pyautogen[lmm]==0.8.7; extra == "lmm"
 Provides-Extra: long-context
-Requires-Dist: pyautogen[long-context]==0.8.6alpha2; extra == "long-context"
+Requires-Dist: pyautogen[long-context]==0.8.7; extra == "long-context"
 Provides-Extra: mathchat
-Requires-Dist: pyautogen[mathchat]==0.8.6alpha2; extra == "mathchat"
+Requires-Dist: pyautogen[mathchat]==0.8.7; extra == "mathchat"
 Provides-Extra: mcp
-Requires-Dist: pyautogen[mcp]==0.8.6alpha2; extra == "mcp"
-Provides-Extra: mcp-proxy-gen
-Requires-Dist: pyautogen[mcp-proxy-gen]==0.8.6alpha2; extra == "mcp-proxy-gen"
+Requires-Dist: pyautogen[mcp]==0.8.7; extra == "mcp"
 Provides-Extra: mistral
-Requires-Dist: pyautogen[mistral]==0.8.6alpha2; extra == "mistral"
+Requires-Dist: pyautogen[mistral]==0.8.7; extra == "mistral"
 Provides-Extra: neo4j
-Requires-Dist: pyautogen[neo4j]==0.8.6alpha2; extra == "neo4j"
+Requires-Dist: pyautogen[neo4j]==0.8.7; extra == "neo4j"
 Provides-Extra: ollama
-Requires-Dist: pyautogen[ollama]==0.8.6alpha2; extra == "ollama"
+Requires-Dist: pyautogen[ollama]==0.8.7; extra == "ollama"
 Provides-Extra: openai
-Requires-Dist: pyautogen[openai]==0.8.6alpha2; extra == "openai"
+Requires-Dist: pyautogen[openai]==0.8.7; extra == "openai"
 Provides-Extra: openai-realtime
-Requires-Dist: pyautogen[openai-realtime]==0.8.6alpha2; extra == "openai-realtime"
+Requires-Dist: pyautogen[openai-realtime]==0.8.7; extra == "openai-realtime"
 Provides-Extra: rag
-Requires-Dist: pyautogen[rag]==0.8.6alpha2; extra == "rag"
+Requires-Dist: pyautogen[rag]==0.8.7; extra == "rag"
 Provides-Extra: redis
-Requires-Dist: pyautogen[redis]==0.8.6alpha2; extra == "redis"
+Requires-Dist: pyautogen[redis]==0.8.7; extra == "redis"
 Provides-Extra: retrievechat
-Requires-Dist: pyautogen[retrievechat]==0.8.6alpha2; extra == "retrievechat"
+Requires-Dist: pyautogen[retrievechat]==0.8.7; extra == "retrievechat"
 Provides-Extra: retrievechat-couchbase
-Requires-Dist: pyautogen[retrievechat-couchbase]==0.8.6alpha2; extra == "retrievechat-couchbase"
+Requires-Dist: pyautogen[retrievechat-couchbase]==0.8.7; extra == "retrievechat-couchbase"
 Provides-Extra: retrievechat-mongodb
-Requires-Dist: pyautogen[retrievechat-mongodb]==0.8.6alpha2; extra == "retrievechat-mongodb"
+Requires-Dist: pyautogen[retrievechat-mongodb]==0.8.7; extra == "retrievechat-mongodb"
 Provides-Extra: retrievechat-pgvector
-Requires-Dist: pyautogen[retrievechat-pgvector]==0.8.6alpha2; extra == "retrievechat-pgvector"
+Requires-Dist: pyautogen[retrievechat-pgvector]==0.8.7; extra == "retrievechat-pgvector"
 Provides-Extra: retrievechat-qdrant
-Requires-Dist: pyautogen[retrievechat-qdrant]==0.8.6alpha2; extra == "retrievechat-qdrant"
+Requires-Dist: pyautogen[retrievechat-qdrant]==0.8.7; extra == "retrievechat-qdrant"
 Provides-Extra: teachable
-Requires-Dist: pyautogen[teachable]==0.8.6alpha2; extra == "teachable"
+Requires-Dist: pyautogen[teachable]==0.8.7; extra == "teachable"
 Provides-Extra: test
-Requires-Dist: pyautogen[test]==0.8.6alpha2; extra == "test"
+Requires-Dist: pyautogen[test]==0.8.7; extra == "test"
 Provides-Extra: together
-Requires-Dist: pyautogen[together]==0.8.6alpha2; extra == "together"
+Requires-Dist: pyautogen[together]==0.8.7; extra == "together"
 Provides-Extra: twilio
-Requires-Dist: pyautogen[twilio]==0.8.6alpha2; extra == "twilio"
+Requires-Dist: pyautogen[twilio]==0.8.7; extra == "twilio"
 Provides-Extra: types
-Requires-Dist: pyautogen[types]==0.8.6alpha2; extra == "types"
+Requires-Dist: pyautogen[types]==0.8.7; extra == "types"
 Provides-Extra: websockets
-Requires-Dist: pyautogen[websockets]==0.8.6alpha2; extra == "websockets"
+Requires-Dist: pyautogen[websockets]==0.8.7; extra == "websockets"
 Provides-Extra: websurfer
-Requires-Dist: pyautogen[websurfer]==0.8.6alpha2; extra == "websurfer"
+Requires-Dist: pyautogen[websurfer]==0.8.7; extra == "websurfer"
 Provides-Extra: wikipedia
-Requires-Dist: pyautogen[wikipedia]==0.8.6alpha2; extra == "wikipedia"
+Requires-Dist: pyautogen[wikipedia]==0.8.7; extra == "wikipedia"
 
 <a name="readme-top"></a>
 
@@ -191,7 +189,7 @@ The project is currently maintained by a [dynamic group of volunteers](MAINTAINE
 
 ## Getting started
 
-For a step-by-step walk through of AG2 concepts and code, see [Basic Concepts](https://docs.ag2.ai/docs/user-guide/basic-concepts) in our documentation.
+For a step-by-step walk through of AG2 concepts and code, see [Basic Concepts](https://docs.ag2.ai/latest/docs/user-guide/basic-concepts/installing-ag2/) in our documentation.
 
 ### Installation
 
@@ -254,12 +252,22 @@ We have several agent concepts in AG2 to help you build your AI agents. We intro
 
 ### Conversable agent
 
-The
-
+The [ConversableAgent](https://docs.ag2.ai/latest/docs/api-reference/autogen/ConversableAgent) is the fundamental building block of AG2, designed to enable seamless communication between AI entities. This core agent type handles message exchange and response generation, serving as the base class for all agents in the framework.
+
+In the example below, we'll create a simple information validation workflow with two specialized agents that communicate with each other:
+
+Note: Before running this code, make sure to set your `OPENAI_API_KEY` as an environment variable. This example uses `gpt-4o-mini`, but you can replace it with any other [model](https://docs.ag2.ai/latest/docs/user-guide/models/amazon-bedrock) supported by AG2.
 
 ```python
-
+# 1. Import ConversableAgent class
+from autogen import ConversableAgent, LLMConfig
+
+# 2. Define our LLM configuration for OpenAI's GPT-4o mini
+# uses the OPENAI_API_KEY environment variable
+llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")
 
+
+# 3. Create our LLM agent
 with llm_config:
     # Create an AI agent
     assistant = ConversableAgent(
@@ -273,7 +281,7 @@ with llm_config:
         system_message="You are a fact-checking assistant.",
     )
 
-# Start the conversation
+# 4. Start the conversation
 assistant.initiate_chat(
     recipient=fact_checker,
     message="What is AG2?",
@@ -283,25 +291,34 @@ assistant.initiate_chat(
 
 ### Human in the loop
 
-
+Human oversight is crucial for many AI workflows, especially when dealing with critical decisions, creative tasks, or situations requiring expert judgment. AG2 makes integrating human feedback seamless through its human-in-the-loop functionality.
+You can configure how and when human input is solicited using the `human_input_mode` parameter:
 
-
+- `ALWAYS`: Requires human input for every response
+- `NEVER`: Operates autonomously without human involvement
+- `TERMINATE`: Only requests human input to end conversations
 
-
+For convenience, AG2 provides the specialized `UserProxyAgent` class that automatically sets `human_input_mode` to `ALWAYS` and supports code execution:
 
-
+Note: Before running this code, make sure to set your `OPENAI_API_KEY` as an environment variable. This example uses `gpt-4o-mini`, but you can replace it with any other [model](https://docs.ag2.ai/latest/docs/user-guide/models/amazon-bedrock) supported by AG2.
 
 ```python
-
+# 1. Import ConversableAgent and UserProxyAgent classes
+from autogen import ConversableAgent, UserProxyAgent, LLMConfig
+
+# 2. Define our LLM configuration for OpenAI's GPT-4o mini
+# uses the OPENAI_API_KEY environment variable
+llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")
 
-
+
+# 3. Create our LLM agent
 with llm_config:
     assistant = ConversableAgent(
         name="assistant",
         system_message="You are a helpful assistant.",
     )
 
-# Create a human agent with manual input mode
+# 4. Create a human agent with manual input mode
 human = ConversableAgent(
     name="human",
     human_input_mode="ALWAYS"
@@ -309,7 +326,7 @@ human = ConversableAgent(
 # or
 human = UserProxyAgent(name="human", code_execution_config={"work_dir": "coding", "use_docker": False})
 
-# Start the chat
+# 5. Start the chat
 human.initiate_chat(
     recipient=assistant,
     message="Hello! What's 2 + 2?"
@@ -319,45 +336,106 @@ human.initiate_chat(
 
 ### Orchestrating multiple agents
 
-
+AG2 enables sophisticated multi-agent collaboration through flexible orchestration patterns, allowing you to create dynamic systems where specialized agents work together to solve complex problems.
 
-
+The framework offers both custom orchestration and several built-in collaboration patterns including `GroupChat` and `Swarm`.
 
-
+Here's how to implement a collaborative team for curriculum development using GroupChat:
 
-
+Note: Before running this code, make sure to set your `OPENAI_API_KEY` as an environment variable. This example uses `gpt-4o-mini`, but you can replace it with any other [model](https://docs.ag2.ai/latest/docs/user-guide/models/amazon-bedrock) supported by AG2.
 
 ```python
-from autogen import ConversableAgent, GroupChat, GroupChatManager
+from autogen import ConversableAgent, GroupChat, GroupChatManager, LLMConfig
+
+# Put your key in the OPENAI_API_KEY environment variable
+llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")
+
+planner_message = """You are a classroom lesson agent.
+Given a topic, write a lesson plan for a fourth grade class.
+Use the following format:
+<title>Lesson plan title</title>
+<learning_objectives>Key learning objectives</learning_objectives>
+<script>How to introduce the topic to the kids</script>
+"""
+
+reviewer_message = """You are a classroom lesson reviewer.
+You compare the lesson plan to the fourth grade curriculum and provide a maximum of 3 recommended changes.
+Provide only one round of reviews to a lesson plan.
+"""
+
+# 1. Add a separate 'description' for our planner and reviewer agents
+planner_description = "Creates or revises lesson plans."
 
-
-
-planner = ConversableAgent(name="planner", system_message="You create lesson plans.")
-reviewer = ConversableAgent(name="reviewer", system_message="You review lesson plans.")
+reviewer_description = """Provides one round of reviews to a lesson plan
+for the lesson_planner to revise."""
 
-
-
+with llm_config:
+    lesson_planner = ConversableAgent(
+        name="planner_agent",
+        system_message=planner_message,
+        description=planner_description,
+    )
+
+    lesson_reviewer = ConversableAgent(
+        name="reviewer_agent",
+        system_message=reviewer_message,
+        description=reviewer_description,
+    )
+
+# 2. The teacher's system message can also be used as a description, so we don't define it
+teacher_message = """You are a classroom teacher.
+You decide topics for lessons and work with a lesson planner.
+and reviewer to create and finalise lesson plans.
+When you are happy with a lesson plan, output "DONE!".
+"""
 
-
-
+with llm_config:
+    teacher = ConversableAgent(
+        name="teacher_agent",
+        system_message=teacher_message,
+        # 3. Our teacher can end the conversation by saying DONE!
+        is_termination_msg=lambda x: "DONE!" in (x.get("content", "") or "").upper(),
+    )
+
+# 4. Create the GroupChat with agents and selection method
+groupchat = GroupChat(
+    agents=[teacher, lesson_planner, lesson_reviewer],
+    speaker_selection_method="auto",
+    messages=[],
+)
+
+# 5. Our GroupChatManager will manage the conversation and uses an LLM to select the next agent
+manager = GroupChatManager(
+    name="group_manager",
+    groupchat=groupchat,
+    llm_config=llm_config,
+)
 
-#
-teacher.initiate_chat(
+# 6. Initiate the chat with the GroupChatManager as the recipient
+teacher.initiate_chat(
+    recipient=manager,
+    message="Today, let's introduce our kids to the solar system."
+)
 ```
 
-
+When executed, this code creates a collaborative system where the teacher initiates the conversation, and the lesson planner and reviewer agents work together to create and refine a lesson plan. The GroupChatManager orchestrates the conversation, selecting the next agent to respond based on the context of the discussion.
 
-
+For workflows requiring more structured processes, explore the Swarm pattern in the detailed [documentation](https://docs.ag2.ai/latest/docs/user-guide/advanced-concepts/conversation-patterns-deep-dive).
 
 ### Tools
 
 Agents gain significant utility through tools as they provide access to external data, APIs, and functionality.
 
+Note: Before running this code, make sure to set your `OPENAI_API_KEY` as an environment variable. This example uses `gpt-4o-mini`, but you can replace it with any other [model](https://docs.ag2.ai/latest/docs/user-guide/models/amazon-bedrock) supported by AG2.
+
 ```python
 from datetime import datetime
 from typing import Annotated
 
-from autogen import ConversableAgent, register_function
+from autogen import ConversableAgent, register_function, LLMConfig
+
+# Put your key in the OPENAI_API_KEY environment variable
+llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")
 
 # 1. Our tool, returns the day of the week for a given date
 def get_weekday(date_string: Annotated[str, "Format: YYYY-MM-DD"]) -> str:
@@ -366,10 +444,10 @@ def get_weekday(date_string: Annotated[str, "Format: YYYY-MM-DD"]) -> str:
 
 # 2. Agent for determining whether to run the tool
 with llm_config:
-
-
-
-
+    date_agent = ConversableAgent(
+        name="date_agent",
+        system_message="You get the day of the week for a given date.",
+    )
 
 # 3. And an agent for executing the tool
 executor_agent = ConversableAgent(
@@ -389,16 +467,18 @@ register_function(
 chat_result = executor_agent.initiate_chat(
     recipient=date_agent,
     message="I was born on the 25th of March 1995, what day was it?",
-    max_turns=
+    max_turns=2,
 )
+
+print(chat_result.chat_history[-1]["content"])
 ```
 
 ### Advanced agentic design patterns
 
 AG2 supports more advanced concepts to help you build your AI agent workflows. You can find more information in the documentation.
 
-- [Structured Output](https://docs.ag2.ai/latest/docs/user-guide/basic-concepts/llm-configuration/
-- [Ending a conversation](https://docs.ag2.ai/docs/user-guide/basic-concepts/ending-a-chat)
+- [Structured Output](https://docs.ag2.ai/latest/docs/user-guide/basic-concepts/llm-configuration/structured-outputs)
+- [Ending a conversation](https://docs.ag2.ai/latest/docs/user-guide/basic-concepts/orchestration/ending-a-chat/)
 - [Retrieval Augmented Generation (RAG)](https://docs.ag2.ai/docs/user-guide/advanced-concepts/rag)
 - [Code Execution](https://docs.ag2.ai/docs/user-guide/advanced-concepts/code-execution)
 - [Tools with Secrets](https://docs.ag2.ai/docs/user-guide/basic-concepts/tools/tools-with-secrets)
````
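The dependency changes above are mechanical: ag2 is an alias package (Summary: "Alias package for pyautogen"), so this release bumps every pin from the 0.8.6a2 prerelease to `pyautogen==0.8.7`, forwards each extra to the matching pyautogen extra, and drops the `mcp-proxy-gen` extra that existed in 0.8.6a2. To confirm what an installed copy of ag2 actually pins, a minimal sketch using only the standard library (assuming ag2 is already installed in the current environment):

```python
# Minimal sketch: list the alias package's pins from its installed metadata.
# Assumes ag2 (for example 0.8.7) is installed in the current environment.
from importlib.metadata import requires, version

print("ag2 version:", version("ag2"))

# Prints each Requires-Dist entry from METADATA, e.g.
# 'pyautogen[openai]==0.8.7; extra == "openai"'.
for requirement in requires("ag2") or []:
    print(requirement)
```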
ag2-0.8.7.dist-info/RECORD (new file)

```diff
@@ -0,0 +1,6 @@
+ag2-0.8.7.dist-info/LICENSE,sha256=GEFQVNayAR-S_rQD5l8hPdgvgyktVdy4Bx5-v90IfRI,11384
+ag2-0.8.7.dist-info/METADATA,sha256=AhRU_LulV9enE90ihfjPWL_7DikP8Vc0OHf7nkTPyi0,24471
+ag2-0.8.7.dist-info/NOTICE.md,sha256=07iCPQGbth4pQrgkSgZinJGT5nXddkZ6_MGYcBd2oiY,1134
+ag2-0.8.7.dist-info/WHEEL,sha256=tZoeGjtWxWRfdplE7E3d45VPlLNQnvbKiYnx7gwAy8A,92
+ag2-0.8.7.dist-info/top_level.txt,sha256=AbpHGcgLb-kRsJGnwFEktk7uzpZOCcBY74-YBdrKVGs,1
+ag2-0.8.7.dist-info/RECORD,,
```
ag2-0.8.6a2.dist-info/RECORD (deleted)

```diff
@@ -1,6 +0,0 @@
-ag2-0.8.6a2.dist-info/LICENSE,sha256=GEFQVNayAR-S_rQD5l8hPdgvgyktVdy4Bx5-v90IfRI,11384
-ag2-0.8.6a2.dist-info/METADATA,sha256=hOwzD2uUQwwOBVaMYReXRr92o1aW14VKATPmpWq8lOI,20536
-ag2-0.8.6a2.dist-info/NOTICE.md,sha256=07iCPQGbth4pQrgkSgZinJGT5nXddkZ6_MGYcBd2oiY,1134
-ag2-0.8.6a2.dist-info/WHEEL,sha256=tZoeGjtWxWRfdplE7E3d45VPlLNQnvbKiYnx7gwAy8A,92
-ag2-0.8.6a2.dist-info/top_level.txt,sha256=AbpHGcgLb-kRsJGnwFEktk7uzpZOCcBY74-YBdrKVGs,1
-ag2-0.8.6a2.dist-info/RECORD,,
```
The remaining files (LICENSE, NOTICE.md, WHEEL, top_level.txt) are unchanged between the two releases.
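Each RECORD line has the form `path,sha256=<digest>,<size>`, where the digest is the URL-safe base64 encoding of the file's SHA-256 hash with the trailing `=` padding stripped. The unchanged files keep identical digests across the two releases, while METADATA changes both digest and size (20536 → 24471 bytes), consistent with the dependency and README edits above. A minimal sketch for recomputing such an entry from a locally unpacked wheel (the file path below is only an example):

```python
# Minimal sketch: rebuild a wheel RECORD entry ("path,sha256=<digest>,<size>")
# for a local file so it can be compared against the values listed above.
import base64
import hashlib
from pathlib import Path

def record_entry(path: str) -> str:
    data = Path(path).read_bytes()
    # URL-safe base64 of the SHA-256 digest, with '=' padding removed.
    digest = base64.urlsafe_b64encode(hashlib.sha256(data).digest()).rstrip(b"=").decode()
    return f"{path},sha256={digest},{len(data)}"

# Hypothetical path into an unpacked ag2-0.8.7 wheel:
# print(record_entry("ag2-0.8.7.dist-info/METADATA"))
```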