massgen 0.0.3__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release: this version of massgen might be problematic.
- massgen/__init__.py +94 -0
- massgen/agent_config.py +507 -0
- massgen/backend/CLAUDE_API_RESEARCH.md +266 -0
- massgen/backend/Function calling openai responses.md +1161 -0
- massgen/backend/GEMINI_API_DOCUMENTATION.md +410 -0
- massgen/backend/OPENAI_RESPONSES_API_FORMAT.md +65 -0
- massgen/backend/__init__.py +25 -0
- massgen/backend/base.py +180 -0
- massgen/backend/chat_completions.py +228 -0
- massgen/backend/claude.py +661 -0
- massgen/backend/gemini.py +652 -0
- massgen/backend/grok.py +187 -0
- massgen/backend/response.py +397 -0
- massgen/chat_agent.py +440 -0
- massgen/cli.py +686 -0
- massgen/configs/README.md +293 -0
- massgen/configs/creative_team.yaml +53 -0
- massgen/configs/gemini_4o_claude.yaml +31 -0
- massgen/configs/news_analysis.yaml +51 -0
- massgen/configs/research_team.yaml +51 -0
- massgen/configs/single_agent.yaml +18 -0
- massgen/configs/single_flash2.5.yaml +44 -0
- massgen/configs/technical_analysis.yaml +51 -0
- massgen/configs/three_agents_default.yaml +31 -0
- massgen/configs/travel_planning.yaml +51 -0
- massgen/configs/two_agents.yaml +39 -0
- massgen/frontend/__init__.py +20 -0
- massgen/frontend/coordination_ui.py +945 -0
- massgen/frontend/displays/__init__.py +24 -0
- massgen/frontend/displays/base_display.py +83 -0
- massgen/frontend/displays/rich_terminal_display.py +3497 -0
- massgen/frontend/displays/simple_display.py +93 -0
- massgen/frontend/displays/terminal_display.py +381 -0
- massgen/frontend/logging/__init__.py +9 -0
- massgen/frontend/logging/realtime_logger.py +197 -0
- massgen/message_templates.py +431 -0
- massgen/orchestrator.py +1222 -0
- massgen/tests/__init__.py +10 -0
- massgen/tests/multi_turn_conversation_design.md +214 -0
- massgen/tests/multiturn_llm_input_analysis.md +189 -0
- massgen/tests/test_case_studies.md +113 -0
- massgen/tests/test_claude_backend.py +310 -0
- massgen/tests/test_grok_backend.py +160 -0
- massgen/tests/test_message_context_building.py +293 -0
- massgen/tests/test_rich_terminal_display.py +378 -0
- massgen/tests/test_v3_3agents.py +117 -0
- massgen/tests/test_v3_simple.py +216 -0
- massgen/tests/test_v3_three_agents.py +272 -0
- massgen/tests/test_v3_two_agents.py +176 -0
- massgen/utils.py +79 -0
- massgen/v1/README.md +330 -0
- massgen/v1/__init__.py +91 -0
- massgen/v1/agent.py +605 -0
- massgen/v1/agents.py +330 -0
- massgen/v1/backends/gemini.py +584 -0
- massgen/v1/backends/grok.py +410 -0
- massgen/v1/backends/oai.py +571 -0
- massgen/v1/cli.py +351 -0
- massgen/v1/config.py +169 -0
- massgen/v1/examples/fast-4o-mini-config.yaml +44 -0
- massgen/v1/examples/fast_config.yaml +44 -0
- massgen/v1/examples/production.yaml +70 -0
- massgen/v1/examples/single_agent.yaml +39 -0
- massgen/v1/logging.py +974 -0
- massgen/v1/main.py +368 -0
- massgen/v1/orchestrator.py +1138 -0
- massgen/v1/streaming_display.py +1190 -0
- massgen/v1/tools.py +160 -0
- massgen/v1/types.py +245 -0
- massgen/v1/utils.py +199 -0
- massgen-0.0.3.dist-info/METADATA +568 -0
- massgen-0.0.3.dist-info/RECORD +76 -0
- massgen-0.0.3.dist-info/WHEEL +5 -0
- massgen-0.0.3.dist-info/entry_points.txt +2 -0
- massgen-0.0.3.dist-info/licenses/LICENSE +204 -0
- massgen-0.0.3.dist-info/top_level.txt +1 -0
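For reference, a wheel published this way is typically installed from the package registry with pip, or added to a uv-managed project (the registry itself is not named in this diff):

```bash
pip install massgen==0.0.3
# or, inside a uv-managed project
uv add massgen==0.0.3
```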
massgen/configs/README.md
@@ -0,0 +1,293 @@

# MassGen Configuration Examples

This directory contains sample configuration files for MassGen CLI usage. Each configuration is optimized for specific use cases and demonstrates different agent collaboration patterns.

## 📁 Available Configurations

### 🤖 Single Agent Configurations

- **`single_agent.yaml`** - Basic single agent setup with Gemini
  - Uses Gemini 2.5 Flash model
  - Web search enabled for current information
  - Rich terminal display with token usage tracking

### 👥 Multi-Agent Configurations

#### General Purpose Teams

- **`three_agents_default.yaml`** - Default three-agent setup with frontier models
  - **Gemini 2.5 Flash**: Web search enabled
  - **GPT-4o-mini**: Web search and code interpreter enabled
  - **Grok-3-mini**: Web search with citations
  - Best for general questions requiring diverse perspectives

- **`gemini_4o_claude.yaml`** - Premium three-agent configuration
  - **Gemini 2.5 Flash**: Web search enabled
  - **GPT-4o**: Full GPT-4o with web search and code interpreter
  - **Claude 3.5 Haiku**: Web search with citations
  - Higher quality responses for complex tasks

- **`two_agents.yaml`** - Focused two-agent collaboration
  - **Primary Agent (GPT-4o)**: Comprehensive research and analysis
  - **Secondary Agent (GPT-4o-mini)**: Review and refinement
  - Efficient for tasks needing depth with validation

#### Specialized Teams

- **`research_team.yaml`** - Academic/technical research configuration
  - **Information Gatherer (Grok)**: Web search specialist
  - **Domain Expert (GPT-4o)**: Deep analysis with code interpreter
  - **Synthesizer (GPT-4o-mini)**: Integration and summarization
  - Low temperature (0.3) for accuracy

- **`creative_team.yaml`** - Creative writing and storytelling
  - **Storyteller (GPT-4o)**: Narrative creation
  - **Editor (GPT-4o-mini)**: Structure and flow refinement
  - **Critic (Grok-3-mini)**: Literary analysis
  - High temperature (0.8) for creativity

- **`news_analysis.yaml`** - Current events and news synthesis
  - **News Gatherer (GPT-4o)**: Finding current events
  - **Trend Analyst (Grok-3-mini)**: Pattern identification
  - **News Synthesizer (GPT-4o-mini)**: Balanced summaries
  - Medium temperature (0.4) for balanced analysis

- **`technical_analysis.yaml`** - Technical queries and cost estimation
  - **Technical Researcher (GPT-4o)**: Specifications and documentation
  - **Cost Analyst (Grok-3-mini)**: Pricing and cost calculations
  - **Technical Advisor (GPT-4o-mini)**: Practical recommendations
  - Low temperature (0.2) for precision

- **`travel_planning.yaml`** - Travel recommendations and planning
  - **Travel Researcher (GPT-4o)**: Destination information
  - **Local Expert (Grok-3-mini)**: Insider knowledge
  - **Travel Planner (GPT-4o-mini)**: Itinerary organization
  - Medium temperature (0.6) for balanced suggestions

## 🚀 Usage Examples

### Single Agent Mode
```bash
# Using a configuration file
uv run python -m massgen.cli --config single_agent.yaml "What is machine learning?"

# Quick setup without a config file
uv run python -m massgen.cli --model gemini-2.5-flash "Explain quantum computing"
```

### Multi-Agent Mode
```bash
# Default three agents for general questions
uv run python -m massgen.cli --config three_agents_default.yaml "Compare renewable energy technologies"

# Premium agents for complex analysis
uv run python -m massgen.cli --config gemini_4o_claude.yaml "Analyze the implications of quantum computing on cryptography"

# Specialized teams for specific tasks
uv run python -m massgen.cli --config research_team.yaml "Latest developments in CRISPR gene editing"
uv run python -m massgen.cli --config creative_team.yaml "Write a short story about AI consciousness"
uv run python -m massgen.cli --config news_analysis.yaml "What happened in tech news this week?"
uv run python -m massgen.cli --config technical_analysis.yaml "Cost analysis for running LLMs at scale"
uv run python -m massgen.cli --config travel_planning.yaml "Plan a 5-day trip to Tokyo in spring"
```

### Interactive Mode
```bash
# Start an interactive session with any configuration
uv run python -m massgen.cli --config gemini_4o_claude.yaml

# Commands in interactive mode:
# /clear - Clear conversation history
# /quit, /exit, /q - Exit the session
```

## 📋 Configuration Structure

**Single Agent Configuration:**

Use the `agent` field to define a single agent with its backend and settings:

```yaml
agent:
  id: "<agent_name>"
  backend:
    type: "claude" | "gemini" | "grok" | "openai"  # Backend type (optional; it can be inferred from the model name)
    model: "<model_name>"        # Model name
    api_key: "<optional_key>"    # API key for the backend. Uses env vars by default.
  system_message: "..."          # System message for the single agent
```

**Multi-Agent Configuration:**

Use the `agents` field to define multiple agents, each with its own backend and config:

```yaml
agents:  # Multiple agents (alternative to 'agent')
  - id: "<agent1_name>"
    backend:
      type: "claude" | "gemini" | "grok" | "openai"  # Backend type (optional; it can be inferred from the model name)
      model: "<model_name>"      # Model name
      api_key: "<optional_key>"  # API key for the backend. Uses env vars by default.
    system_message: "..."        # System message for this agent
  - id: "..."
    backend:
      type: "..."
      model: "..."
      ...
    system_message: "..."
```
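
As a concrete illustration of the skeleton above (the ids and system messages here are invented for this example; the backend fields mirror the bundled configs):

```yaml
agents:
  - id: "researcher"
    backend:
      type: "gemini"
      model: "gemini-2.5-flash"
      enable_web_search: true    # capability flags are covered in the next section
    system_message: "Research the question and cite current sources."

  - id: "reviewer"
    backend:
      type: "openai"
      model: "gpt-4o-mini"
    system_message: "Review and refine the researcher's answer for accuracy and clarity."
```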

**Backend Configuration:**

Detailed parameters for each agent's backend can be specified using the following configuration formats:

#### Claude

```yaml
backend:
  type: "claude"
  model: "claude-sonnet-4-20250514"  # Model name
  api_key: "<optional_key>"          # API key for the backend. Uses env vars by default.
  temperature: 0.7                   # Creativity vs consistency (0.0-1.0)
  max_tokens: 2500                   # Maximum response length
  enable_web_search: true            # Web search capability
  enable_code_execution: true        # Code execution capability
```

#### Gemini

```yaml
backend:
  type: "gemini"
  model: "gemini-2.5-flash"          # Model name
  api_key: "<optional_key>"          # API key for the backend. Uses env vars by default.
  temperature: 0.7                   # Creativity vs consistency (0.0-1.0)
  max_tokens: 2500                   # Maximum response length
  enable_web_search: true            # Web search capability
  enable_code_execution: true        # Code execution capability
```

#### Grok

```yaml
backend:
  type: "grok"
  model: "grok-3-mini"               # Model name
  api_key: "<optional_key>"          # API key for the backend. Uses env vars by default.
  temperature: 0.7                   # Creativity vs consistency (0.0-1.0)
  max_tokens: 2500                   # Maximum response length
  enable_web_search: true            # Web search capability
  return_citations: true             # Include search result citations
  max_search_results: 10             # Maximum search results to use
  search_mode: "auto"                # Search strategy: "auto", "fast", "thorough"
```

#### OpenAI

```yaml
backend:
  type: "openai"
  model: "gpt-4o"                    # Model name
  api_key: "<optional_key>"          # API key for the backend. Uses env vars by default.
  temperature: 0.7                   # Creativity vs consistency (0.0-1.0; o-series models don't support this)
  max_tokens: 2500                   # Maximum response length (o-series models don't support this)
  enable_web_search: true            # Web search capability
  enable_code_interpreter: true      # Code interpreter capability
```

**UI Configuration:**

Configure how MassGen displays information and handles logging during execution:

```yaml
ui:
  display_type: "rich_terminal" | "terminal" | "simple"  # Display format for agent interactions
  logging_enabled: true | false                          # Enable/disable real-time logging
```

- `display_type`: Controls the visual presentation of agent interactions
  - `"rich_terminal"`: Full-featured display with multi-region layout, live status updates, and colored output
  - `"terminal"`: Standard terminal display with basic formatting and sequential output
  - `"simple"`: Plain text output without any formatting or special display features
- `logging_enabled`: When `true`, saves detailed timestamped logs of agent outputs and system status

**Advanced Parameters:**
```yaml
# Global backend parameters
backend_params:
  temperature: 0.7
  max_tokens: 2000
  enable_web_search: true        # Web search capability (all backends)
  enable_code_interpreter: true  # OpenAI only
  enable_code_execution: true    # Gemini/Claude only
```
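
For instance, `research_team.yaml` in this directory pairs per-agent `backend` settings with a global `backend_params` block; a trimmed sketch of that shape (one agent shown, comments added here):

```yaml
agents:
  - id: "information_gatherer"
    backend:
      type: "grok"
      model: "grok-3-mini"
      enable_web_search: true    # per-agent capability flag

# Global defaults for the run, alongside the agents list
backend_params:
  temperature: 0.3               # lower temperature for more focused research
  max_tokens: 3000               # longer responses for detailed research
```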

## 🔧 Environment Variables

Set these in your `.env` file:

```bash
# API Keys
ANTHROPIC_API_KEY="your-anthropic-key"
GEMINI_API_KEY="your-gemini-key"
OPENAI_API_KEY="your-openai-key"
XAI_API_KEY="your-xai-key"
```
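
Alternatively, the same keys can be exported in the shell before launching; this assumes MassGen falls back to these environment variables when `api_key` is not set in the config, as the `api_key` comments above indicate:

```bash
export GEMINI_API_KEY="your-gemini-key"
export OPENAI_API_KEY="your-openai-key"
export XAI_API_KEY="your-xai-key"
uv run python -m massgen.cli --config three_agents_default.yaml "Compare renewable energy technologies"
```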

## 💡 Backend Capabilities

| Backend | Live Search | Code Execution |
|---------|:-----------:|:--------------:|
| **Claude** | ✅ | ✅ |
| **OpenAI** | ✅ | ✅ |
| **Grok** | ✅ | ❌ |
| **Gemini** | ✅ | ✅ |
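
One way to read the table as configuration is one agent per backend with the flag each one uses (ids are illustrative; flag names and models are taken from the per-backend blocks above):

```yaml
agents:
  - id: "claude_agent"
    backend:
      type: "claude"
      model: "claude-3-5-haiku-20241022"
      enable_web_search: true
      enable_code_execution: true     # Claude/Gemini flag name
  - id: "openai_agent"
    backend:
      type: "openai"
      model: "gpt-4o-mini"
      enable_web_search: true
      enable_code_interpreter: true   # OpenAI flag name
  - id: "grok_agent"
    backend:
      type: "grok"
      model: "grok-3-mini"
      enable_web_search: true
      return_citations: true          # Grok has no code execution; citations instead
  - id: "gemini_agent"
    backend:
      type: "gemini"
      model: "gemini-2.5-flash"
      enable_web_search: true
      enable_code_execution: true
```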

## 📚 Best Practices

1. **Choose the Right Configuration**
   - Use `three_agents_default.yaml` for general questions
   - Use specialized teams for domain-specific tasks
   - Use `single_agent.yaml` for quick, simple queries

2. **Temperature Settings** (see the snippet after this list)
   - Low (0.1-0.3): Technical analysis, factual information
   - Medium (0.4-0.6): Balanced tasks, general questions
   - High (0.7-0.9): Creative writing, brainstorming

3. **Cost Optimization**
   - Use mini models (gpt-4o-mini, grok-3-mini) for routine tasks
   - Reserve premium models (gpt-4o, claude-3-5-sonnet) for complex analysis
   - Single agent mode is most cost-effective for simple queries

4. **Tool Usage**
   - Enable web search for current events and real-time information
   - Use code interpreter for data analysis and calculations
   - Combine tools for comprehensive research tasks
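
A minimal sketch of the temperature bands as configuration (pick one value; the same `temperature` key also works inside a per-agent `backend` block):

```yaml
backend_params:
  temperature: 0.2    # low: technical analysis, factual information
  # temperature: 0.5  # medium: balanced tasks, general questions
  # temperature: 0.8  # high: creative writing, brainstorming
```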

## 🛠️ Creating Custom Configurations

1. Copy an existing configuration as a template
2. Modify agent roles and system messages for your use case
3. Adjust temperature and max_tokens based on task requirements
4. Enable/disable tools based on agent needs
5. Test with sample queries to refine the configuration

Example custom configuration for code review:
```yaml
agents:
  - id: "code_analyzer"
    backend:
      type: "openai"
      model: "gpt-4o"
      temperature: 0.2
      enable_code_interpreter: true
    system_message: "Analyze code for bugs, security issues, and best practices"

  - id: "refactoring_expert"
    backend:
      type: "claude"
      model: "claude-3-5-sonnet-20250514"
      temperature: 0.3
    system_message: "Suggest code improvements and refactoring opportunities"
```
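
Assuming the example is saved as `code_review.yaml` (a filename chosen here for illustration), it runs like any bundled config:

```bash
uv run python -m massgen.cli --config code_review.yaml "Review massgen/orchestrator.py for potential bugs"
```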
massgen/configs/creative_team.yaml
@@ -0,0 +1,53 @@
# MassGen Creative Team Configuration
# Optimized for creative writing and storytelling tasks

agents:
  - id: "storyteller"
    backend:
      type: "openai"
      model: "gpt-4o"
      temperature: 0.8
      max_tokens: 3000
    system_message: |
      You are a creative storyteller who excels at crafting engaging narratives
      with rich characters, vivid descriptions, and compelling plot structures.
      Focus on:
      - Character development and emotional depth
      - Vivid imagery and sensory details
      - Strong narrative arc and pacing
      - Creative and original concepts

  - id: "editor"
    backend:
      type: "openai"
      model: "gpt-4o-mini"
      temperature: 0.8
      max_tokens: 3000
    system_message: |
      You are a skilled editor who evaluates creative works for structure,
      flow, and impact. You excel at identifying what makes a story compelling.
      Focus on:
      - Narrative structure and coherence
      - Character consistency and development
      - Emotional resonance and engagement
      - Overall polish and readability

  - id: "critic"
    backend:
      type: "grok"
      model: "grok-3-mini"
      temperature: 0.8
      max_tokens: 3000
    system_message: |
      You are a literary critic who analyzes creative works with a discerning eye.
      You evaluate originality, thematic depth, and artistic merit. Focus on:
      - Originality and creativity
      - Thematic depth and meaning
      - Technical craft and execution
      - Overall artistic impact

ui:
  display_type: "rich_terminal"
  logging_enabled: true

# Creative-optimized settings
massgen/configs/gemini_4o_claude.yaml
@@ -0,0 +1,31 @@
# MassGen Three Agent Configuration
# Gemini-2.5-flash, GPT-4o-mini, and Grok-3-mini with builtin tools enabled

agents:
  - id: "gemini2.5flash"
    backend:
      type: "gemini"
      model: "gemini-2.5-flash"
      enable_web_search: true
      # enable_code_execution: true
    # system_message: "You are a helpful AI assistant with web search and code execution capabilities. For any question involving current events, recent information, or real-time data, ALWAYS use web search first."

  - id: "gpt-4o"
    backend:
      type: "openai"
      model: "gpt-4o"
      enable_web_search: true
      enable_code_interpreter: true
    # system_message: "You are a helpful AI assistant with web search and code execution capabilities. For any question involving current events, recent information, or real-time data, ALWAYS use web search first."

  - id: "claude-3-5-haiku"
    backend:
      type: "claude"
      model: "claude-3-5-haiku-20241022"
      enable_web_search: true
      return_citations: true
    # system_message: "You are a helpful AI assistant with web search capabilities. For any question involving current events, recent information, or real-time data, ALWAYS use web search first."

ui:
  display_type: "rich_terminal"
  logging_enabled: true
massgen/configs/news_analysis.yaml
@@ -0,0 +1,51 @@
# MassGen News Analysis Configuration
# Specialized for current events and news synthesis

agents:
  - id: "news_gatherer"
    backend:
      type: "openai"
      model: "gpt-4o"
      temperature: 0.4
      max_tokens: 2500
    system_message: |
      You are a news researcher who specializes in finding and curating current
      events and breaking news. Focus on:
      - Finding the most recent and relevant news
      - Identifying authoritative news sources
      - Covering diverse perspectives and angles
      - Prioritizing high-impact and significant developments

  - id: "trend_analyst"
    backend:
      type: "grok"
      model: "grok-3-mini"
      temperature: 0.4
      max_tokens: 2500
    system_message: |
      You are a trend analyst who identifies patterns, significance, and
      implications of current events. Focus on:
      - Analyzing the broader context and implications
      - Identifying emerging trends and patterns
      - Connecting related events and developments
      - Assessing long-term significance

  - id: "news_synthesizer"
    backend:
      type: "openai"
      model: "gpt-4o-mini"
      temperature: 0.4
      max_tokens: 2500
    system_message: |
      You are a news editor who synthesizes multiple news sources into
      comprehensive, balanced summaries. Focus on:
      - Creating coherent narratives from multiple sources
      - Balancing different perspectives
      - Organizing information by importance and relevance
      - Providing clear, accessible summaries

ui:
  display_type: "rich_terminal"
  logging_enabled: true

# News-optimized settings
massgen/configs/research_team.yaml
@@ -0,0 +1,51 @@
# MassGen Research Team Configuration
# Specialized configuration for research and analysis tasks

agents:
  - id: "information_gatherer"
    backend:
      type: "grok"
      model: "grok-3-mini"
      enable_web_search: true
    system_message: |
      You are an information gathering specialist with access to web search.
      Your role is to find the most current and relevant information on any topic.
      Focus on:
      - Finding authoritative sources
      - Gathering diverse perspectives
      - Identifying recent developments
      - Providing comprehensive coverage of the topic

  - id: "domain_expert"
    backend:
      type: "openai"
      model: "gpt-4o"
      enable_code_interpreter: true
    system_message: |
      You are a domain expert with analytical capabilities. Your role is to
      provide deep subject matter expertise and analytical insights. Focus on:
      - Technical accuracy and depth
      - Identifying key concepts and relationships
      - Providing expert-level analysis
      - Validating information for accuracy

  - id: "synthesizer"
    backend:
      type: "openai"
      model: "gpt-4o-mini"
    system_message: |
      You are a synthesis specialist who excels at combining multiple perspectives
      into coherent, actionable insights. Your role is to:
      - Integrate findings from multiple sources
      - Identify patterns and themes
      - Create clear, structured summaries
      - Highlight practical implications and recommendations

ui:
  display_type: "rich_terminal"
  logging_enabled: true

# Research-optimized settings
backend_params:
  temperature: 0.3  # Lower temperature for more focused research
  max_tokens: 3000  # Longer responses for detailed research
massgen/configs/single_agent.yaml
@@ -0,0 +1,18 @@
# Example Gemini configuration for MassGen
# Usage: python -m massgen.cli --config example_gemini_config.yaml "Your question here"

# Single agent configuration
agent:
  id: "gemini_agent"
  # system_message: "You are a helpful AI assistant powered by Google Gemini."
  backend:
    type: "gemini"
    model: "gemini-2.5-flash"
    enable_web_search: true
    # enable_code_execution: true
  system_message: "You are a helpful assistant"

# Display configuration
ui:
  display_type: "rich_terminal"
  logging_enabled: true
massgen/configs/single_flash2.5.yaml
@@ -0,0 +1,44 @@
# Example Gemini configuration for MassGen
# Usage: python -m massgen.cli --config example_gemini_config.yaml "Your question here"

# Single agent configuration
agent:
  id: "gemini_agent"
  # system_message: "You are a helpful AI assistant powered by Google Gemini."
  backend:
    type: "gemini"
    model: "gemini-2.5-flash"
    enable_web_search: true
    # enable_code_execution: true

# Alternative: Multi-agent configuration
# agents:
#   research_agent:
#     system_message: "You are a research specialist. Use web search for current information."
#     backend:
#       type: "gemini"
#       model: "gemini-2.5-flash"
#       enable_web_search: true
#       enable_code_execution: false
#
#   compute_agent:
#     system_message: "You are a computational specialist. Use code execution for calculations."
#     backend:
#       type: "gemini"
#       model: "gemini-2.5-flash"
#       enable_web_search: false
#       enable_code_execution: true
#
#   analyst_agent:
#     system_message: "You are a general analyst providing clear reasoning."
#     backend:
#       type: "gemini"
#       model: "gemini-2.5-flash"
#       enable_web_search: false
#       enable_code_execution: false

# Display configuration
display:
  type: "rich_terminal"
  show_agent_details: true
  show_token_usage: true
massgen/configs/technical_analysis.yaml
@@ -0,0 +1,51 @@
# MassGen Technical Analysis Configuration
# Optimized for technical queries and cost estimation

agents:
  - id: "technical_researcher"
    backend:
      type: "openai"
      model: "gpt-4o"
      temperature: 0.2
      max_tokens: 3000
    system_message: |
      You are a technical researcher who specializes in finding accurate
      technical specifications, pricing, and system requirements. Focus on:
      - Finding official documentation and specifications
      - Gathering accurate pricing and cost information
      - Understanding technical requirements and constraints
      - Providing detailed technical analysis

  - id: "cost_analyst"
    backend:
      type: "grok"
      model: "grok-3-mini"
      temperature: 0.2
      max_tokens: 3000
    system_message: |
      You are a cost analyst who excels at calculating and estimating
      technical costs and resource requirements. Focus on:
      - Accurate cost calculations and estimations
      - Understanding pricing models and structures
      - Identifying cost variables and factors
      - Providing detailed cost breakdowns

  - id: "technical_advisor"
    backend:
      type: "openai"
      model: "gpt-4o-mini"
      temperature: 0.2
      max_tokens: 3000
    system_message: |
      You are a technical advisor who synthesizes technical information
      into practical recommendations and insights. Focus on:
      - Translating technical details into practical advice
      - Identifying key considerations and trade-offs
      - Providing clear recommendations
      - Highlighting important caveats and limitations

ui:
  display_type: "rich_terminal"
  logging_enabled: true

# Technical analysis settings
massgen/configs/three_agents_default.yaml
@@ -0,0 +1,31 @@
# MassGen Three Agent Configuration
# Gemini-2.5-flash, GPT-4o-mini, and Grok-3-mini with builtin tools enabled

agents:
  - id: "gemini2.5flash"
    backend:
      type: "gemini"
      model: "gemini-2.5-flash"
      enable_web_search: true
      # enable_code_execution: true
    # system_message: "You are a helpful AI assistant with web search and code execution capabilities. For any question involving current events, recent information, or real-time data, ALWAYS use web search first."

  - id: "4omini"
    backend:
      type: "openai"
      model: "gpt-4o-mini"
      enable_web_search: true
      enable_code_interpreter: true
    # system_message: "You are a helpful AI assistant with web search and code execution capabilities. For any question involving current events, recent information, or real-time data, ALWAYS use web search first."

  - id: "grok3mini"
    backend:
      type: "grok"
      model: "grok-3-mini"
      enable_web_search: true
      return_citations: true
    # system_message: "You are a helpful AI assistant with web search capabilities. For any question involving current events, recent information, or real-time data, ALWAYS use web search first."

ui:
  display_type: "rich_terminal"
  logging_enabled: true
massgen/configs/travel_planning.yaml
@@ -0,0 +1,51 @@
# MassGen Travel Planning Configuration
# Specialized for travel recommendations and destination guides

agents:
  - id: "travel_researcher"
    backend:
      type: "openai"
      model: "gpt-4o"
      temperature: 0.6
      max_tokens: 3000
    system_message: |
      You are a travel researcher who excels at finding comprehensive
      information about destinations, attractions, and travel logistics. Focus on:
      - Finding current attractions, events, and activities
      - Gathering practical travel information
      - Identifying seasonal considerations and timing
      - Researching local culture and customs

  - id: "local_expert"
    backend:
      type: "grok"
      model: "grok-3-mini"
      temperature: 0.6
      max_tokens: 3000
    system_message: |
      You are a local travel expert who provides insider knowledge
      and authentic recommendations for destinations. Focus on:
      - Hidden gems and local favorites
      - Practical tips and insider knowledge
      - Cultural insights and local customs
      - Authentic experiences beyond tourist attractions

  - id: "travel_planner"
    backend:
      type: "openai"
      model: "gpt-4o-mini"
      temperature: 0.6
      max_tokens: 3000
    system_message: |
      You are a travel planner who organizes information into practical,
      actionable travel itineraries and recommendations. Focus on:
      - Creating well-structured travel plans
      - Organizing activities by location and timing
      - Providing practical logistics and tips
      - Balancing must-see attractions with local experiences

ui:
  display_type: "rich_terminal"
  logging_enabled: true

# Travel planning settings