fast-agent-mcp 0.1.0__py3-none-any.whl → 0.1.2__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: fast-agent-mcp
- Version: 0.1.0
+ Version: 0.1.2
  Summary: Define, Prompt and Test MCP enabled Agents and Workflows
  Author-email: Shaun Smith <fastagent@llmindset.co.uk>, Sarmad Qadri <sarmad@lastmileai.dev>
  License: Apache License
@@ -257,7 +257,7 @@ Evaluate how different models handle Agent and MCP Server calling tasks, then bu
 
  Prompts and configurations that define your Agent Applications are stored in simple files, with minimal boilerplate, enabling simple management and version control.
 
- Chat with individual Agents and Components before, during and after workflow execution to tune and diagnose your application.
+ Chat with individual Agents and Components before, during and after workflow execution to tune and diagnose your application. Agents can request human input to get additional context for task completion.
 
  Simple model selection makes testing Model <-> MCP Server interaction painless. You can read more about the motivation behind this project [here](https://llmindset.co.uk/resources/fast-agent/)
 
@@ -391,6 +391,21 @@ This starts an interactive session, which produces a short social media post for
 
  Chains can be incorporated in other workflows, or contain other workflow elements (including other Chains). You can set an `instruction` to precisely describe its capabilities to other workflow steps if needed.
 
+ ### Human Input
+
+ Agents can request Human Input to assist with a task or get additional context:
+
+ ```python
+ @fast.agent(
+ instruction="An AI agent that assists with basic tasks. Request Human Input when needed.",
+ human_input=True,
+ )
+
+ await agent("print the next number in the sequence")
+ ```
+
+ In the example `human_input.py`, the Agent will prompt the User for additional information to complete the task.
+
  ### Parallel
 
  The Parallel Workflow sends the same message to multiple Agents simultaneously (`fan-out`), then uses the `fan-in` Agent to process the combined content.
@@ -464,7 +479,7 @@ Given a complex task, the Orchestrator uses an LLM to generate a plan to divide
  )
  ```
 
- See `orchestrator.py` in the workflow examples.
+ See the `orchestrator.py` or `agent_build.py` workflow examples.
 
  ## Agent Features
 
@@ -559,9 +574,10 @@ agent["greeter"].send("Good Evening!") # Dictionary access is supported
  instruction="instruction", # base instruction for the orchestrator
  agents=["agent1", "agent2"], # list of agent names this orchestrator can use
  model="o3-mini.high", # specify orchestrator planning model
- use_history=False, # orchestrator doesn't maintain chat history by default
+ use_history=False, # orchestrator doesn't maintain chat history (no effect).
  human_input=False, # whether orchestrator can request human input
  plan_type="full", # planning approach: "full" or "iterative"
+ max_iterations=5, # maximum number of full plan attempts, or iterations
  )
  ```
 
@@ -23,7 +23,7 @@ mcp_agent/core/agent_utils.py,sha256=yUJ-qvw5TblqqOsB1vj0Qvcz9mass9awPA6UNNvuw0A
  mcp_agent/core/enhanced_prompt.py,sha256=XraDKdIMW960KXCiMfCEPKDakbf1wHYgvHwD-9CBDi0,13011
  mcp_agent/core/error_handling.py,sha256=D3HMW5odrbJvaKqcpCGj6eDXrbFcuqYaCZz7fyYiTu4,623
  mcp_agent/core/exceptions.py,sha256=a2-JGRwFFRoQEPuAq0JC5PhAJ5TO3xVJfdS4-VN29cw,2225
- mcp_agent/core/fastagent.py,sha256=CuT50oaexYq7L5-1xHR5HfS7qYKNToH3wmBAeD8kcBY,58234
+ mcp_agent/core/fastagent.py,sha256=drf11eHH1xCiyS91v_ADWfaV8T9asm_2Vw0NXxjinpc,58730
  mcp_agent/core/proxies.py,sha256=hXDUpsgGO4xBTIjdUeXj6vULPb8sf55vAFVQh6Ybn60,4411
  mcp_agent/core/server_validation.py,sha256=_59cn16nNT4HGPwg19HgxMtHK4MsdWYDUw_CuL-5xek,1696
  mcp_agent/core/types.py,sha256=Zhi9iW7uiOfdpSt9NC0FCtGRFtJPg4mpZPK2aYi7a7M,817
@@ -55,7 +55,7 @@ mcp_agent/mcp/mcp_aggregator.py,sha256=RVsgNnSJ1IPBkqKgF_Gp-Cpv97FVBIdppPey6FRoH
  mcp_agent/mcp/mcp_connection_manager.py,sha256=WLli0w3TVcsszyD9M7zP7vLKPetnQLTf_0PGhvMm9YM,13145
  mcp_agent/mcp/stdio.py,sha256=tW075R5rQ-UlflXWFKIFDgCbWbuhKqxhiYolWvyEkFs,3985
  mcp_agent/resources/examples/data-analysis/analysis-campaign.py,sha256=EG-HhaDHltZ4hHAqhgfX_pHM2wem48aYhSIKJxyWHKc,7269
- mcp_agent/resources/examples/data-analysis/analysis.py,sha256=yRwcYob-jaqwR1vdx_gYXpfqtBN4w7creNeNgimOHa4,2443
+ mcp_agent/resources/examples/data-analysis/analysis.py,sha256=5zLoioZQNKUfXt1EXLrGX3TU06-0N06-L9Gtp9BIr6k,2611
  mcp_agent/resources/examples/data-analysis/fastagent.config.yaml,sha256=eTKGbjnTHhDTeNRPQvG_fr9OQpEZ5Y9v7X2NyCj0V70,530
  mcp_agent/resources/examples/data-analysis/mount-point/WA_Fn-UseC_-HR-Employee-Attrition.csv,sha256=pcMeOL1_r8m8MziE6xgbBrQbjl5Ijo98yycZn7O-dlk,227977
  mcp_agent/resources/examples/internal/agent.py,sha256=f-jTgYabV3nWCQm0ZP9NtSEWjx3nQbRngzArRufcELg,384
@@ -65,12 +65,12 @@ mcp_agent/resources/examples/mcp_researcher/researcher-eval.py,sha256=kNPjIU-JwE
  mcp_agent/resources/examples/researcher/fastagent.config.yaml,sha256=2_VXZneckR6zk6RWzzL-smV_oWmgg4uSkLWqZv8jF0I,1995
  mcp_agent/resources/examples/researcher/researcher-eval.py,sha256=kNPjIU-JwE0oIBQKwhv6lZsUF_SPtYVkiEEbY1ZVZxk,1807
  mcp_agent/resources/examples/researcher/researcher.py,sha256=jPRafm7jbpHKkX_dQiYGG3Sw-e1Dm86q-JZT-WZDhM0,1425
- mcp_agent/resources/examples/workflows/agent_build.py,sha256=vdjS02rZR88RU53WYzXxPscfFNEFFe_niHYE_i49I8Q,2396
+ mcp_agent/resources/examples/workflows/agent_build.py,sha256=ioG4X8IbR8wwja8Zdncsk8YAu0VD2Xt1Vhr7saNJCZQ,2855
  mcp_agent/resources/examples/workflows/chaining.py,sha256=1G_0XBcFkSJCOXb6N_iXWlSc_oGAlhENR0k_CN1vJKI,1208
  mcp_agent/resources/examples/workflows/evaluator.py,sha256=3XmW1mjImlaWb0c5FWHYS9yP8nVGTbEdJySAoWXwrDg,3109
  mcp_agent/resources/examples/workflows/fastagent.config.yaml,sha256=k2AiapOcK42uqG2nWDVvnSLqN4okQIQZK0FTbZufBpY,809
  mcp_agent/resources/examples/workflows/human_input.py,sha256=c8cBdLEPbaMXddFwsfN3Z7RFs5PZXsdrjANfvq1VTPM,605
- mcp_agent/resources/examples/workflows/orchestrator.py,sha256=5TGFWrRQiTCdYY738cyd_OzZc7vckYkk1Up9VejFXB0,2574
+ mcp_agent/resources/examples/workflows/orchestrator.py,sha256=oyKzmLA1z00wbAwDwBCthJ_qJx4fai6GAJpeOXDR-bE,2569
  mcp_agent/resources/examples/workflows/parallel.py,sha256=pLbQrtXfbdYqMVddxtg5dZnBnm5Wo2mXlIa1Vf2F1FQ,3096
  mcp_agent/resources/examples/workflows/router.py,sha256=XT_ewCrxPxdUTMCYQGw34qZQ3GGu8TYY_v5Lige8By4,1707
  mcp_agent/telemetry/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
@@ -92,15 +92,15 @@ mcp_agent/workflows/intent_classifier/intent_classifier_llm_anthropic.py,sha256=
  mcp_agent/workflows/intent_classifier/intent_classifier_llm_openai.py,sha256=zj76WlTYnSCYjBQ_IDi5vFBQGmNwYaoUq1rT730sY98,1940
  mcp_agent/workflows/llm/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
  mcp_agent/workflows/llm/augmented_llm.py,sha256=Hyx-jwgbMjE_WQ--YjIUvdj6HAgX36IvXBesGy6uic0,25884
- mcp_agent/workflows/llm/augmented_llm_anthropic.py,sha256=iHfbn7bwC-ICTajEhGwg9vP-UEj681kNmZ9Cv6H05s4,22968
+ mcp_agent/workflows/llm/augmented_llm_anthropic.py,sha256=XZmumX-og07VR4O2TnEYQ9ZPwGgzLWt3uq6MII-tjnI,23076
  mcp_agent/workflows/llm/augmented_llm_openai.py,sha256=a95Q4AFiVw36bXMgYNLFrC2zyDmHERWwkjxJFHlL6JU,25061
  mcp_agent/workflows/llm/llm_selector.py,sha256=G7pIybuBDwtmyxUDov_QrNYH2FoI0qFRu2JfoxWUF5Y,11045
  mcp_agent/workflows/llm/model_factory.py,sha256=7zTJrO2ReHa_6dfh_gY6xO8dTySqGFCKlOG9-AMJ-i8,6920
  mcp_agent/workflows/llm/prompt_utils.py,sha256=EY3eddqnmc_YDUQJFysPnpTH6hr4r2HneeEmX76P8TQ,4948
  mcp_agent/workflows/orchestrator/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- mcp_agent/workflows/orchestrator/orchestrator.py,sha256=nyn0vTjUz-lea7nIYY-aoVWOKB2ceNNV4x4z92bP3CI,23638
- mcp_agent/workflows/orchestrator/orchestrator_models.py,sha256=xTl2vUIqdLPvDAnqA485Hf_A3DD48TWhAbo-jfGrmRE,7182
- mcp_agent/workflows/orchestrator/orchestrator_prompts.py,sha256=eJSQThfd6Jvr1jTDx104sJI5R684yE55L_edCiWERsQ,6153
+ mcp_agent/workflows/orchestrator/orchestrator.py,sha256=Cu8cfDoTpT_FhGJp-T4NnCVvjkyDO1sbEJ7oKamK47k,26021
+ mcp_agent/workflows/orchestrator/orchestrator_models.py,sha256=1ldku1fYA_hu2F6K4l2C96mAdds05VibtSzSQrGm3yw,7321
+ mcp_agent/workflows/orchestrator/orchestrator_prompts.py,sha256=EXKEI174sshkZyPPEnWbwwNafzSPuA39MXL7iqG9cWc,9106
  mcp_agent/workflows/parallel/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
  mcp_agent/workflows/parallel/fan_in.py,sha256=EivpUL5-qftctws-tlfwmYS1QeSwr07POIbBUbwvwOk,13184
  mcp_agent/workflows/parallel/fan_out.py,sha256=J-yezgjzAWxfueW_Qcgwoet4PFDRIh0h4m48lIbFA4c,7023
@@ -115,8 +115,8 @@ mcp_agent/workflows/swarm/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJW
  mcp_agent/workflows/swarm/swarm.py,sha256=-lAIeSWDqbGHGRPTvjiP9nIKWvxxy9DAojl9yQzO1Pw,11050
  mcp_agent/workflows/swarm/swarm_anthropic.py,sha256=pW8zFx5baUWGd5Vw3nIDF2oVOOGNorij4qvGJKdYPcs,1624
  mcp_agent/workflows/swarm/swarm_openai.py,sha256=wfteywvAGkT5bLmIxX_StHJq8144whYmCRnJASAjOes,1596
- fast_agent_mcp-0.1.0.dist-info/METADATA,sha256=hN241Foz895Wj3dxkXWKDVeeTuIWmAT1zh7Jvgpo3Bo,27257
- fast_agent_mcp-0.1.0.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
- fast_agent_mcp-0.1.0.dist-info/entry_points.txt,sha256=2IXtSmDK9XjWN__RWuRIJTgWyW17wJnJ_h-pb0pZAxo,174
- fast_agent_mcp-0.1.0.dist-info/licenses/LICENSE,sha256=cN3FxDURL9XuzE5mhK9L2paZo82LTfjwCYVT7e3j0e4,10939
- fast_agent_mcp-0.1.0.dist-info/RECORD,,
+ fast_agent_mcp-0.1.2.dist-info/METADATA,sha256=qcYR5D0SlhnnqX7er7yFF_0nEOmt4J74hbWiftzw6iI,27861
+ fast_agent_mcp-0.1.2.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
+ fast_agent_mcp-0.1.2.dist-info/entry_points.txt,sha256=2IXtSmDK9XjWN__RWuRIJTgWyW17wJnJ_h-pb0pZAxo,174
+ fast_agent_mcp-0.1.2.dist-info/licenses/LICENSE,sha256=cN3FxDURL9XuzE5mhK9L2paZo82LTfjwCYVT7e3j0e4,10939
+ fast_agent_mcp-0.1.2.dist-info/RECORD,,
@@ -497,6 +497,7 @@ class FastAgent(ContextDependent):
  request_params: Optional[Dict] = None,
  human_input: bool = False,
  plan_type: Literal["full", "iterative"] = "full",
+ max_iterations: int = 30, # Add the max_iterations parameter with default value
  ) -> Callable:
  """
  Decorator to create and register an orchestrator.
@@ -510,6 +511,7 @@ class FastAgent(ContextDependent):
  request_params: Additional request parameters for the LLM
  human_input: Whether to enable human input capabilities
  plan_type: Planning approach - "full" generates entire plan first, "iterative" plans one step at a time
+ max_iterations: Maximum number of planning iterations (default: 30)
  """
  default_instruction = """
  You are an expert planner. Given an objective task and a list of MCP servers (which are collections of tools)
@@ -517,6 +519,13 @@ class FastAgent(ContextDependent):
  which can be performed by LLMs with access to the servers or agents.
  """
 
+ # Handle request_params update with max_iterations
+ if request_params is None:
+ request_params = {"max_iterations": max_iterations}
+ elif isinstance(request_params, dict):
+ if "max_iterations" not in request_params:
+ request_params["max_iterations"] = max_iterations
+
  decorator = self._create_decorator(
  AgentType.ORCHESTRATOR,
  default_name="Orchestrator",
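The merge rule added above can be sketched in isolation. This is a minimal, hypothetical reconstruction (function name and the copy-instead-of-mutate behaviour are mine; the real decorator updates the dict in place): `max_iterations` is only used to seed `request_params` when the caller has not set it explicitly.

```python
def merge_max_iterations(request_params, max_iterations):
    """Sketch of the decorator's merge rule for max_iterations."""
    if request_params is None:
        # No params supplied: seed a fresh dict with the decorator default
        return {"max_iterations": max_iterations}
    if isinstance(request_params, dict) and "max_iterations" not in request_params:
        # Fill in the default without overriding an explicit caller value
        request_params = {**request_params, "max_iterations": max_iterations}
    return request_params

# An explicit caller value always wins over the decorator default
print(merge_max_iterations({"max_iterations": 5}, 30))
print(merge_max_iterations(None, 30))
```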
@@ -7,6 +7,10 @@ from mcp_agent.workflows.llm.augmented_llm import RequestParams
  fast = FastAgent("Data Analysis (Roots)")
 
 
+ # The sample data is under Database Contents License (DbCL) v1.0.
+ # Available here: https://www.kaggle.com/datasets/pavansubhasht/ibm-hr-analytics-attrition-dataset
+
+
  @fast.agent(
  name="data_analysis",
  instruction="""
@@ -4,6 +4,7 @@ This demonstrates creating multiple agents and an orchestrator to coordinate the
 
  import asyncio
  from mcp_agent.core.fastagent import FastAgent
+ from mcp_agent.workflows.llm.augmented_llm import RequestParams
 
  # Create the application
  fast = FastAgent("Agent Builder")
@@ -12,49 +13,70 @@ fast = FastAgent("Agent Builder")
  @fast.agent(
  "agent_expert",
  instruction="""
- You design agent workflows, using the practices from 'Building Effective Agents'. You provide concise
- specific guidance on design and composition. Prefer simple solutions, and don't nest workflows more
- than one level deep. Your ultimate goal will be to produce a single '.py' agent in the style
- shown to you that fulfils the Human's needs.
- Keep the application simple, define agents with appropriate MCP Servers, Tools and the Human Input Tool.
- The style of the program should be like the examples you have been showm, very little additional code (use
- very simple Python where necessary). """,
+ You design agent workflows, adhering to 'Building Effective Agents' (details to follow).
+
+ You provide concise specific guidance on design and composition. Prefer simple solutions,
+ and don't nest workflows more than one level deep.
+
+ Your objective is to produce a single '.py' agent in the style of the examples.
+
+ Keep the application simple, concentrating on defining Agent instructions, MCP Servers and
+ appropriate use of Workflows.
+
+ The style of the program should be like the examples you have been shown, with a minimum of
+ additional code, using only very simple Python where absolutely necessary.
+
+ Concentrate on the quality of the Agent instructions and "warmup" prompts given to them.
+
+ Keep requirements minimal: focus on building the prompts and the best workflow. The program
+ is expected to be adjusted and refined later.
+
+ If you are unsure about how to proceed, request input from the Human.
+
+ Use the filesystem tools to save your completed fastagent program, in an appropriately named '.py' file.
+
+ """,
  servers=["filesystem", "fetch"],
+ request_params=RequestParams(maxTokens=8192),
  )
  # Define worker agents
  @fast.agent(
  "requirements_capture",
  instruction="""
- You help the Human define their requirements for building Agent based systems. Keep questions short and
- simple, collaborate with the agent_expert or other agents in the workflow to refine human interaction.
- Keep requests to the Human simple and minimal. """,
+ You help the Human define their requirements for building Agent based systems.
+
+ Keep questions short, simple and minimal, always offering to complete the questioning
+ if desired. If uncertain about something, respond asking the 'agent_expert' for guidance.
+
+ Do not interrogate the Human, prefer to move the process on, as more details can be requested later
+ if needed. Remind the Human of this.
+ """,
  human_input=True,
  )
  # Define the orchestrator to coordinate the other agents
  @fast.orchestrator(
- name="orchestrator_worker",
+ name="agent_builder",
  agents=["agent_expert", "requirements_capture"],
  model="sonnet",
+ plan_type="iterative",
+ request_params=RequestParams(maxTokens=8192),
+ max_iterations=5,
  )
  async def main():
  async with fast.run() as agent:
- await agent.agent_expert("""
- - Read this paper: https://www.anthropic.com/research/building-effective-agents" to understand
- the principles of Building Effective Agents.
- - Read and examing the sample agent and workflow definitions in the current directory:
- - chaining.py - simple agent chaining example.
- - parallel.py - parallel agents example.
- - evaluator.py - evaluator optimizer example.
- - orchestrator.py - complex orchestration example.
- - router.py - workflow routing example.
- - Load the 'fastagent.config.yaml' file to see the available and configured MCP Servers.
- When producing the agent/workflow definition, keep to a simple single .py file in the style
- of the examples.
- """)
-
- await agent.orchestrator_worker(
- "Write an Agent program that fulfils the Human's needs."
- )
+ CODER_WARMUP = """
+ - Read this paper: https://www.anthropic.com/research/building-effective-agents to understand how
+ and when to use different types of Agents and Workflow types.
+
+ - Read this README https://raw.githubusercontent.com/evalstate/fast-agent/refs/heads/main/README.md file
+ to see how to use the "fast-agent" framework.
+
+ - Look at the 'fastagent.config.yaml' file to see the available and configured MCP Servers.
+
+ """
+ await agent.agent_expert(CODER_WARMUP)
+
+ await agent.agent_builder()
 
 
  if __name__ == "__main__":
@@ -45,7 +45,7 @@ fast = FastAgent("Orchestrator-Workers")
  @fast.orchestrator(
  name="orchestrate",
  agents=["finder", "writer", "proofreader"],
- plan_type="iterative",
+ plan_type="full",
  )
  async def main():
  async with fast.run() as agent:
@@ -246,15 +246,16 @@ class AnthropicAugmentedLLM(AugmentedLLM[MessageParam, Message]):
  style="dim green italic",
  )
 
- await self.show_assistant_message(message_text)
-
  # Process all tool calls and collect results
  tool_results = []
- for content in tool_uses:
+ for i, content in enumerate(tool_uses):
  tool_name = content.name
  tool_args = content.input
  tool_use_id = content.id
 
+ if i == 0: # Only show message for first tool use
+ await self.show_assistant_message(message_text, tool_name)
+
  self.show_tool_call(available_tools, tool_name, tool_args)
  tool_call_request = CallToolRequest(
  method="tools/call",
@@ -32,6 +32,7 @@ from mcp_agent.workflows.orchestrator.orchestrator_models import (
  from mcp_agent.workflows.orchestrator.orchestrator_prompts import (
  FULL_PLAN_PROMPT_TEMPLATE,
  ITERATIVE_PLAN_PROMPT_TEMPLATE,
+ SYNTHESIZE_INCOMPLETE_PLAN_TEMPLATE, # Add the missing import
  SYNTHESIZE_PLAN_PROMPT_TEMPLATE,
  TASK_PROMPT_TEMPLATE,
  )
@@ -90,7 +91,7 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  # Initialize with orchestrator-specific defaults
  orchestrator_params = RequestParams(
  use_history=False, # Orchestrator doesn't support history
- max_iterations=30, # Higher default for complex tasks
+ max_iterations=5, # Reduced from 30 to prevent excessive iterations
  maxTokens=8192, # Higher default for planning
  parallel_tool_calls=True,
  )
@@ -126,7 +127,6 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  self.agents = {agent.name: agent for agent in available_agents}
 
  # Initialize logger
- self.logger = logger
  self.name = name
 
  # Store agents by name - COMPLETE REWRITE OF AGENT STORAGE
@@ -213,8 +213,12 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  ) -> PlanResult:
  """Execute task with result chaining between steps"""
  iterations = 0
+ total_steps_executed = 0
 
  params = self.get_request_params(request_params)
+ max_steps = getattr(
+ params, "max_steps", params.max_iterations * 5
+ ) # Default to 5× max_iterations
 
  # Single progress event for orchestration start
  model = await self.select_model(params) or "unknown-model"
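The step-budget fallback above can be demonstrated standalone. The class below is a hypothetical stand-in for `RequestParams` (the real one is a pydantic model and does not define a `max_steps` field, so in practice the `getattr` default usually applies); the point is that `getattr` with a default yields `5 × max_iterations` whenever `max_steps` is absent.

```python
class FakeParams:
    """Hypothetical stand-in for RequestParams; only what the sketch needs."""
    def __init__(self, max_iterations, max_steps=None):
        self.max_iterations = max_iterations
        if max_steps is not None:
            self.max_steps = max_steps

def step_budget(params):
    # Falls back to 5x max_iterations when no explicit max_steps attribute exists
    return getattr(params, "max_steps", params.max_iterations * 5)

print(step_budget(FakeParams(5)))               # no max_steps attribute -> 25
print(step_budget(FakeParams(5, max_steps=7)))  # explicit attribute wins -> 7
```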
@@ -231,6 +235,9 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  )
 
  plan_result = PlanResult(objective=objective, step_results=[])
+ plan_result.max_iterations_reached = (
+ False # Add a flag to track if we hit the limit
+ )
 
  while iterations < params.max_iterations:
  if self.plan_type == "iterative":
@@ -255,30 +262,34 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  plan_result.plan = plan
 
  if plan.is_complete:
- # Only mark as complete if we have actually executed some steps
- if len(plan_result.step_results) > 0:
- plan_result.is_complete = True
-
- # Synthesize final result into a single message
- # Use the structured XML format for better context
- synthesis_prompt = SYNTHESIZE_PLAN_PROMPT_TEMPLATE.format(
- plan_result=format_plan_result(plan_result)
- )
+ # Modified: Remove the requirement for steps to be executed
+ plan_result.is_complete = True
 
- # Use planner directly - planner already has PLANNING verb
- plan_result.result = await self.planner.generate_str(
- message=synthesis_prompt,
- request_params=params.model_copy(update={"max_iterations": 1}),
- )
+ # Synthesize final result into a single message
+ # Use the structured XML format for better context
+ synthesis_prompt = SYNTHESIZE_PLAN_PROMPT_TEMPLATE.format(
+ plan_result=format_plan_result(plan_result)
+ )
 
- return plan_result
- else:
- # Don't allow completion without executing steps
- plan.is_complete = False
+ # Use planner directly - planner already has PLANNING verb
+ plan_result.result = await self.planner.generate_str(
+ message=synthesis_prompt,
+ request_params=params.model_copy(update={"max_iterations": 1}),
+ )
+
+ return plan_result
 
  # Execute each step, collecting results
  # Note that in iterative mode this will only be a single step
  for step in plan.steps:
+ # Check if we've hit the step limit
+ if total_steps_executed >= max_steps:
+ self.logger.warning(
+ f"Reached maximum step limit ({max_steps}) without completing objective."
+ )
+ plan_result.max_steps_reached = True
+ break
+
  step_result = await self._execute_step(
  step=step,
  previous_result=plan_result,
@@ -286,16 +297,48 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  )
 
  plan_result.add_step_result(step_result)
+ total_steps_executed += 1
+
+ # Check for step limit after executing steps
+ if total_steps_executed >= max_steps:
+ plan_result.max_iterations_reached = True
+ break
 
  logger.debug(
  f"Iteration {iterations}: Intermediate plan result:", data=plan_result
  )
+
+ # Check for diminishing returns
+ if iterations > 2 and len(plan.steps) <= 1:
+ # If plan has 0-1 steps after multiple iterations, might be done
+ self.logger.info("Minimal new steps detected, marking plan as complete")
+ plan_result.is_complete = True
+ break
+
  iterations += 1
 
- raise RuntimeError(
- f"Task failed to complete in {params.max_iterations} iterations"
+ # If we get here, we've hit the iteration limit without completing
+ self.logger.warning(
+ f"Failed to complete in {params.max_iterations} iterations."
+ )
+
+ # Mark that we hit the iteration limit
+ plan_result.max_iterations_reached = True
+
+ # Synthesize what we have so far, but use a different prompt that explains the incomplete status
+ synthesis_prompt = SYNTHESIZE_INCOMPLETE_PLAN_TEMPLATE.format(
+ plan_result=format_plan_result(plan_result),
+ max_iterations=params.max_iterations,
+ )
+
+ # Generate a final synthesis that acknowledges the incomplete status
+ plan_result.result = await self.planner.generate_str(
+ message=synthesis_prompt,
+ request_params=params.model_copy(update={"max_iterations": 1}),
  )
 
+ return plan_result
+
  async def _execute_step(
  self,
  step: Step,
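The diminishing-returns check introduced in this hunk is a small pure predicate, sketched here standalone (function and argument names are hypothetical): once more than two iterations have run and the latest plan proposes at most one step, the loop treats the objective as effectively complete rather than burning further planning calls.

```python
def diminishing_returns(iterations, num_plan_steps):
    """Sketch of the orchestrator's early-exit heuristic: after a few
    iterations, a plan adding 0-1 new steps signals the work is done."""
    return iterations > 2 and num_plan_steps <= 1

# Early iterations never trigger the heuristic, regardless of plan size
print(diminishing_returns(1, 0))  # too early
print(diminishing_returns(3, 1))  # late and nearly-empty plan
```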
@@ -399,18 +442,11 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  params = self.get_request_params(request_params)
  params = params.model_copy(update={"use_history": False})
 
- # Debug: Print agent names before formatting
- print("\n------ AGENT NAMES BEFORE FORMATTING ------")
- for agent_name in self.agents.keys():
- print(f"Agent name: '{agent_name}'")
- print("------------------------------------------\n")
-
  # Format agents without numeric prefixes for cleaner XML
  agent_formats = []
  for agent_name in self.agents.keys():
  formatted = self._format_agent_info(agent_name)
  agent_formats.append(formatted)
- print(f"Formatted agent '{agent_name}':\n{formatted[:200]}...\n")
 
  agents = "\n".join(agent_formats)
 
@@ -423,10 +459,21 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  else "Plan Status: In Progress"
  )
 
+ # Fix the iteration counting display
+ max_iterations = params.max_iterations
+ # Simplified iteration counting logic
+ current_iteration = len(plan_result.step_results)
+ current_iteration = min(current_iteration, max_iterations - 1) # Cap at max-1
+ iterations_remaining = max(
+ 0, max_iterations - current_iteration - 1
+ ) # Ensure non-negative
+ iterations_info = f"Planning Budget: Iteration {current_iteration + 1} of {max_iterations} (with {iterations_remaining} remaining)"
+
  prompt = FULL_PLAN_PROMPT_TEMPLATE.format(
  objective=objective,
  plan_result=format_plan_result(plan_result),
  plan_status=plan_status,
+ iterations_info=iterations_info,
  agents=agents,
  )
 
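The budget-display arithmetic above is easy to get wrong at the boundaries, so here is a standalone sketch (function name is mine): the executed-step count is capped at `max_iterations - 1` and the remainder clamped at zero, so the rendered line never claims an iteration beyond the budget or a negative remainder.

```python
def planning_budget_line(steps_executed, max_iterations):
    """Sketch of the 'Planning Budget' string built for the full-plan prompt."""
    current = min(steps_executed, max_iterations - 1)  # cap at max - 1
    remaining = max(0, max_iterations - current - 1)   # never negative
    return (f"Planning Budget: Iteration {current + 1} of {max_iterations} "
            f"(with {remaining} remaining)")

# Even with more executed steps than budgeted, the display stays in range
print(planning_budget_line(0, 5))
print(planning_budget_line(9, 5))
```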
@@ -507,10 +554,17 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  else "Plan Status: In Progress"
  )
 
+ # Add max_iterations info for the LLM
+ max_iterations = params.max_iterations
+ current_iteration = len(plan_result.step_results)
+ iterations_remaining = max_iterations - current_iteration
+ iterations_info = f"Planning Budget: {iterations_remaining} of {max_iterations} iterations remaining"
+
  prompt = ITERATIVE_PLAN_PROMPT_TEMPLATE.format(
  objective=objective,
  plan_result=format_plan_result(plan_result),
  plan_status=plan_status,
+ iterations_info=iterations_info,
  agents=agents,
  )
 
@@ -102,6 +102,9 @@ class PlanResult(BaseModel):
  is_complete: bool = False
  """Whether the overall plan objective is complete"""
 
+ max_iterations_reached: bool = False
+ """Whether the plan execution reached the maximum number of iterations without completing"""
+
  result: str | None = None
  """Result of executing the plan"""
 
@@ -1,3 +1,8 @@
+ """
+ Prompt templates used by the Orchestrator workflow.
+ """
+
+ # Templates for formatting results
  TASK_RESULT_TEMPLATE = """Task: {task_description}
  Result: {task_result}"""
 
@@ -30,6 +35,7 @@ You can analyze results from the previous steps already executed to decide if th
 
  <fastagent:status>
  {plan_status}
+ {iterations_info}
  </fastagent:status>
  </fastagent:data>
 
@@ -38,6 +44,10 @@ If the previous results achieve the objective, return is_complete=True.
  Otherwise, generate remaining steps needed.
 
  <fastagent:instruction>
+ You are operating in "full plan" mode, where you generate a complete plan with ALL remaining steps needed.
+ After receiving your plan, the system will execute ALL steps in your plan before asking for your input again.
+ If the plan needs multiple iterations, you'll be called again with updated results.
+
  Generate a plan with all remaining steps needed.
  Steps are sequential, but each Step can have parallel subtasks.
  For each Step, specify a description of the step and independent subtasks that can run in parallel.
@@ -68,6 +78,14 @@ Return your response in the following JSON structure:
  "is_complete": false
  }}
 
+ Set "is_complete" to true when ANY of these conditions are met:
+ 1. The objective has been achieved in full or substantively
+ 2. The remaining work is minor or trivial compared to what's been accomplished
+ 3. Additional steps provide minimal value toward the core objective
+ 4. The plan has gathered sufficient information to answer the original request
+
+ Be decisive - avoid excessive planning steps that add little value. It's better to complete a plan early than to continue with marginal improvements. Focus on the core intent of the objective, not perfection.
+
  You must respond with valid JSON only, with no triple backticks. No markdown formatting.
  No extra text. Do not wrap in ```json code fences.
  </fastagent:instruction>
@@ -92,6 +110,7 @@ to decide what to do next.
 
  <fastagent:status>
  {plan_status}
+ {iterations_info}
  </fastagent:status>
  </fastagent:data>
 
@@ -100,6 +119,9 @@ If the previous results achieve the objective, return is_complete=True.
  Otherwise, generate the next Step.
 
  <fastagent:instruction>
+ You are operating in "iterative plan" mode, where you generate ONLY ONE STEP at a time.
+ After each step is executed, you'll be called again to determine the next step based on updated results.
+
  Generate the next step, by specifying a description of the step and independent subtasks that can run in parallel:
  For each subtask specify:
  1. Clear description of the task that an LLM can execute
@@ -120,6 +142,14 @@ Return your response in the following JSON structure:
  "is_complete": false
  }}
 
+ Set "is_complete" to true when ANY of these conditions are met:
+ 1. The objective has been achieved in full or substantively
+ 2. The remaining work is minor or trivial compared to what's been accomplished
+ 3. Additional steps provide minimal value toward the core objective
+ 4. The plan has gathered sufficient information to answer the original request
+
+ Be decisive - avoid excessive planning steps that add little value. It's better to complete a plan early than to continue with marginal improvements. Focus on the core intent of the objective, not perfection.
+
  You must respond with valid JSON only, with no triple backticks. No markdown formatting.
  No extra text. Do not wrap in ```json code fences.
  </fastagent:instruction>
@@ -184,5 +214,35 @@ Create a comprehensive final response that addresses the original objective.
  Integrate all the information gathered across all plan steps.
  Provide a clear, complete answer that achieves the objective.
  Focus on delivering value through your synthesis, not just summarizing.
+
+ If the plan was marked as incomplete but the maximum number of iterations was reached,
+ make sure to state clearly what was accomplished and what remains to be done.
+ </fastagent:instruction>
+ """
+
+ # New template for incomplete plans due to iteration limits
+ SYNTHESIZE_INCOMPLETE_PLAN_TEMPLATE = """You need to synthesize the results of all completed plan steps into a final response.
+
+ <fastagent:data>
+ <fastagent:plan-results>
+ {plan_result}
+ </fastagent:plan-results>
+ </fastagent:data>
+
+ <fastagent:status>
+ The maximum number of iterations ({max_iterations}) was reached before the objective could be completed.
+ </fastagent:status>
+
+ <fastagent:instruction>
+ Create a comprehensive response that summarizes what was accomplished so far.
+ The objective was NOT fully completed due to reaching the maximum number of iterations.
+
+ In your response:
+ 1. Clearly state that the objective was not fully completed
+ 2. Summarize what WAS accomplished across all the executed steps
+ 3. Identify what remains to be done to complete the objective
+ 4. Organize the information to provide maximum value despite being incomplete
+
+ Focus on being transparent about the incomplete status while providing as much value as possible.
  </fastagent:instruction>
  """