fast-agent-mcp 0.1.0__py3-none-any.whl → 0.1.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: fast-agent-mcp
- Version: 0.1.0
+ Version: 0.1.1
  Summary: Define, Prompt and Test MCP enabled Agents and Workflows
  Author-email: Shaun Smith <fastagent@llmindset.co.uk>, Sarmad Qadri <sarmad@lastmileai.dev>
  License: Apache License
@@ -257,7 +257,7 @@ Evaluate how different models handle Agent and MCP Server calling tasks, then bu

  Prompts and configurations that define your Agent Applications are stored in simple files, with minimal boilerplate, enabling simple management and version control.

- Chat with individual Agents and Components before, during and after workflow execution to tune and diagnose your application.
+ Chat with individual Agents and Components before, during and after workflow execution to tune and diagnose your application. Agents can request human input to get additional context for task completion.

  Simple model selection makes testing Model <-> MCP Server interaction painless. You can read more about the motivation behind this project [here](https://llmindset.co.uk/resources/fast-agent/)

@@ -391,6 +391,21 @@ This starts an interactive session, which produces a short social media post for

  Chains can be incorporated in other workflows, or contain other workflow elements (including other Chains). You can set an `instruction` to precisely describe its capabilities to other workflow steps if needed.

+ ### Human Input
+
+ Agents can request Human Input to assist with a task or get additional context:
+
+ ```python
+ @fast.agent(
+     instruction="An AI agent that assists with basic tasks. Request Human Input when needed.",
+     human_input=True,
+ )
+
+ await agent("print the next number in the sequence")
+ ```
+
+ In the example `human_input.py`, the Agent will prompt the User for additional information to complete the task.
+
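The `human_input=True` flag routes a special tool call back to the console. A minimal standalone sketch of that pattern follows; the function and parameter names here are hypothetical illustrations, not the fast-agent API:

```python
# Hypothetical sketch of the human-input pattern: the framework exposes a
# "human input" tool and routes calls to it back to the user's console.

def ask_human(prompt: str, reader=input) -> str:
    """Forward a human-input request to the console (or any reader callable)."""
    return reader(f"[HUMAN INPUT REQUESTED] {prompt}\n> ")

def run_agent(task: str, reader=input) -> str:
    # A real agent lets the LLM decide when to invoke the tool; here the
    # agent always asks for the missing context before answering.
    context = ask_human(f"I need more information to '{task}'.", reader)
    return f"Completed '{task}' using human-supplied context: {context}"

# Simulate the console with a lambda instead of blocking on stdin.
print(run_agent("print the next number in the sequence",
                reader=lambda _: "the sequence is 1, 1, 2, 3, 5"))
```

Passing a `reader` callable keeps the sketch testable; the real framework performs the equivalent routing internally.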
  ### Parallel

  The Parallel Workflow sends the same message to multiple Agents simultaneously (`fan-out`), then uses the `fan-in` Agent to process the combined content.
@@ -464,7 +479,7 @@ Given a complex task, the Orchestrator uses an LLM to generate a plan to divide
  )
  ```

- See `orchestrator.py` in the workflow examples.
+ See the `orchestrator.py` or `agent_build.py` workflow example.

  ## Agent Features

@@ -559,9 +574,10 @@ agent["greeter"].send("Good Evening!") # Dictionary access is supported
  instruction="instruction", # base instruction for the orchestrator
  agents=["agent1", "agent2"], # list of agent names this orchestrator can use
  model="o3-mini.high", # specify orchestrator planning model
- use_history=False, # orchestrator doesn't maintain chat history by default
+ use_history=False, # orchestrator doesn't maintain chat history (no effect).
  human_input=False, # whether orchestrator can request human input
  plan_type="full", # planning approach: "full" or "iterative"
+ max_iterations=5, # maximum number of full plan attempts, or iterations
  )
  ```

@@ -23,7 +23,7 @@ mcp_agent/core/agent_utils.py,sha256=yUJ-qvw5TblqqOsB1vj0Qvcz9mass9awPA6UNNvuw0A
  mcp_agent/core/enhanced_prompt.py,sha256=XraDKdIMW960KXCiMfCEPKDakbf1wHYgvHwD-9CBDi0,13011
  mcp_agent/core/error_handling.py,sha256=D3HMW5odrbJvaKqcpCGj6eDXrbFcuqYaCZz7fyYiTu4,623
  mcp_agent/core/exceptions.py,sha256=a2-JGRwFFRoQEPuAq0JC5PhAJ5TO3xVJfdS4-VN29cw,2225
- mcp_agent/core/fastagent.py,sha256=CuT50oaexYq7L5-1xHR5HfS7qYKNToH3wmBAeD8kcBY,58234
+ mcp_agent/core/fastagent.py,sha256=drf11eHH1xCiyS91v_ADWfaV8T9asm_2Vw0NXxjinpc,58730
  mcp_agent/core/proxies.py,sha256=hXDUpsgGO4xBTIjdUeXj6vULPb8sf55vAFVQh6Ybn60,4411
  mcp_agent/core/server_validation.py,sha256=_59cn16nNT4HGPwg19HgxMtHK4MsdWYDUw_CuL-5xek,1696
  mcp_agent/core/types.py,sha256=Zhi9iW7uiOfdpSt9NC0FCtGRFtJPg4mpZPK2aYi7a7M,817
@@ -55,7 +55,7 @@ mcp_agent/mcp/mcp_aggregator.py,sha256=RVsgNnSJ1IPBkqKgF_Gp-Cpv97FVBIdppPey6FRoH
  mcp_agent/mcp/mcp_connection_manager.py,sha256=WLli0w3TVcsszyD9M7zP7vLKPetnQLTf_0PGhvMm9YM,13145
  mcp_agent/mcp/stdio.py,sha256=tW075R5rQ-UlflXWFKIFDgCbWbuhKqxhiYolWvyEkFs,3985
  mcp_agent/resources/examples/data-analysis/analysis-campaign.py,sha256=EG-HhaDHltZ4hHAqhgfX_pHM2wem48aYhSIKJxyWHKc,7269
- mcp_agent/resources/examples/data-analysis/analysis.py,sha256=yRwcYob-jaqwR1vdx_gYXpfqtBN4w7creNeNgimOHa4,2443
+ mcp_agent/resources/examples/data-analysis/analysis.py,sha256=5zLoioZQNKUfXt1EXLrGX3TU06-0N06-L9Gtp9BIr6k,2611
  mcp_agent/resources/examples/data-analysis/fastagent.config.yaml,sha256=eTKGbjnTHhDTeNRPQvG_fr9OQpEZ5Y9v7X2NyCj0V70,530
  mcp_agent/resources/examples/data-analysis/mount-point/WA_Fn-UseC_-HR-Employee-Attrition.csv,sha256=pcMeOL1_r8m8MziE6xgbBrQbjl5Ijo98yycZn7O-dlk,227977
  mcp_agent/resources/examples/internal/agent.py,sha256=f-jTgYabV3nWCQm0ZP9NtSEWjx3nQbRngzArRufcELg,384
@@ -65,12 +65,12 @@ mcp_agent/resources/examples/mcp_researcher/researcher-eval.py,sha256=kNPjIU-JwE
  mcp_agent/resources/examples/researcher/fastagent.config.yaml,sha256=2_VXZneckR6zk6RWzzL-smV_oWmgg4uSkLWqZv8jF0I,1995
  mcp_agent/resources/examples/researcher/researcher-eval.py,sha256=kNPjIU-JwE0oIBQKwhv6lZsUF_SPtYVkiEEbY1ZVZxk,1807
  mcp_agent/resources/examples/researcher/researcher.py,sha256=jPRafm7jbpHKkX_dQiYGG3Sw-e1Dm86q-JZT-WZDhM0,1425
- mcp_agent/resources/examples/workflows/agent_build.py,sha256=vdjS02rZR88RU53WYzXxPscfFNEFFe_niHYE_i49I8Q,2396
+ mcp_agent/resources/examples/workflows/agent_build.py,sha256=HxsTWkmcFh_X3UhAchvpwp7P4IVCBS-B-hK6e_DuNnM,2748
  mcp_agent/resources/examples/workflows/chaining.py,sha256=1G_0XBcFkSJCOXb6N_iXWlSc_oGAlhENR0k_CN1vJKI,1208
  mcp_agent/resources/examples/workflows/evaluator.py,sha256=3XmW1mjImlaWb0c5FWHYS9yP8nVGTbEdJySAoWXwrDg,3109
  mcp_agent/resources/examples/workflows/fastagent.config.yaml,sha256=k2AiapOcK42uqG2nWDVvnSLqN4okQIQZK0FTbZufBpY,809
  mcp_agent/resources/examples/workflows/human_input.py,sha256=c8cBdLEPbaMXddFwsfN3Z7RFs5PZXsdrjANfvq1VTPM,605
- mcp_agent/resources/examples/workflows/orchestrator.py,sha256=5TGFWrRQiTCdYY738cyd_OzZc7vckYkk1Up9VejFXB0,2574
+ mcp_agent/resources/examples/workflows/orchestrator.py,sha256=oyKzmLA1z00wbAwDwBCthJ_qJx4fai6GAJpeOXDR-bE,2569
  mcp_agent/resources/examples/workflows/parallel.py,sha256=pLbQrtXfbdYqMVddxtg5dZnBnm5Wo2mXlIa1Vf2F1FQ,3096
  mcp_agent/resources/examples/workflows/router.py,sha256=XT_ewCrxPxdUTMCYQGw34qZQ3GGu8TYY_v5Lige8By4,1707
  mcp_agent/telemetry/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
@@ -92,15 +92,15 @@ mcp_agent/workflows/intent_classifier/intent_classifier_llm_anthropic.py,sha256=
  mcp_agent/workflows/intent_classifier/intent_classifier_llm_openai.py,sha256=zj76WlTYnSCYjBQ_IDi5vFBQGmNwYaoUq1rT730sY98,1940
  mcp_agent/workflows/llm/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
  mcp_agent/workflows/llm/augmented_llm.py,sha256=Hyx-jwgbMjE_WQ--YjIUvdj6HAgX36IvXBesGy6uic0,25884
- mcp_agent/workflows/llm/augmented_llm_anthropic.py,sha256=iHfbn7bwC-ICTajEhGwg9vP-UEj681kNmZ9Cv6H05s4,22968
+ mcp_agent/workflows/llm/augmented_llm_anthropic.py,sha256=XZmumX-og07VR4O2TnEYQ9ZPwGgzLWt3uq6MII-tjnI,23076
  mcp_agent/workflows/llm/augmented_llm_openai.py,sha256=a95Q4AFiVw36bXMgYNLFrC2zyDmHERWwkjxJFHlL6JU,25061
  mcp_agent/workflows/llm/llm_selector.py,sha256=G7pIybuBDwtmyxUDov_QrNYH2FoI0qFRu2JfoxWUF5Y,11045
  mcp_agent/workflows/llm/model_factory.py,sha256=7zTJrO2ReHa_6dfh_gY6xO8dTySqGFCKlOG9-AMJ-i8,6920
  mcp_agent/workflows/llm/prompt_utils.py,sha256=EY3eddqnmc_YDUQJFysPnpTH6hr4r2HneeEmX76P8TQ,4948
  mcp_agent/workflows/orchestrator/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- mcp_agent/workflows/orchestrator/orchestrator.py,sha256=nyn0vTjUz-lea7nIYY-aoVWOKB2ceNNV4x4z92bP3CI,23638
- mcp_agent/workflows/orchestrator/orchestrator_models.py,sha256=xTl2vUIqdLPvDAnqA485Hf_A3DD48TWhAbo-jfGrmRE,7182
- mcp_agent/workflows/orchestrator/orchestrator_prompts.py,sha256=eJSQThfd6Jvr1jTDx104sJI5R684yE55L_edCiWERsQ,6153
+ mcp_agent/workflows/orchestrator/orchestrator.py,sha256=bpTx5V9tF8tNsltWvFSDbj-u5A3aU_tofrnUp8EZ1t4,26049
+ mcp_agent/workflows/orchestrator/orchestrator_models.py,sha256=1ldku1fYA_hu2F6K4l2C96mAdds05VibtSzSQrGm3yw,7321
+ mcp_agent/workflows/orchestrator/orchestrator_prompts.py,sha256=TBFhAkbcwPCt51lEdl7h1fLFLXqu9_jWLzG4EWpJqZI,8230
  mcp_agent/workflows/parallel/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
  mcp_agent/workflows/parallel/fan_in.py,sha256=EivpUL5-qftctws-tlfwmYS1QeSwr07POIbBUbwvwOk,13184
  mcp_agent/workflows/parallel/fan_out.py,sha256=J-yezgjzAWxfueW_Qcgwoet4PFDRIh0h4m48lIbFA4c,7023
@@ -115,8 +115,8 @@ mcp_agent/workflows/swarm/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJW
  mcp_agent/workflows/swarm/swarm.py,sha256=-lAIeSWDqbGHGRPTvjiP9nIKWvxxy9DAojl9yQzO1Pw,11050
  mcp_agent/workflows/swarm/swarm_anthropic.py,sha256=pW8zFx5baUWGd5Vw3nIDF2oVOOGNorij4qvGJKdYPcs,1624
  mcp_agent/workflows/swarm/swarm_openai.py,sha256=wfteywvAGkT5bLmIxX_StHJq8144whYmCRnJASAjOes,1596
- fast_agent_mcp-0.1.0.dist-info/METADATA,sha256=hN241Foz895Wj3dxkXWKDVeeTuIWmAT1zh7Jvgpo3Bo,27257
- fast_agent_mcp-0.1.0.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
- fast_agent_mcp-0.1.0.dist-info/entry_points.txt,sha256=2IXtSmDK9XjWN__RWuRIJTgWyW17wJnJ_h-pb0pZAxo,174
- fast_agent_mcp-0.1.0.dist-info/licenses/LICENSE,sha256=cN3FxDURL9XuzE5mhK9L2paZo82LTfjwCYVT7e3j0e4,10939
- fast_agent_mcp-0.1.0.dist-info/RECORD,,
+ fast_agent_mcp-0.1.1.dist-info/METADATA,sha256=pAhnRdhRDccoGazq-Vrtst1AWKgkGNR9Fe8TB8A9TJc,27861
+ fast_agent_mcp-0.1.1.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
+ fast_agent_mcp-0.1.1.dist-info/entry_points.txt,sha256=2IXtSmDK9XjWN__RWuRIJTgWyW17wJnJ_h-pb0pZAxo,174
+ fast_agent_mcp-0.1.1.dist-info/licenses/LICENSE,sha256=cN3FxDURL9XuzE5mhK9L2paZo82LTfjwCYVT7e3j0e4,10939
+ fast_agent_mcp-0.1.1.dist-info/RECORD,,
@@ -497,6 +497,7 @@ class FastAgent(ContextDependent):
  request_params: Optional[Dict] = None,
  human_input: bool = False,
  plan_type: Literal["full", "iterative"] = "full",
+ max_iterations: int = 30, # Add the max_iterations parameter with default value
  ) -> Callable:
  """
  Decorator to create and register an orchestrator.
@@ -510,6 +511,7 @@ class FastAgent(ContextDependent):
  request_params: Additional request parameters for the LLM
  human_input: Whether to enable human input capabilities
  plan_type: Planning approach - "full" generates entire plan first, "iterative" plans one step at a time
+ max_iterations: Maximum number of planning iterations (default: 30)
  """
  default_instruction = """
  You are an expert planner. Given an objective task and a list of MCP servers (which are collections of tools)
@@ -517,6 +519,13 @@ class FastAgent(ContextDependent):
  which can be performed by LLMs with access to the servers or agents.
  """

+ # Handle request_params update with max_iterations
+ if request_params is None:
+     request_params = {"max_iterations": max_iterations}
+ elif isinstance(request_params, dict):
+     if "max_iterations" not in request_params:
+         request_params["max_iterations"] = max_iterations
+
  decorator = self._create_decorator(
  AgentType.ORCHESTRATOR,
  default_name="Orchestrator",
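The added block injects `max_iterations` into `request_params` only when the caller has not already set it. The same merge restated as a small standalone function (a sketch, assuming dict-based params as in the decorator above):

```python
# Standalone restatement of the merge above: inject max_iterations only when
# the caller has not already set it; an explicit caller value always wins.

def merge_max_iterations(request_params, max_iterations):
    if request_params is None:
        return {"max_iterations": max_iterations}
    if isinstance(request_params, dict) and "max_iterations" not in request_params:
        # Copy rather than mutate, so the caller's dict is left untouched.
        request_params = {**request_params, "max_iterations": max_iterations}
    return request_params

print(merge_max_iterations(None, 30))                   # {'max_iterations': 30}
print(merge_max_iterations({"maxTokens": 8192}, 30))    # both keys present
print(merge_max_iterations({"max_iterations": 5}, 30))  # caller's 5 wins
```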
@@ -7,6 +7,10 @@ from mcp_agent.workflows.llm.augmented_llm import RequestParams
  fast = FastAgent("Data Analysis (Roots)")


+ # The sample data is under Database Contents License (DbCL) v1.0.
+ # Available here: https://www.kaggle.com/datasets/pavansubhasht/ibm-hr-analytics-attrition-dataset
+
+
  @fast.agent(
  name="data_analysis",
  instruction="""
@@ -4,6 +4,7 @@ This demonstrates creating multiple agents and an orchestrator to coordinate the

  import asyncio
  from mcp_agent.core.fastagent import FastAgent
+ from mcp_agent.workflows.llm.augmented_llm import RequestParams

  # Create the application
  fast = FastAgent("Agent Builder")
@@ -12,49 +13,68 @@ fast = FastAgent("Agent Builder")
  @fast.agent(
  "agent_expert",
  instruction="""
- You design agent workflows, using the practices from 'Building Effective Agents'. You provide concise
- specific guidance on design and composition. Prefer simple solutions, and don't nest workflows more
- than one level deep. Your ultimate goal will be to produce a single '.py' agent in the style
- shown to you that fulfils the Human's needs.
- Keep the application simple, define agents with appropriate MCP Servers, Tools and the Human Input Tool.
- The style of the program should be like the examples you have been showm, very little additional code (use
- very simple Python where necessary). """,
+ You design agent workflows, adhering to 'Building Effective Agents' (details to follow).
+
+ You provide concise specific guidance on design and composition. Prefer simple solutions,
+ and don't nest workflows more than one level deep.
+
+ Your objective is to produce a single '.py' agent in the style of the examples.
+
+ Keep the application simple, concentrating on defining Agent instructions, MCP Servers and
+ appropriate use of Workflows.
+
+ The style of the program should be like the examples you have been shown, with a minimum of
+ additional code, using only very simple Python where absolutely necessary.
+
+ Concentrate on the quality of the Agent instructions and "warmup" prompts given to them.
+
+ Keep requirements minimal: focus on building the prompts and the best workflow. The program
+ is expected to be adjusted and refined later.
+
+ If you are unsure about how to proceed, request input from the Human.
+
+ """,
  servers=["filesystem", "fetch"],
+ request_params=RequestParams(maxTokens=8192),
  )
  # Define worker agents
  @fast.agent(
  "requirements_capture",
  instruction="""
- You help the Human define their requirements for building Agent based systems. Keep questions short and
- simple, collaborate with the agent_expert or other agents in the workflow to refine human interaction.
- Keep requests to the Human simple and minimal. """,
+ You help the Human define their requirements for building Agent based systems.
+
+ Keep questions short, simple and minimal, always offering to complete the questioning
+ if desired. If uncertain about something, respond asking the 'agent_expert' for guidance.
+
+ Do not interrogate the Human, prefer to move the process on, as more details can be requested later
+ if needed. Remind the Human of this.
+ """,
  human_input=True,
  )
  # Define the orchestrator to coordinate the other agents
  @fast.orchestrator(
- name="orchestrator_worker",
+ name="agent_builder",
  agents=["agent_expert", "requirements_capture"],
  model="sonnet",
+ plan_type="iterative",
+ request_params=RequestParams(maxTokens=8192),
+ max_iterations=5,
  )
  async def main():
  async with fast.run() as agent:
- await agent.agent_expert("""
- - Read this paper: https://www.anthropic.com/research/building-effective-agents" to understand
- the principles of Building Effective Agents.
- - Read and examing the sample agent and workflow definitions in the current directory:
- - chaining.py - simple agent chaining example.
- - parallel.py - parallel agents example.
- - evaluator.py - evaluator optimizer example.
- - orchestrator.py - complex orchestration example.
- - router.py - workflow routing example.
- - Load the 'fastagent.config.yaml' file to see the available and configured MCP Servers.
- When producing the agent/workflow definition, keep to a simple single .py file in the style
- of the examples.
- """)
-
- await agent.orchestrator_worker(
- "Write an Agent program that fulfils the Human's needs."
- )
+ CODER_WARMUP = """
+ - Read this paper: https://www.anthropic.com/research/building-effective-agents to understand how
+ and when to use different types of Agents and Workflow types.
+
+ - Read this README https://raw.githubusercontent.com/evalstate/fast-agent/refs/heads/main/README.md file
+ to see how to use the "fast-agent" framework.
+
+ - Look at the 'fastagent.config.yaml' file to see the available and configured MCP Servers.
+
+ """
+ await agent.agent_expert(CODER_WARMUP)
+
+ await agent.agent_builder()


  if __name__ == "__main__":
@@ -45,7 +45,7 @@ fast = FastAgent("Orchestrator-Workers")
  @fast.orchestrator(
  name="orchestrate",
  agents=["finder", "writer", "proofreader"],
- plan_type="iterative",
+ plan_type="full",
  )
  async def main():
  async with fast.run() as agent:
@@ -246,15 +246,16 @@ class AnthropicAugmentedLLM(AugmentedLLM[MessageParam, Message]):
  style="dim green italic",
  )

- await self.show_assistant_message(message_text)
-
  # Process all tool calls and collect results
  tool_results = []
- for content in tool_uses:
+ for i, content in enumerate(tool_uses):
  tool_name = content.name
  tool_args = content.input
  tool_use_id = content.id

+ if i == 0:  # Only show message for first tool use
+     await self.show_assistant_message(message_text, tool_name)
+
  self.show_tool_call(available_tools, tool_name, tool_args)
  tool_call_request = CallToolRequest(
  method="tools/call",
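The change above defers displaying the assistant message until the first tool call, so it can be shown alongside the tool name instead of before the loop. A standalone sketch of that control flow (function and parameter names are hypothetical, not the library's API):

```python
# Hypothetical sketch: display the assistant message once, together with the
# first tool call, instead of unconditionally before the tool loop.

def process_tool_uses(message_text, tool_uses, show):
    results = []
    for i, tool in enumerate(tool_uses):
        if i == 0:  # only show the assistant message for the first tool use
            show(message_text, tool["name"])
        results.append(f"called {tool['name']}")
    return results

shown = []
process_tool_uses("Let me check two sources.",
                  [{"name": "fetch"}, {"name": "read_file"}],
                  show=lambda text, tool: shown.append((text, tool)))
print(shown)  # [('Let me check two sources.', 'fetch')] - shown exactly once
```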
@@ -32,6 +32,7 @@ from mcp_agent.workflows.orchestrator.orchestrator_models import (
  from mcp_agent.workflows.orchestrator.orchestrator_prompts import (
  FULL_PLAN_PROMPT_TEMPLATE,
  ITERATIVE_PLAN_PROMPT_TEMPLATE,
+ SYNTHESIZE_INCOMPLETE_PLAN_TEMPLATE,
  SYNTHESIZE_PLAN_PROMPT_TEMPLATE,
  TASK_PROMPT_TEMPLATE,
  )
@@ -90,7 +91,7 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  # Initialize with orchestrator-specific defaults
  orchestrator_params = RequestParams(
  use_history=False,  # Orchestrator doesn't support history
- max_iterations=30,  # Higher default for complex tasks
+ max_iterations=10,  # Higher default for complex tasks
  maxTokens=8192,  # Higher default for planning
  parallel_tool_calls=True,
  )
@@ -126,7 +127,6 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  self.agents = {agent.name: agent for agent in available_agents}

  # Initialize logger
- self.logger = logger
  self.name = name

  # Store agents by name - COMPLETE REWRITE OF AGENT STORAGE
@@ -213,8 +213,12 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  ) -> PlanResult:
  """Execute task with result chaining between steps"""
  iterations = 0
+ total_steps_executed = 0

  params = self.get_request_params(request_params)
+ max_steps = getattr(
+     params, "max_steps", params.max_iterations * 5
+ )  # Default to 5x max_iterations

  # Single progress event for orchestration start
  model = await self.select_model(params) or "unknown-model"
@@ -231,6 +235,9 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  )

  plan_result = PlanResult(objective=objective, step_results=[])
+ plan_result.max_iterations_reached = (
+     False  # Add a flag to track if we hit the limit
+ )

  while iterations < params.max_iterations:
  if self.plan_type == "iterative":
@@ -279,6 +286,14 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  # Execute each step, collecting results
  # Note that in iterative mode this will only be a single step
  for step in plan.steps:
+     # Check if we've hit the step limit
+     if total_steps_executed >= max_steps:
+         self.logger.warning(
+             f"Reached maximum step limit ({max_steps}) without completing objective."
+         )
+         plan_result.max_steps_reached = True
+         break
+
  step_result = await self._execute_step(
  step=step,
  previous_result=plan_result,
@@ -286,16 +301,40 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  )

  plan_result.add_step_result(step_result)
+ total_steps_executed += 1
+
+ # Check for step limit after executing steps
+ if total_steps_executed >= max_steps:
+     plan_result.max_iterations_reached = True
+     break

  logger.debug(
  f"Iteration {iterations}: Intermediate plan result:", data=plan_result
  )
  iterations += 1

- raise RuntimeError(
-     f"Task failed to complete in {params.max_iterations} iterations"
+ # If we get here, we've hit the iteration limit without completing
+ self.logger.warning(
+     f"Failed to complete in {params.max_iterations} iterations."
+ )
+
+ # Mark that we hit the iteration limit
+ plan_result.max_iterations_reached = True
+
+ # Synthesize what we have so far, but use a different prompt that explains the incomplete status
+ synthesis_prompt = SYNTHESIZE_INCOMPLETE_PLAN_TEMPLATE.format(
+     plan_result=format_plan_result(plan_result),
+     max_iterations=params.max_iterations,
  )

+ # Generate a final synthesis that acknowledges the incomplete status
+ plan_result.result = await self.planner.generate_str(
+     message=synthesis_prompt,
+     request_params=params.model_copy(update={"max_iterations": 1}),
+ )
+
+ return plan_result
+
  async def _execute_step(
  self,
  step: Step,
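Taken together, these hunks replace the 0.1.0 `RuntimeError` with a bounded loop: an iteration budget, a derived step budget (5x the iteration budget), and a partial synthesis when either budget runs out. A standalone sketch of that loop under those assumptions (names hypothetical, synthesis reduced to a string):

```python
# Hypothetical sketch of the bounded execution loop: an iteration budget, a
# derived step budget (5x iterations), and a partial synthesis instead of a
# RuntimeError when the budgets are exhausted.

def run_plan(steps_per_iteration, max_iterations=10, max_steps=None, completes_at=None):
    max_steps = max_steps if max_steps is not None else max_iterations * 5
    executed, iterations = 0, 0
    while iterations < max_iterations:
        for _ in range(steps_per_iteration):
            if executed >= max_steps:  # step budget exhausted mid-plan
                return {"result": "partial synthesis", "max_iterations_reached": True, "steps": executed}
            executed += 1
        iterations += 1
        if completes_at is not None and iterations >= completes_at:
            return {"result": "complete", "max_iterations_reached": False, "steps": executed}
    # Iteration budget exhausted: synthesize what we have rather than raising.
    return {"result": "partial synthesis", "max_iterations_reached": True, "steps": executed}

print(run_plan(steps_per_iteration=2, max_iterations=3, completes_at=2))  # complete after 4 steps
print(run_plan(steps_per_iteration=2, max_iterations=3))                  # partial after 6 steps
```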
@@ -399,18 +438,11 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  params = self.get_request_params(request_params)
  params = params.model_copy(update={"use_history": False})

- # Debug: Print agent names before formatting
- print("\n------ AGENT NAMES BEFORE FORMATTING ------")
- for agent_name in self.agents.keys():
-     print(f"Agent name: '{agent_name}'")
- print("------------------------------------------\n")
-
  # Format agents without numeric prefixes for cleaner XML
  agent_formats = []
  for agent_name in self.agents.keys():
  formatted = self._format_agent_info(agent_name)
  agent_formats.append(formatted)
- print(f"Formatted agent '{agent_name}':\n{formatted[:200]}...\n")

  agents = "\n".join(agent_formats)

@@ -423,10 +455,23 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  else "Plan Status: In Progress"
  )

+ # Fix the iteration counting display
+ max_iterations = params.max_iterations
+ # Get the actual iteration number we're on (0-based -> 1-based for display)
+ current_iteration = len(plan_result.step_results) // (
+     1 if self.plan_type == "iterative" else len(plan_result.step_results) or 1
+ )
+ current_iteration = min(current_iteration, max_iterations - 1)  # Cap at max-1
+ iterations_remaining = max(
+     0, max_iterations - current_iteration - 1
+ )  # Ensure non-negative
+ iterations_info = f"Planning Budget: Iteration {current_iteration + 1} of {max_iterations} (with {iterations_remaining} remaining)"
+
  prompt = FULL_PLAN_PROMPT_TEMPLATE.format(
  objective=objective,
  plan_result=format_plan_result(plan_result),
  plan_status=plan_status,
+ iterations_info=iterations_info,
  agents=agents,
  )
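The clamping in the display fix above can be checked in isolation. A sketch of the budget line builder (extracted from the hunk; the standalone function name is an illustration):

```python
# Sketch of the clamped "Planning Budget" line: the current iteration is
# capped at max-1 so the display never exceeds the budget, and the remaining
# count is kept non-negative.

def planning_budget(current_iteration: int, max_iterations: int) -> str:
    current_iteration = min(current_iteration, max_iterations - 1)  # cap at max-1
    remaining = max(0, max_iterations - current_iteration - 1)      # never negative
    return (f"Planning Budget: Iteration {current_iteration + 1} of "
            f"{max_iterations} (with {remaining} remaining)")

print(planning_budget(0, 10))   # Planning Budget: Iteration 1 of 10 (with 9 remaining)
print(planning_budget(12, 10))  # clamped: Iteration 10 of 10 (with 0 remaining)
```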
@@ -507,10 +552,17 @@ class Orchestrator(AugmentedLLM[MessageParamT, MessageT]):
  else "Plan Status: In Progress"
  )

+ # Add max_iterations info for the LLM
+ max_iterations = params.max_iterations
+ current_iteration = len(plan_result.step_results)
+ iterations_remaining = max_iterations - current_iteration
+ iterations_info = f"Planning Budget: {iterations_remaining} of {max_iterations} iterations remaining"
+
  prompt = ITERATIVE_PLAN_PROMPT_TEMPLATE.format(
  objective=objective,
  plan_result=format_plan_result(plan_result),
  plan_status=plan_status,
+ iterations_info=iterations_info,
  agents=agents,
  )
@@ -102,6 +102,9 @@ class PlanResult(BaseModel):
  is_complete: bool = False
  """Whether the overall plan objective is complete"""

+ max_iterations_reached: bool = False
+ """Whether the plan execution reached the maximum number of iterations without completing"""
+
  result: str | None = None
  """Result of executing the plan"""

@@ -1,3 +1,8 @@
+ """
+ Prompt templates used by the Orchestrator workflow.
+ """
+
+ # Templates for formatting results
  TASK_RESULT_TEMPLATE = """Task: {task_description}
  Result: {task_result}"""

@@ -30,6 +35,7 @@ You can analyze results from the previous steps already executed to decide if th

  <fastagent:status>
  {plan_status}
+ {iterations_info}
  </fastagent:status>
  </fastagent:data>

@@ -38,6 +44,10 @@ If the previous results achieve the objective, return is_complete=True.
  Otherwise, generate remaining steps needed.

  <fastagent:instruction>
+ You are operating in "full plan" mode, where you generate a complete plan with ALL remaining steps needed.
+ After receiving your plan, the system will execute ALL steps in your plan before asking for your input again.
+ If the plan needs multiple iterations, you'll be called again with updated results.
+
  Generate a plan with all remaining steps needed.
  Steps are sequential, but each Step can have parallel subtasks.
  For each Step, specify a description of the step and independent subtasks that can run in parallel.
@@ -68,6 +78,8 @@ Return your response in the following JSON structure:
  "is_complete": false
  }}

+ Set "is_complete" to true ONLY if you are confident the objective has been fully achieved based on work completed so far.
+
  You must respond with valid JSON only, with no triple backticks. No markdown formatting.
  No extra text. Do not wrap in ```json code fences.
  </fastagent:instruction>
@@ -92,6 +104,7 @@ to decide what to do next.

  <fastagent:status>
  {plan_status}
+ {iterations_info}
  </fastagent:status>
  </fastagent:data>

@@ -100,6 +113,9 @@ If the previous results achieve the objective, return is_complete=True.
  Otherwise, generate the next Step.

  <fastagent:instruction>
+ You are operating in "iterative plan" mode, where you generate ONLY ONE STEP at a time.
+ After each step is executed, you'll be called again to determine the next step based on updated results.
+
  Generate the next step, by specifying a description of the step and independent subtasks that can run in parallel:
  For each subtask specify:
  1. Clear description of the task that an LLM can execute
@@ -120,6 +136,8 @@ Return your response in the following JSON structure:
  "is_complete": false
  }}

+ Set "is_complete" to true ONLY if you are confident the objective has been fully achieved based on work completed so far.
+
  You must respond with valid JSON only, with no triple backticks. No markdown formatting.
  No extra text. Do not wrap in ```json code fences.
  </fastagent:instruction>
@@ -184,5 +202,35 @@ Create a comprehensive final response that addresses the original objective.
  Integrate all the information gathered across all plan steps.
  Provide a clear, complete answer that achieves the objective.
  Focus on delivering value through your synthesis, not just summarizing.
+
+ If the plan was marked as incomplete but the maximum number of iterations was reached,
+ make sure to state clearly what was accomplished and what remains to be done.
+ </fastagent:instruction>
+ """
+
+ # New template for incomplete plans due to iteration limits
+ SYNTHESIZE_INCOMPLETE_PLAN_TEMPLATE = """You need to synthesize the results of all completed plan steps into a final response.
+
+ <fastagent:data>
+ <fastagent:plan-results>
+ {plan_result}
+ </fastagent:plan-results>
+ </fastagent:data>
+
+ <fastagent:status>
+ The maximum number of iterations ({max_iterations}) was reached before the objective could be completed.
+ </fastagent:status>
+
+ <fastagent:instruction>
+ Create a comprehensive response that summarizes what was accomplished so far.
+ The objective was NOT fully completed due to reaching the maximum number of iterations.
+
+ In your response:
+ 1. Clearly state that the objective was not fully completed
+ 2. Summarize what WAS accomplished across all the executed steps
+ 3. Identify what remains to be done to complete the objective
+ 4. Organize the information to provide maximum value despite being incomplete
+
+ Focus on being transparent about the incomplete status while providing as much value as possible.
  </fastagent:instruction>
  """