biblemate 0.0.11__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,119 @@
1
+ Metadata-Version: 2.1
2
+ Name: biblemate
3
+ Version: 0.0.11
4
+ Summary: AgentMake AI MCP Servers - Easy setup of MCP servers running AgentMake AI agentic components.
5
+ Home-page: https://toolmate.ai
6
+ Author: Eliran Wong
7
+ Author-email: support@toolmate.ai
8
+ License: GNU General Public License (GPL)
9
+ Project-URL: Source, https://github.com/eliranwong/xomateai
10
+ Project-URL: Tracker, https://github.com/eliranwong/xomateai/issues
11
+ Project-URL: Documentation, https://github.com/eliranwong/xomateai/wiki
12
+ Project-URL: Funding, https://www.paypal.me/toolmate
13
+ Keywords: mcp agent toolmate ai anthropic azure chatgpt cohere deepseek genai github googleai groq llamacpp mistral ollama openai vertexai xai
14
+ Classifier: Development Status :: 5 - Production/Stable
15
+ Classifier: Intended Audience :: End Users/Desktop
16
+ Classifier: Topic :: Utilities
17
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
18
+ Classifier: Topic :: Software Development :: Build Tools
19
+ Classifier: License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)
20
+ Classifier: Programming Language :: Python :: 3.8
21
+ Classifier: Programming Language :: Python :: 3.9
22
+ Classifier: Programming Language :: Python :: 3.10
23
+ Classifier: Programming Language :: Python :: 3.11
24
+ Classifier: Programming Language :: Python :: 3.12
25
+ Requires-Python: >=3.8, <3.13
26
+ Provides-Extra: genai
27
+
28
+ =========
29
+ XoMate AI
30
+ =========
31
+
32
+ Execute. Orchestrate. Automate.
33
+
34
+ **XoMate.AI is your autonomous execution engine—automating planning, orchestration, and execution of tasks using multiple tools to resolve user requests seamlessly.**
35
+
36
+ For professionals, teams, and innovators who need more than just chat-based AI, xomate.ai is an intelligent automation agent that plans, coordinates, and executes tasks across multiple tools. Unlike basic AI chatbots, xomate.ai doesn’t just answer—it gets things done.
37
+
38
+ Core Messaging
39
+ --------------
40
+
41
+ Elevator Pitch
42
+ ~~~~~~~~~~~~~~
43
+
44
+ xomate.ai is an automation-first AI agent that takes your goals, creates a structured plan, and executes it by orchestrating multiple tools. It goes beyond conversation—delivering real results.
45
+
46
+ Value Propositions
47
+ ~~~~~~~~~~~~~~~~~~
48
+
49
+ * **Execute**: Automatically carry out tasks from start to finish.
50
+ * **Orchestrate**: Seamlessly coordinate multiple tools and APIs.
51
+ * **Automate**: Save time and effort by letting xomate.ai handle complex workflows.
52
+
53
+ Key Differentiators
54
+ ~~~~~~~~~~~~~~~~~~~
55
+
56
+ * Built on the `agentmake.ai <https://github.com/eliranwong/agentmake>`_ framework, proven through `LetMeDoIt.AI <https://github.com/eliranwong/letmedoit>`_, `ToolMate.AI <https://github.com/eliranwong/toolmate>`_ and `TeamGen AI <https://github.com/eliranwong/teamgenai>`_.
57
+ * Execution-focused, not just advisory.
58
+ * Flexible integration with existing tools and APIs.
59
+ * Scalable from individual users to enterprise workflows.
60
+ * **Versatile** – supports 16 AI backends and numerous models, leveraging the advantages of AgentMake AI.
61
+ * **Extensible** – extend functionality by interacting with additional AgentMake AI tools or third-party MCP (Model Context Protocol) servers.
62
+
63
+ XoMate AI Agentic Workflow
64
+ --------------------------
65
+
66
+ 1. **XoMate AI** receives a request from a user.
67
+ 2. **XoMate AI** analyzes the request and determines that it requires multiple steps to complete.
68
+ 3. **XoMate AI** generates a ``Master Prompt`` that outlines the steps needed to complete the request.
69
+ 4. **XoMate AI** sends the ``Master Prompt`` to a supervisor agent, who reviews the prompt and provides suggestions for improvement.
70
+ 5. **XoMate AI** sends the suggestions to a tool selection agent, who selects the most appropriate tools for each step of the ``Master Prompt``.
71
+ 6. **XoMate AI** sends the selected tools and the ``Master Prompt`` to an instruction generation agent, who converts the suggestions into clear and concise instructions for an AI assistant to follow.
72
+ 7. **XoMate AI** sends the instructions to an AI assistant, who executes them using the selected tools. When a selected tool is not built into XoMate AI, XoMate AI calls the external tool via the user-configured MCP (Model Context Protocol) servers.
73
+ 8. **XoMate AI** monitors the progress of the AI assistant and provides additional suggestions or instructions as needed.
74
+ 9. Once all steps are completed, **XoMate AI** provides a concise summary of the results to the user.
75
+ 10. The user receives the final response, which fully resolves their original request.
76
+
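The ten steps above can be sketched as a plain-Python control loop. All agent functions below are hypothetical stand-ins for illustration only (not the actual XoMate AI API), implemented as simple callables so the plan/review/select/instruct/execute flow itself can be demonstrated.

```python
# Minimal sketch of the XoMate AI agentic workflow described above.
# Every function here is a hypothetical stub; only the control flow
# mirrors the numbered steps.

def make_master_prompt(request):
    # Step 3: outline the steps needed to complete the request
    return [f"step: {part.strip()}" for part in request.split(" and ")]

def supervisor(master_prompt):
    # Step 4: review the plan and suggest improvements
    return [f"(reviewed) {step}" for step in master_prompt]

def select_tool(step, available_tools):
    # Step 5: pick the most relevant tool for a step, with a fallback
    return next((t for t in available_tools if t in step), "get_direct_text_response")

def generate_instruction(step, tool):
    # Step 6: convert a suggestion into a concrete instruction
    return f"Use `{tool}` to do {step}"

def execute(instruction):
    # Step 7: run the instruction (stubbed)
    return f"done: {instruction}"

def run_workflow(request, available_tools):
    plan = supervisor(make_master_prompt(request))   # steps 3-4
    results = []
    for step in plan:                                # steps 5-8
        tool = select_tool(step, available_tools)
        results.append(execute(generate_instruction(step, tool)))
    return results                                   # steps 9-10

results = run_workflow("search bible and summarize", ["search", "summarize"])
```

In a real run, each stub would be an LLM call; the loop structure is what the workflow description pins down.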
77
+ Development in Progress
78
+ -----------------------
79
+
80
+ 1. Agentic workflow developed and tested.
81
+ 2. Core code built for the agentic workflow.
82
+ 3. Tested with AgentMake AI MCP servers.
83
+
84
+ Pending
85
+ ~~~~~~~
86
+
87
+ * Build an action plan agent to handle random requests.
88
+ * Refine code and improve effectiveness.
89
+ * Test with third-party systems.
90
+ * Select frequently used AgentMake AI tools to include in the main library as built-in tools.
91
+ * Build CLI/TUI interfaces.
92
+ * Build a web UI.
93
+ * Test on Windows, macOS, Linux, and ChromeOS.
94
+ * Test on Android mobile devices.
95
+
96
+ Custom Features
97
+ ~~~~~~~~~~~~~~~
98
+
99
+ * Options to unload some or all built-in tools
100
+ * Custom XoMate AI system prompts
101
+ * Editable master plan
102
+ * Configurable iteration allowance
103
+ * Changeable tool selection
104
+
105
+ ... more ...
106
+
107
+ Install (upcoming ...)
108
+ ----------------------
109
+
110
+ .. code-block:: bash
111
+
112
+ pip install --upgrade xomateai
113
+
114
+ Attention: The ``xomateai`` package is currently mirroring the ``agentmakemcp`` package. Once the development of XoMate AI reaches the production stage, the actual ``xomateai`` package will be uploaded.
115
+
116
+ License
117
+ -------
118
+
119
+ This project is licensed under the MIT License - see the `LICENSE <LICENSE>`_ file for details.
@@ -0,0 +1,92 @@
1
+ =========
2
+ XoMate AI
3
+ =========
4
+
5
+ Execute. Orchestrate. Automate.
6
+
7
+ **XoMate.AI is your autonomous execution engine—automating planning, orchestration, and execution of tasks using multiple tools to resolve user requests seamlessly.**
8
+
9
+ For professionals, teams, and innovators who need more than just chat-based AI, xomate.ai is an intelligent automation agent that plans, coordinates, and executes tasks across multiple tools. Unlike basic AI chatbots, xomate.ai doesn’t just answer—it gets things done.
10
+
11
+ Core Messaging
12
+ --------------
13
+
14
+ Elevator Pitch
15
+ ~~~~~~~~~~~~~~
16
+
17
+ xomate.ai is an automation-first AI agent that takes your goals, creates a structured plan, and executes it by orchestrating multiple tools. It goes beyond conversation—delivering real results.
18
+
19
+ Value Propositions
20
+ ~~~~~~~~~~~~~~~~~~
21
+
22
+ * **Execute**: Automatically carry out tasks from start to finish.
23
+ * **Orchestrate**: Seamlessly coordinate multiple tools and APIs.
24
+ * **Automate**: Save time and effort by letting xomate.ai handle complex workflows.
25
+
26
+ Key Differentiators
27
+ ~~~~~~~~~~~~~~~~~~~
28
+
29
+ * Built on the `agentmake.ai <https://github.com/eliranwong/agentmake>`_ framework, proven through `LetMeDoIt.AI <https://github.com/eliranwong/letmedoit>`_, `ToolMate.AI <https://github.com/eliranwong/toolmate>`_ and `TeamGen AI <https://github.com/eliranwong/teamgenai>`_.
30
+ * Execution-focused, not just advisory.
31
+ * Flexible integration with existing tools and APIs.
32
+ * Scalable from individual users to enterprise workflows.
33
+ * **Versatile** – supports 16 AI backends and numerous models, leveraging the advantages of AgentMake AI.
34
+ * **Extensible** – extend functionality by interacting with additional AgentMake AI tools or third-party MCP (Model Context Protocol) servers.
35
+
36
+ XoMate AI Agentic Workflow
37
+ --------------------------
38
+
39
+ 1. **XoMate AI** receives a request from a user.
40
+ 2. **XoMate AI** analyzes the request and determines that it requires multiple steps to complete.
41
+ 3. **XoMate AI** generates a ``Master Prompt`` that outlines the steps needed to complete the request.
42
+ 4. **XoMate AI** sends the ``Master Prompt`` to a supervisor agent, who reviews the prompt and provides suggestions for improvement.
43
+ 5. **XoMate AI** sends the suggestions to a tool selection agent, who selects the most appropriate tools for each step of the ``Master Prompt``.
44
+ 6. **XoMate AI** sends the selected tools and the ``Master Prompt`` to an instruction generation agent, who converts the suggestions into clear and concise instructions for an AI assistant to follow.
45
+ 7. **XoMate AI** sends the instructions to an AI assistant, who executes them using the selected tools. When a selected tool is not built into XoMate AI, XoMate AI calls the external tool via the user-configured MCP (Model Context Protocol) servers.
46
+ 8. **XoMate AI** monitors the progress of the AI assistant and provides additional suggestions or instructions as needed.
47
+ 9. Once all steps are completed, **XoMate AI** provides a concise summary of the results to the user.
48
+ 10. The user receives the final response, which fully resolves their original request.
49
+
50
+ Development in Progress
51
+ -----------------------
52
+
53
+ 1. Agentic workflow developed and tested.
54
+ 2. Core code built for the agentic workflow.
55
+ 3. Tested with AgentMake AI MCP servers.
56
+
57
+ Pending
58
+ ~~~~~~~
59
+
60
+ * Build an action plan agent to handle random requests.
61
+ * Refine code and improve effectiveness.
62
+ * Test with third-party systems.
63
+ * Select frequently used AgentMake AI tools to include in the main library as built-in tools.
64
+ * Build CLI/TUI interfaces.
65
+ * Build a web UI.
66
+ * Test on Windows, macOS, Linux, and ChromeOS.
67
+ * Test on Android mobile devices.
68
+
69
+ Custom Features
70
+ ~~~~~~~~~~~~~~~
71
+
72
+ * Options to unload some or all built-in tools
73
+ * Custom XoMate AI system prompts
74
+ * Editable master plan
75
+ * Configurable iteration allowance
76
+ * Changeable tool selection
77
+
78
+ ... more ...
79
+
80
+ Install (upcoming ...)
81
+ ----------------------
82
+
83
+ .. code-block:: bash
84
+
85
+ pip install --upgrade xomateai
86
+
87
+ Attention: The ``xomateai`` package is currently mirroring the ``agentmakemcp`` package. Once the development of XoMate AI reaches the production stage, the actual ``xomateai`` package will be uploaded.
88
+
89
+ License
90
+ -------
91
+
92
+ This project is licensed under the MIT License - see the `LICENSE <LICENSE>`_ file for details.
File without changes
@@ -0,0 +1,26 @@
1
+ import os
2
+ from agentmake import PACKAGE_PATH, AGENTMAKE_USER_DIR, readTextFile
3
+
4
+ def get_system_suggestion(master_plan: str) -> str:
5
+ """
6
+ create system prompt for suggestion
7
+ """
8
+ package_path = os.path.join(PACKAGE_PATH, "systems", "xomate", "supervisor.md")
9
+ user_path = os.path.join(AGENTMAKE_USER_DIR, "systems", "xomate", "supervisor.md")
10
+ return readTextFile(package_path if os.path.isfile(package_path) else user_path).format(master_plan=master_plan)
11
+
12
+ def get_system_tool_instruction(tool: str, tool_description: str = "") -> str:
13
+ """
14
+ create system prompt for tool instruction
15
+ """
16
+ package_path = os.path.join(PACKAGE_PATH, "systems", "xomate", "tool_instruction.md")
17
+ user_path = os.path.join(AGENTMAKE_USER_DIR, "systems", "xomate", "tool_instruction.md")
18
+ return readTextFile(package_path if os.path.isfile(package_path) else user_path).format(tool=tool, tool_description=tool_description)
19
+
20
+ def get_system_tool_selection(available_tools: list, tool_descriptions: str) -> str:
21
+ """
22
+ create system prompt for tool selection
23
+ """
24
+ package_path = os.path.join(PACKAGE_PATH, "systems", "xomate", "tool_selection.md")
25
+ user_path = os.path.join(AGENTMAKE_USER_DIR, "systems", "xomate", "tool_selection.md")
26
+ return readTextFile(package_path if os.path.isfile(package_path) else user_path).format(available_tools=available_tools, tool_descriptions=tool_descriptions)
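The three helpers above share one pattern: prefer one template location, fall back to another, then fill placeholders with `str.format`. A minimal self-contained sketch of that pattern, using a temporary directory in place of `PACKAGE_PATH`/`AGENTMAKE_USER_DIR`:

```python
# Sketch of the template-resolution pattern used by the helpers above:
# read a .md template from a preferred location, fall back to a second
# location, then fill placeholders with str.format. The paths here are
# temporary stand-ins for the real package and user directories.
import os
import tempfile

def resolve_template(preferred, fallback, **fields):
    path = preferred if os.path.isfile(preferred) else fallback
    with open(path, encoding="utf-8") as f:
        return f.read().format(**fields)

with tempfile.TemporaryDirectory() as d:
    fallback = os.path.join(d, "supervisor.md")
    with open(fallback, "w", encoding="utf-8") as f:
        f.write("Supervise this plan:\n{master_plan}")
    missing = os.path.join(d, "does_not_exist.md")  # preferred copy absent
    prompt = resolve_template(missing, fallback, master_plan="1. read 2. summarize")
```

Because `str.format` is used, any `{placeholder}` in a template must match a keyword argument, or the call raises `KeyError`.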
@@ -0,0 +1,288 @@
1
+ from biblemate.core.systems import *
2
+ from biblemate.ui.prompts import getInput
3
+ from biblemate.ui.info import get_banner
4
+ from pathlib import Path
5
+ import asyncio, re, os
6
+ from alive_progress import alive_bar
7
+ from fastmcp import Client
8
+ from agentmake import agentmake, writeTextFile, getCurrentDateTime, AGENTMAKE_USER_DIR, USER_OS, DEVELOPER_MODE
9
+ from rich.console import Console
10
+ from rich.markdown import Markdown
11
+ from rich.progress import Progress, SpinnerColumn, TextColumn
12
+ from rich.terminal_theme import MONOKAI
13
+ if USER_OS != "Windows":
14
+ import readline # for better input experience
15
+
16
+ # MCP server client example
17
+ # testing in progress; not in production yet
18
+ client = Client("http://127.0.0.1:8083/mcp/") # !agentmakemcp agentmakemcp/examples/bible_study.py
19
+
20
+ # TODO: allow overriding default AgentMake config
21
+ AGENTMAKE_CONFIG = {
22
+ "backend": None,
23
+ "model": None,
24
+ "model_keep_alive": None,
25
+ "temperature": None,
26
+ "max_tokens": None,
27
+ "context_window": None,
28
+ "batch_size": None,
29
+ "stream": None,
30
+ "print_on_terminal": False,
31
+ "word_wrap": False,
32
+ }
33
+ MAX_STEPS = 50
34
+
35
+ async def main():
36
+
37
+ console = Console(record=True)
38
+ console.clear()
39
+ console.print(get_banner())
40
+
41
+ async with client:
42
+ await client.ping()
43
+
44
+ #resources = await client.list_resources()
45
+ #print("# Resources\n\n", resources, "\n\n")
46
+
47
+ # List available tools, resources, and prompts
48
+ tools_raw = await client.list_tools()
49
+ #print(tools_raw)
50
+ tools = {t.name: t.description for t in tools_raw}
51
+
52
+ available_tools = list(tools.keys())
53
+ if "get_direct_text_response" not in available_tools:
54
+ available_tools.insert(0, "get_direct_text_response")
55
+
56
+ # add tool description for get_direct_text_response if not exists
57
+ tool_descriptions = ""  # initialize so the += in the loop below never raises NameError
+ if "get_direct_text_response" not in tools:
58
+ tool_descriptions = f"""# TOOL DESCRIPTION: `get_direct_text_response`
59
+ Get a static text-based response directly from a text-based AI model without using any other tools. This is useful when you want to provide a simple and direct answer to a question or request, without the need for the latest online updates or task execution.\n\n\n"""
60
+ # add tool descriptions
61
+ for tool_name, tool_description in tools.items():
62
+ tool_descriptions += f"""# TOOL DESCRIPTION: `{tool_name}`
63
+ {tool_description}\n\n\n"""
64
+
65
+ prompts_raw = await client.list_prompts()
66
+ #print("# Prompts\n\n", prompts_raw, "\n\n")
67
+ prompts = {p.name: p.description for p in prompts_raw}
68
+ prompt_list = [f"/{p}" for p in prompts.keys()]
69
+ prompt_pattern = "|".join(prompt_list)
70
+ prompt_pattern = f"""^({prompt_pattern}) """
71
+
72
+ user_request = ""
73
+ messages = []
74
+
75
+ while user_request != ".quit":
76
+
77
+ # spinner while thinking
78
+ async def thinking(process):
79
+ with Progress(
80
+ SpinnerColumn(),
81
+ TextColumn("[progress.description]{task.description}"),
82
+ transient=True # This makes the progress bar disappear after the task is done
83
+ ) as progress:
84
+ # Add an indefinite task (total=None)
85
+ task_id = progress.add_task("Thinking ...", total=None)
86
+ # Create and run the async task concurrently
87
+ async_task = asyncio.create_task(process())
88
+ # Loop until the async task is done
89
+ while not async_task.done():
90
+ progress.update(task_id)
91
+ await asyncio.sleep(0.01)
92
+ await async_task
93
+ # progress bar for processing steps
94
+ async def async_alive_bar(task):
95
+ """
96
+ A coroutine that runs a progress bar while awaiting a task.
97
+ """
98
+ with alive_bar(title="Processing...", spinner='dots') as bar:
99
+ while not task.done():
100
+ bar() # Update the bar
101
+ await asyncio.sleep(0.01) # Yield control back to the event loop
102
+ return task.result()
103
+ async def process_step_async(step_number):
104
+ """
105
+ Manages the async task and the progress bar.
106
+ """
107
+ print(f"# Starting Step [{step_number}]...")
108
+ # Create the async task but don't await it yet.
109
+ task = asyncio.create_task(process_step())
110
+ # Await the custom async progress bar that awaits the task.
111
+ await async_alive_bar(task)
112
+
113
+ if messages:
114
+ console.rule()
115
+
116
+ # Original user request
117
+ # note: `python3 -m rich.emoji` for checking emoji
118
+ console.print("Enter your request :smiley: :" if not messages else "Enter a follow-up request :flexed_biceps: :")
119
+ input_suggestions = [".new", ".quit"]+prompt_list
120
+ user_request = await getInput("> ", input_suggestions)
121
+ while not user_request.strip():
122
+ user_request = await getInput("> ", input_suggestions)
123
+ # TODO: auto-prompt engineering based on the user request
124
+
125
+ if user_request in (".new", ".quit"):
126
+ # TODO: backup messages
127
+ if user_request == ".new":
128
+ user_request = ""
129
+ messages = []
130
+ console.clear()
131
+ console.print(get_banner())
132
+ continue
133
+
134
+ if re.search(prompt_pattern, user_request):
137
+ prompt_name = re.search(prompt_pattern, user_request).group(1)
138
+ user_request = user_request[len(prompt_name):]
139
+ # Call the MCP prompt
140
+ result = await client.get_prompt(prompt_name[1:], {"request": user_request})
141
+ #print(result, "\n\n")
142
+ master_plan = result.messages[0].content.text
143
+ # display info
144
+ console.print(Markdown(f"# User Request\n\n{user_request}\n\n# Master plan\n\n{master_plan}"))
145
+ else:
146
+ # display info
147
+ console.print(Markdown(f"# User Request\n\n{user_request}"), "\n")
148
+ # Generate master plan
149
+ master_plan = ""
150
+ async def generate_master_plan():
151
+ nonlocal master_plan
152
+ # Create initial prompt to create master plan
153
+ initial_prompt = f"""Provide me with the `Preliminary Action Plan` and the `Measurable Outcome` for resolving `My Request`.
154
+
155
+ # Available Tools
156
+
157
+ Available tools are: {available_tools}.
158
+
159
+ {tool_descriptions}
160
+
161
+ # My Request
162
+
163
+ {user_request}"""
164
+ console.print(Markdown("# Master plan"), "\n")
165
+ print()
166
+ master_plan = agentmake(messages+[{"role": "user", "content": initial_prompt}], system="create_action_plan", **AGENTMAKE_CONFIG)[-1].get("content", "").strip()
167
+ await thinking(generate_master_plan)
168
+ # display info
169
+ console.print(Markdown(master_plan), "\n\n")
170
+
171
+ system_suggestion = get_system_suggestion(master_plan)
172
+
173
+ # Tool selection system message
174
+ system_tool_selection = get_system_tool_selection(available_tools, tool_descriptions)
175
+
176
+ # Get the first suggestion
177
+ next_suggestion = ""
178
+ async def get_first_suggestion():
179
+ nonlocal next_suggestion
180
+ console.print(Markdown("## Suggestion [1]"), "\n")
181
+ next_suggestion = agentmake(user_request, system=system_suggestion, **AGENTMAKE_CONFIG)[-1].get("content", "").strip()
182
+ await thinking(get_first_suggestion)
183
+ console.print(Markdown(next_suggestion), "\n\n")
184
+
185
+ if not messages:
186
+ messages = [
187
+ {"role": "system", "content": "You are XoMate, an autonomous AI agent."},
188
+ {"role": "user", "content": user_request},
189
+ ]
190
+ else:
191
+ messages.append({"role": "user", "content": user_request})
192
+
193
+ step = 1
194
+ while not ("DONE" in next_suggestion or re.sub("^[^A-Za-z]*?([A-Za-z]+?)[^A-Za-z]*?$", r"\1", next_suggestion).upper() == "DONE"):
195
+
196
+ # Get tool suggestion for the next iteration
197
+ suggested_tools = []
198
+ async def get_tool_suggestion():
199
+ nonlocal suggested_tools, next_suggestion, system_tool_selection
200
+ if DEVELOPER_MODE:
201
+ console.print(Markdown(f"## Tool Selection (descending order by relevance) [{step}]"), "\n")
202
+ else:
203
+ console.print(Markdown(f"## Tool Selection [{step}]"), "\n")
204
+ # Extract suggested tools from the step suggestion
205
+ suggested_tools = agentmake(next_suggestion, system=system_tool_selection, **AGENTMAKE_CONFIG)[-1].get("content", "").strip() # Note: suggested tools are printed on terminal by default, could be hidden by setting `print_on_terminal` to false
206
+ suggested_tools = re.sub(r"^.*?(\[.*?\]).*?$", r"\1", suggested_tools, flags=re.DOTALL)
207
+ suggested_tools = eval(suggested_tools) if suggested_tools.startswith("[") and suggested_tools.endswith("]") else ["get_direct_text_response"] # fallback to direct response
208
+ await thinking(get_tool_suggestion)
209
+ if DEVELOPER_MODE:
210
+ console.print(Markdown(str(suggested_tools)))
211
+
212
+ # Use the next suggested tool
213
+ next_tool = suggested_tools[0] if suggested_tools else "get_direct_text_response"
214
+ prefix = f"## Next Tool [{step}]\n\n" if DEVELOPER_MODE else ""
215
+ console.print(Markdown(f"{prefix}`{next_tool}`"))
216
+ print()
217
+
218
+ # Get next step instruction
219
+ next_step = ""
220
+ async def get_next_step():
221
+ nonlocal next_step, next_tool, next_suggestion, tools
222
+ console.print(Markdown(f"## Next Instruction [{step}]"), "\n")
223
+ if next_tool == "get_direct_text_response":
224
+ next_step = agentmake(next_suggestion, system="xomate/direct_instruction", **AGENTMAKE_CONFIG)[-1].get("content", "").strip()
225
+ else:
226
+ next_tool_description = tools.get(next_tool, "No description available.")
227
+ system_tool_instruction = get_system_tool_instruction(next_tool, next_tool_description)
228
+ next_step = agentmake(next_suggestion, system=system_tool_instruction, **AGENTMAKE_CONFIG)[-1].get("content", "").strip()
229
+ await thinking(get_next_step)
230
+ console.print(Markdown(next_step), "\n\n")
231
+
232
+ if messages[-1]["role"] != "assistant": # first iteration
233
+ messages.append({"role": "assistant", "content": "Please provide me with an initial instruction to begin."})
234
+ messages.append({"role": "user", "content": next_step})
235
+
236
+ async def process_step():
237
+ nonlocal messages, next_tool, next_step
238
+ if next_tool == "get_direct_text_response":
239
+ messages = agentmake(messages, system="auto", **AGENTMAKE_CONFIG)
240
+ else:
241
+ try:
242
+ tool_result = await client.call_tool(next_tool, {"request": next_step})
243
+ tool_result = tool_result.content[0].text
244
+ messages[-1]["content"] += f"\n\n[Using tool `{next_tool}`]"
245
+ messages.append({"role": "assistant", "content": tool_result})
246
+ except Exception as e:
247
+ if DEVELOPER_MODE:
248
+ console.print(f"Error: {e}\nFallback to direct response...\n\n")
249
+ messages = agentmake(messages, system="auto", **AGENTMAKE_CONFIG)
250
+ await process_step_async(step)
251
+
252
+ console.print(Markdown(f"\n## Output [{step}]\n\n{messages[-1]['content']}"))
253
+
254
+ # iteration count
255
+ step += 1
256
+ if step > MAX_STEPS:
257
+ print(f"Stopped! Too many steps! MAX_STEPS is currently set to {MAX_STEPS}!")
258
+ print("You can increase it in the settings, but be careful not to create an infinite loop!")
259
+ break
260
+
261
+ # Get the next suggestion
262
+ async def get_next_suggestion():
263
+ nonlocal next_suggestion, messages, system_suggestion
264
+ console.print(Markdown(f"## Suggestion [{step}]"), "\n")
265
+ next_suggestion = agentmake(messages, system=system_suggestion, follow_up_prompt="Please provide me with the next suggestion.", **AGENTMAKE_CONFIG)[-1].get("content", "").strip()
266
+ await thinking(get_next_suggestion)
267
+ #print()
268
+ console.print(Markdown(next_suggestion), "\n")
269
+
270
+ # Backup
271
+ timestamp = getCurrentDateTime()
272
+ storagePath = os.path.join(AGENTMAKE_USER_DIR, "xomate", timestamp)
273
+ Path(storagePath).mkdir(parents=True, exist_ok=True)
274
+ # Save full conversation
275
+ conversation_file = os.path.join(storagePath, "conversation.py")
276
+ writeTextFile(conversation_file, str(messages))
277
+ # Save master plan
278
+ writeTextFile(os.path.join(storagePath, "master_plan.md"), master_plan)
279
+ # Save html
280
+ html_file = os.path.join(storagePath, "conversation.html")
281
+ console.save_html(html_file, inline_styles=True, theme=MONOKAI)
282
+ # Save text
283
+ console.save_text(os.path.join(storagePath, "conversation.md"))
284
+ # Inform users of the backup location
285
+ print(f"Conversation backup saved to {storagePath}")
286
+ print(f"HTML file saved to {html_file}\n")
287
+
288
+ asyncio.run(main())
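The tool-selection step in the script above extracts a Python-style list from free-form model output with `re.sub` and `eval`. The same parsing can be sketched more safely with `ast.literal_eval`, which refuses to execute arbitrary code; this is an assumed alternative for illustration, not the package's current code.

```python
# Sketch of the tool-list parsing step in the main loop above: pull a
# Python-style list out of free-form model output, and fall back to
# "get_direct_text_response" when nothing parseable is found.
# ast.literal_eval only accepts literals, unlike eval().
import ast
import re

def parse_suggested_tools(raw: str) -> list:
    match = re.search(r"\[.*?\]", raw, flags=re.DOTALL)
    if match:
        try:
            tools = ast.literal_eval(match.group(0))
            if isinstance(tools, list) and tools:
                return tools
        except (ValueError, SyntaxError):
            pass
    return ["get_direct_text_response"]  # fallback to direct response

tools = parse_suggested_tools("Recommended: ['search_bible', 'summarize'] based on the step")
fallback = parse_suggested_tools("no list here")
```

The fallback mirrors the script's own behavior when the model's reply does not contain a well-formed list.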
@@ -0,0 +1 @@
1
+ biblemate
@@ -0,0 +1,5 @@
1
+ agentmake>=1.0.63
2
+ agentmakemcp>=0.0.8
3
+ fastmcp
4
+ rich
5
+ alive-progress
@@ -0,0 +1,26 @@
1
+ from rich.panel import Panel
2
+ from rich.text import Text
3
+ from rich.align import Align
4
+
5
+ # Project title styling
6
+ def get_banner():
7
+ # Title styling
8
+ title = Text("XoMate AI", style="bold magenta", justify="center")
9
+ title.stylize("bold magenta underline", 0, len("XoMate AI"))
10
+ # Tagline styling
11
+ tagline = Text("Execute. Orchestrate. Automate.", style="bold cyan", justify="center")
12
+ # Combine into a panel
13
+ banner_content = Align.center(
14
+ Text("\n") + title + Text("\n") + tagline + Text("\n"),
15
+ vertical="middle"
16
+ )
17
+ banner = Panel(
18
+ banner_content,
19
+ border_style="bright_blue",
20
+ title="🚀 Gets Things Done",
21
+ title_align="left",
22
+ subtitle="Eliran Wong",
23
+ subtitle_align="right",
24
+ padding=(1, 4)
25
+ )
26
+ return banner
@@ -0,0 +1,97 @@
1
+ from agentmake.main import AGENTMAKE_USER_DIR
2
+ from agentmake.utils.system import getCliOutput
3
+ import os, shutil
4
+
5
+
6
+ async def getInput(prompt:str="Instruction: ", input_suggestions:list=None):
7
+ """
8
+ Prompt for user input
9
+ """
10
+ # place import lines here to work with stdin
11
+ from prompt_toolkit import PromptSession
12
+ from prompt_toolkit.history import FileHistory
13
+ from prompt_toolkit.completion import WordCompleter, FuzzyCompleter
14
+ from prompt_toolkit.key_binding import KeyBindings
15
+ bindings = KeyBindings()
16
+ # new chat
17
+ @bindings.add("c-n")
18
+ def _(event):
19
+ buffer = event.app.current_buffer
20
+ buffer.text = ".new"
21
+ buffer.validate_and_handle()
22
+ # quit
23
+ @bindings.add("c-q")
24
+ def _(event):
25
+ buffer = event.app.current_buffer
26
+ buffer.text = ".quit"
27
+ buffer.validate_and_handle()
28
+ # copy text to clipboard
29
+ @bindings.add("c-c")
30
+ def _(event):
31
+ try:
32
+ buffer = event.app.current_buffer
33
+ data = buffer.copy_selection()
34
+ copyText = data.text
35
+ if shutil.which("termux-clipboard-set"):
36
+ from pydoc import pipepager
37
+ pipepager(copyText, cmd="termux-clipboard-set")
38
+ else:
39
+ import pyperclip
40
+ pyperclip.copy(copyText)
41
+ except Exception:
42
+ pass
43
+ # paste clipboard text
44
+ @bindings.add("c-v")
45
+ def _(event):
46
+ try:
47
+ import pyperclip
48
+ buffer = event.app.current_buffer
49
+ buffer.cut_selection()
50
+ clipboardText = getCliOutput("termux-clipboard-get") if shutil.which("termux-clipboard-get") else pyperclip.paste()
51
+ buffer.insert_text(clipboardText)
52
+ except Exception:
53
+ pass
54
+ # insert new line
55
+ @bindings.add("c-i")
56
+ def _(event):
57
+ event.app.current_buffer.newline()
58
+ # reset buffer
59
+ @bindings.add("c-z")
60
+ def _(event):
61
+ event.app.current_buffer.reset()
62
+ # go to the beginning of the text
63
+ @bindings.add("escape", "a")
64
+ def _(event):
65
+ event.app.current_buffer.cursor_position = 0
66
+ # go to the end of the text
67
+ @bindings.add("escape", "z")
68
+ def _(event):
69
+ buffer = event.app.current_buffer
70
+ buffer.cursor_position = len(buffer.text)
71
+ # go to current line starting position
72
+ @bindings.add("home")
73
+ @bindings.add("escape", "b")
74
+ def _(event):
75
+ buffer = event.app.current_buffer
76
+ buffer.cursor_position = buffer.cursor_position - buffer.document.cursor_position_col
77
+ # go to current line ending position
78
+ @bindings.add("end")
79
+ @bindings.add("escape", "e")
80
+ def _(event):
81
+ buffer = event.app.current_buffer
82
+ buffer.cursor_position = buffer.cursor_position + buffer.document.get_end_of_line_position()
83
+
84
+ history_dir = os.path.join(AGENTMAKE_USER_DIR, "history")
85
+ if not os.path.isdir(history_dir):
86
+ from pathlib import Path
87
+ Path(history_dir).mkdir(parents=True, exist_ok=True)
88
+ session = PromptSession(history=FileHistory(os.path.join(history_dir, "xomate_history")))
89
+ completer = FuzzyCompleter(WordCompleter(input_suggestions, ignore_case=True)) if input_suggestions else None
90
+ instruction = await session.prompt_async(
91
+ prompt,
92
+ bottom_toolbar="[ENTER] submit [TAB] new line [Ctrl+N] new chat [Ctrl+Q] quit",
93
+ completer=completer,
94
+ key_bindings=bindings,
95
+ )
96
+ print()
97
+ return instruction.strip() if instruction else ""
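The input helper above wires a `FuzzyCompleter` over a `WordCompleter` for suggestions such as `.new` and `.quit`. The core idea of fuzzy completion can be sketched as case-insensitive subsequence matching; this is an illustration only, not prompt_toolkit's actual algorithm.

```python
# Rough sketch of fuzzy completion: keep suggestions that contain the
# typed fragment's characters in order, ignoring case.
import re

def fuzzy_matches(fragment: str, suggestions: list) -> list:
    # Build a pattern like "\..*?n.*?w" for the fragment ".nw"
    pattern = ".*?".join(re.escape(ch) for ch in fragment.lower())
    return [s for s in suggestions if re.search(pattern, s.lower())]

matches = fuzzy_matches(".nw", [".new", ".quit", "/bible_study"])
```

Typing a sparse fragment such as `.nw` still surfaces `.new`, which is the ergonomic point of wrapping the word completer in a fuzzy one.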
@@ -0,0 +1,119 @@
1
+ Metadata-Version: 2.1
2
+ Name: biblemate
3
+ Version: 0.0.11
4
+ Summary: AgentMake AI MCP Servers - Easy setup of MCP servers running AgentMake AI agentic components.
5
+ Home-page: https://toolmate.ai
6
+ Author: Eliran Wong
7
+ Author-email: support@toolmate.ai
8
+ License: GNU General Public License (GPL)
9
+ Project-URL: Source, https://github.com/eliranwong/xomateai
10
+ Project-URL: Tracker, https://github.com/eliranwong/xomateai/issues
11
+ Project-URL: Documentation, https://github.com/eliranwong/xomateai/wiki
12
+ Project-URL: Funding, https://www.paypal.me/toolmate
13
+ Keywords: mcp agent toolmate ai anthropic azure chatgpt cohere deepseek genai github googleai groq llamacpp mistral ollama openai vertexai xai
14
+ Classifier: Development Status :: 5 - Production/Stable
15
+ Classifier: Intended Audience :: End Users/Desktop
16
+ Classifier: Topic :: Utilities
17
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
18
+ Classifier: Topic :: Software Development :: Build Tools
19
+ Classifier: License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)
20
+ Classifier: Programming Language :: Python :: 3.8
21
+ Classifier: Programming Language :: Python :: 3.9
22
+ Classifier: Programming Language :: Python :: 3.10
23
+ Classifier: Programming Language :: Python :: 3.11
24
+ Classifier: Programming Language :: Python :: 3.12
25
+ Requires-Python: >=3.8, <3.13
26
+ Provides-Extra: genai
27
+
28
+ =========
29
+ XoMate AI
30
+ =========
31
+
32
+ Execute. Orchestrate. Automate.
33
+
34
+ **XoMate.AI is your autonomous execution engine—automating planning, orchestration, and execution of tasks using multiple tools to resolve user requests seamlessly.**
35
+
36
+ For professionals, teams, and innovators who need more than just chat-based AI, xomate.ai is an intelligent automation agent that plans, coordinates, and executes tasks across multiple tools. Unlike basic AI chatbots, xomate.ai doesn’t just answer—it gets things done.
37
+
38
+ Core Messaging
39
+ --------------
40
+
41
+ Elevator Pitch
42
+ ~~~~~~~~~~~~~~
43
+
44
+ xomate.ai is an automation-first AI agent that takes your goals, creates a structured plan, and executes it by orchestrating multiple tools. It goes beyond conversation—delivering real results.
45
+
46
+ Value Propositions
47
+ ~~~~~~~~~~~~~~~~~~
48
+
49
+ * **Execute**: Automatically carry out tasks from start to finish.
50
+ * **Orchestrate**: Seamlessly coordinate multiple tools and APIs.
51
+ * **Automate**: Save time and effort by letting xomate.ai handle complex workflows.
52
+
53
+ Key Differentiators
54
+ ~~~~~~~~~~~~~~~~~~~
55
+
56
+ * Built on the `agentmake.ai <https://github.com/eliranwong/agentmake>`_ framework, proven through `LetMeDoIt.AI <https://github.com/eliranwong/letmedoit>`_, `ToolMate.AI <https://github.com/eliranwong/toolmate>`_ and `TeamGen AI <https://github.com/eliranwong/teamgenai>`_.
57
+ * Execution-focused, not just advisory.
58
+ * Flexible integration with existing tools and APIs.
59
+ * Scalable from individual users to enterprise workflows.
60
+ * **Versatile** – supports 16 AI backends and numerous models, leveraging the advantages of AgentMake AI.
61
+ * **Extensible** – extends its functionality through additional AgentMake AI tools or third-party MCP (Model Context Protocol) servers.
62
+
63
+ XoMate AI Agentic Workflow
64
+ --------------------------
65
+
66
+ 1. **XoMate AI** receives a request from a user.
67
+ 2. **XoMate AI** analyzes the request and determines that it requires multiple steps to complete.
68
+ 3. **XoMate AI** generates a ``Master Prompt`` that outlines the steps needed to complete the request.
69
+ 4. **XoMate AI** sends the ``Master Prompt`` to a supervisor agent, who reviews the prompt and provides suggestions for improvement.
70
+ 5. **XoMate AI** sends the suggestions to a tool selection agent, who selects the most appropriate tools for each step of the ``Master Prompt``.
71
+ 6. **XoMate AI** sends the selected tools and the ``Master Prompt`` to an instruction generation agent, who converts the suggestions into clear and concise instructions for an AI assistant to follow.
72
+ 7. **XoMate AI** sends the instructions to an AI assistant, who executes them using the selected tools. When a selected tool is not built into XoMate AI, XoMate AI calls the external tool through the MCP (Model Context Protocol) servers configured by the user.
73
+ 8. **XoMate AI** monitors the progress of the AI assistant and provides additional suggestions or instructions as needed.
74
+ 9. Once all steps are completed, **XoMate AI** provides a concise summary of the results to the user.
75
+ 10. The user receives the final response, which fully resolves their original request.
76
+
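The ten-step workflow above can be sketched as a plain Python loop. This is a hypothetical illustration only: the agent functions, tool names, and signatures below are stand-ins, not the actual XoMate AI API.

```python
# Hypothetical sketch of the XoMate AI agentic workflow described above.
# Every name here is illustrative; the real xomateai internals may differ.

def supervise(master_prompt):
    # Step 4: a supervisor agent reviews the plan and suggests improvements.
    return [f"Refine: {step}" for step in master_prompt]

def select_tool(step):
    # Step 5: a tool-selection agent picks a tool for each step (stubbed).
    return "search" if "find" in step.lower() else "write"

def run_workflow(request):
    # Steps 2-3: break the request into a multi-step master prompt (stubbed).
    master_prompt = [f"Step {i + 1} for: {request}" for i in range(2)]
    results = []
    for suggestion in supervise(master_prompt):          # step 4
        tool = select_tool(suggestion)                   # step 5
        instruction = f"Use {tool} to do: {suggestion}"  # step 6
        results.append(f"done({instruction})")           # step 7: execute
    # Step 9: summarise the completed steps for the user.
    return f"Summary of {len(results)} steps: " + "; ".join(results)

print(run_workflow("compare two reports"))
```

In the real system each stubbed function would be an LLM-backed agent, and step 7 would dispatch to a built-in tool or an external MCP server rather than appending a string.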
77
+ Development in Progress
78
+ -----------------------
79
+
80
+ 1. Agentic workflow developed and tested.
81
+ 2. Core code built for the agentic workflow.
82
+ 3. Tested with AgentMake AI MCP servers.
83
+
84
+ Pending
85
+ ~~~~~~~
86
+
87
+ * Build an action plan agent to handle random requests.
88
+ * Refine code and improve effectiveness.
89
+ * Test with third-party systems.
90
+ * Select frequently used AgentMake AI tools to include in the main library as built-in tools.
91
+ * Build CLI/TUI interfaces.
92
+ * Build a web UI.
93
+ * Test on Windows, macOS, Linux, and ChromeOS.
94
+ * Test on Android mobile devices.
95
+
96
+ Custom Features
97
+ ~~~~~~~~~~~~~~~
98
+
99
+ * Options to unload some or all built-in tools
100
+ * Custom XoMate AI system prompts
101
+ * Edit the master plan
102
+ * Iteration allowance
103
+ * Change tools
104
+
105
+ ... more ...
106
+
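The custom features listed above could plausibly surface as a settings object; the following is a hypothetical sketch, with option names invented for illustration rather than taken from the actual xomateai configuration.

```python
# Hypothetical configuration sketch for the custom features listed above.
# The keys are illustrative stand-ins, not real xomateai settings.
config = {
    "unload_tools": ["search", "files"],    # unload some built-in tools
    "system_prompt": "You are XoMate AI.",  # custom system prompt
    "editable_master_plan": True,           # let the user edit the master plan
    "iteration_allowance": 10,              # cap agentic-loop iterations
}

def iterations_remaining(config, used):
    # Stop the agentic loop once the iteration allowance is spent.
    return max(config["iteration_allowance"] - used, 0)

print(iterations_remaining(config, 7))
```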
107
+ Install (upcoming ...)
108
+ ----------------------
109
+
110
+ .. code-block:: bash
111
+
112
+ pip install --upgrade xomateai
113
+
114
+ Attention: the ``xomateai`` package currently mirrors the ``agentmakemcp`` package. Once XoMate AI development reaches the production stage, the actual ``xomateai`` package will be uploaded.
115
+
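Since the package metadata declares a ``genai`` extra carrying the optional Vertex AI dependencies (``google-genai``), installing with that extra would presumably look like the following; the commands assume the package name announced above:

```shell
# Base install (package name as announced above)
pip install --upgrade xomateai

# Include the optional "genai" extra, which pulls in google-genai for Vertex AI
pip install --upgrade "xomateai[genai]"
```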
116
+ License
117
+ -------
118
+
119
+ This project is licensed under the GNU General Public License v3 or later (GPLv3+) - see the `LICENSE <LICENSE>`_ file for details.
@@ -0,0 +1,15 @@
1
+ setup.py
2
+ biblemate/README.md
3
+ biblemate/__init__.py
4
+ biblemate/main.py
5
+ biblemate/package_name.txt
6
+ biblemate/requirements.txt
7
+ biblemate.egg-info/PKG-INFO
8
+ biblemate.egg-info/SOURCES.txt
9
+ biblemate.egg-info/dependency_links.txt
10
+ biblemate.egg-info/entry_points.txt
11
+ biblemate.egg-info/requires.txt
12
+ biblemate.egg-info/top_level.txt
13
+ biblemate/core/systems.py
14
+ biblemate/ui/info.py
15
+ biblemate/ui/prompts.py
@@ -0,0 +1,2 @@
1
+ [console_scripts]
2
+ biblemate = biblemate.main:main
@@ -0,0 +1,8 @@
1
+ agentmake>=1.0.63
2
+ agentmakemcp>=0.0.8
3
+ alive-progress
4
+ fastmcp
5
+ rich
6
+
7
+ [genai]
8
+ google-genai>=1.25.0
@@ -0,0 +1 @@
1
+ biblemate
@@ -0,0 +1,4 @@
1
+ [egg_info]
2
+ tag_build =
3
+ tag_date = 0
4
+
@@ -0,0 +1,90 @@
1
+ from setuptools import setup
2
+ from setuptools.command.install import install
3
+ import os, shutil, platform, sys
4
+
5
+ # package name
6
+ package_name_0 = "package_name.txt"
7
+ with open(package_name_0, "r", encoding="utf-8") as fileObj:
8
+ package = fileObj.read()
9
+ package_name_1 = os.path.join(package, "package_name.txt") # package readme
10
+ shutil.copy(package_name_0, package_name_1)
11
+
12
+ # update package readme
13
+ latest_readme = os.path.join("..", "README_pypi.md") # github repository readme
14
+ package_readme = os.path.join(package, "README.md") # package readme
15
+ shutil.copy(latest_readme, package_readme)
16
+ with open(package_readme, "r", encoding="utf-8") as fileObj:
17
+ long_description = fileObj.read()
18
+
19
+ # get required packages
20
+ install_requires = []
21
+ with open(os.path.join(package, "requirements.txt"), "r", encoding="utf-8") as fileObj:
22
+ for line in fileObj.readlines():
23
+ mod = line.strip()
24
+ if mod:
25
+ install_requires.append(mod)
26
+
27
+ # https://packaging.python.org/en/latest/guides/distributing-packages-using-setuptools/
28
+ setup(
29
+ name=package,
30
+ version="0.0.11",
31
+ python_requires=">=3.8, <3.13",
32
+ description="AgentMake AI MCP Servers - Easy setup of MCP servers running AgentMake AI agentic components.",
33
+ long_description=long_description,
34
+ author="Eliran Wong",
35
+ author_email="support@toolmate.ai",
36
+ packages=[
37
+ package,
38
+ f"{package}.core",
39
+ f"{package}.ui",
40
+ ],
41
+ package_data={
42
+ package: ["*.*"],
43
+ f"{package}.core": ["*.*"],
44
+ f"{package}.ui": ["*.*"],
45
+ },
46
+ license="GNU General Public License (GPL)",
47
+ install_requires=install_requires,
48
+ extras_require={
49
+ 'genai': ["google-genai>=1.25.0"], # Dependencies for running Vertex AI
50
+ },
51
+ entry_points={
52
+ "console_scripts": [
53
+ f"{package}={package}.main:main",
54
+ ],
55
+ },
56
+ keywords="mcp agent toolmate ai anthropic azure chatgpt cohere deepseek genai github googleai groq llamacpp mistral ollama openai vertexai xai",
57
+ url="https://toolmate.ai",
58
+ project_urls={
59
+ "Source": "https://github.com/eliranwong/xomateai",
60
+ "Tracker": "https://github.com/eliranwong/xomateai/issues",
61
+ "Documentation": "https://github.com/eliranwong/xomateai/wiki",
62
+ "Funding": "https://www.paypal.me/toolmate",
63
+ },
64
+ classifiers=[
65
+ # Reference: https://pypi.org/classifiers/
66
+
67
+ # How mature is this project? Common values are
68
+ # 3 - Alpha
69
+ # 4 - Beta
70
+ # 5 - Production/Stable
71
+ 'Development Status :: 5 - Production/Stable',
72
+
73
+ # Indicate who your project is intended for
74
+ 'Intended Audience :: End Users/Desktop',
75
+ 'Topic :: Utilities',
76
+ 'Topic :: Scientific/Engineering :: Artificial Intelligence',
77
+ 'Topic :: Software Development :: Build Tools',
78
+
79
+ # Pick your license as you wish (should match "license" above)
80
+ 'License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)',
81
+
82
+ # Specify the Python versions you support here. In particular, ensure
83
+ # that you indicate whether you support Python 2, Python 3 or both.
84
+ 'Programming Language :: Python :: 3.8',
85
+ 'Programming Language :: Python :: 3.9',
86
+ 'Programming Language :: Python :: 3.10',
87
+ 'Programming Language :: Python :: 3.11',
88
+ 'Programming Language :: Python :: 3.12',
89
+ ],
90
+ )