biblemate 0.0.11__py3-none-any.whl → 0.0.15__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- biblemate/README.md +61 -67
- biblemate/bible_study_mcp.py +335 -0
- biblemate/config.py +1 -0
- biblemate/core/systems.py +11 -6
- biblemate/main.py +126 -42
- biblemate/requirements.txt +2 -2
- biblemate/ui/info.py +4 -4
- biblemate/ui/prompts.py +2 -2
- biblemate-0.0.15.dist-info/METADATA +119 -0
- biblemate-0.0.15.dist-info/RECORD +15 -0
- biblemate-0.0.11.dist-info/METADATA +0 -125
- biblemate-0.0.11.dist-info/RECORD +0 -13
- {biblemate-0.0.11.dist-info → biblemate-0.0.15.dist-info}/WHEEL +0 -0
- {biblemate-0.0.11.dist-info → biblemate-0.0.15.dist-info}/entry_points.txt +0 -0
- {biblemate-0.0.11.dist-info → biblemate-0.0.15.dist-info}/top_level.txt +0 -0
biblemate/README.md
CHANGED
@@ -1,92 +1,86 @@
-
-XoMate AI
-=========
+# BibleMate AI
 
-
+BibleMate AI is a groundbreaking, autonomous AI agent designed to revolutionize your Bible study. It can create study plans, coordinate multiple Bible tools, and take multi-step actions to complete complex Bible-related tasks, such as conducting an in-depth study of a particular Bible passage.
 
-
+## Core Features
 
-
+* **Autonomous AI Agent:** BibleMate AI can work independently to fulfill your Bible study requests.
+* **Multi-step Task Execution:** It can break down complex tasks into smaller, manageable steps and execute them sequentially.
+* **Rich Toolset:** Comes with over 40 built-in bible tools, powered by our comprehensive bible suite, the [UniqueBible App](https://github.com/eliranwong/UniqueBible).
+* **Customizable and Extensible:** Advanced users can customize existing tools or add new ones to suit their specific needs.
 
-
---------------
+## Built-in Tools
 
-
-~~~~~~~~~~~~~~
+To see the full list of built-in tools and their descriptions, please see the [TOOLS.md](TOOLS.md) file.
 
-
+## Demo
 
-
-~~~~~~~~~~~~~~~~~~
+See BibleMate AI in action in our YouTube video: [BibleMate AI Demo](https://www.youtube.com/watch?v=2BPZVufnKJU&feature=youtu.be)
 
-
-* **Orchestrate**: Seamlessly coordinate multiple tools and APIs.
-* **Automate**: Save time and effort by letting xomate.ai handle complex workflows.
+## Our Other AI Projects
 
-
-~~~~~~~~~~~~~~~~~~~
+BibleMate AI was developed from our agent development kit (ADK), [AgentMake AI](https://github.com/eliranwong/agentmake), and began as a side project we explored while developing [XoMate AI](https://github.com/eliranwong/xomate). Its success as a Bible study tool prompted us to develop it further as a standalone project.
 
-
-* Execution-focused, not just advisory.
-* Flexible integration with existing tools and APIs.
-* Scalable from individual users to enterprise workflows.
-* **Versatile** – supports 16 AI backends and numerous models, leveraging the advantages of AgentMake AI.
-* **Extensible** – capable of extending functionalities by interacting with additional AgentMake AI tools or third-party MCP (Model Context Protocol) servers.
+## BibleMate AI Agentic Workflow
 
-
-
-
-
-
-
-
-
-
-7. **XoMate AI** sends the instructions to an AI assistant, who executes the instructions using the selected tools. When the selected tool is not an internal tool built into XoMate AI, XoMate AI calls the external tool via the MCP (Model Context Protocol) servers configured by users.
-8. **XoMate AI** monitors the progress of the AI assistant and provides additional suggestions or instructions as needed.
-9. Once all steps are completed, **XoMate AI** provides a concise summary of the results to the user.
+1. **BibleMate AI** receives a request from a user.
+2. **BibleMate AI** analyzes the request and determines that it requires multiple steps to complete.
+3. **BibleMate AI** generates a `Master Plan` that outlines the steps needed to complete the request.
+4. **BibleMate AI** sends the `Master Plan` to a supervisor agent, who reviews the prompt and provides suggestions for improvement.
+5. **BibleMate AI** sends the suggestions to a bible tool selection agent, who selects the most appropriate bible tools for each step of the `Master Plan`.
+6. **BibleMate AI** sends the selected bible tools and the `Master Plan` to an instruction generation agent, who converts the suggestions into clear and concise instructions for an AI assistant to follow.
+7. **BibleMate AI** sends the instructions to an AI assistant, who executes the instructions using the selected bible tools.
+8. **BibleMate AI** monitors the progress of the AI assistant and provides additional suggestions or instructions as needed.
+9. Once all steps are completed, **BibleMate AI** provides a concise summary of the results to the user.
 10. The user receives the final response, which fully resolves their original request.
 
-
-
+## Getting Started
+
+> pip install --upgrade biblemate
+
+> biblemate
+
+## Configure AI Backend
+
+After BibleMate AI is launched, enter:
+
+> .backend
+
+A text editor is opened for you to edit the AgentMake AI settings. Change the `DEFAULT_AI_BACKEND` to your own choice of AI backend and enter API keys where appropriate.
+
+## AI Modes
+
+You can swap between two AI modes:
+
+Chat mode – provides direct text responses without using tools. This is useful when users have simple queries and need straightforward answers.
+
+Agent mode – a fully autonomous agent designed to plan, orchestrate tools, and take multiple actions to address user requests.
 
-
-2. Core code built for the agentic workflow.
-3. Tested with AgentMake AI MCP servers.
+How to swap?
 
-
-
+* Enter `.chat` in the BibleMate AI prompt to enable chat mode and disable agent mode.
+* Enter `.agent` in the BibleMate AI prompt to enable agent mode and disable chat mode.
 
-
-* Refine code and improve effectiveness.
-* Test with third-party systems.
-* Select frequently used AgentMake AI tools to include in the main library as built-in tools.
-* Build CLI/TUI interfaces.
-* Build a web UI.
-* Test on Windows, macOS, Linux, and ChromeOS.
-* Test on Android mobile devices.
+## Action Menu
 
-
-~~~~~~~~~~~~~~~
+*(Coming soon)*
 
-
-* custom XoMate AI system prompts
-* edit master plan
-* iteration allowance
-* change tools
+## Keyboard Shortcut
 
-
+*(Coming soon)*
 
-
-----------------------
+## Customization
 
-
+*(Coming soon)*
 
-
+## License
 
-
+This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License - see the [LICENSE](LICENSE) file for details.
 
-
--------
+## Acknowledgments
 
-
+BibleMate AI is built upon the foundations of our other projects:
+* [UniqueBible App](https://github.com/eliranwong/UniqueBible)
+* [XoMate AI](https://github.com/eliranwong/xomate)
+* [AgentMake AI](https://github.com/eliranwong/agentmake)
+* [AgentMake AI MCP](https://github.com/eliranwong/agentmakemcp)
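The numbered workflow in the README can be pictured as a plain pipeline: plan, review, pick a tool per step, then execute. The sketch below is illustrative only; the function names, the step-splitting heuristic, and the tool-selection rule are made up for demonstration and are not BibleMate AI's actual internals.

```python
# Illustrative plan -> review -> tool-selection -> execution pipeline.
# All helpers here are hypothetical stand-ins for BibleMate AI's agents.

def make_master_plan(request: str) -> list:
    # Steps 2-3: break the request into smaller steps (toy heuristic).
    return [f"step: {part.strip()}" for part in request.split(" and ")]

def supervise(plan: list) -> list:
    # Step 4: a supervisor agent could refine the plan; here it passes through.
    return plan

def select_tool(step: str) -> str:
    # Step 5: pick a bible tool per step (toy mapping over two real tool names).
    return "retrieve_bible_cross_references" if "cross-reference" in step else "retrieve_english_bible_verses"

def run_workflow(request: str) -> list:
    # Steps 6-9: pair each planned step with its selected tool for execution.
    plan = supervise(make_master_plan(request))
    return [(step, select_tool(step)) for step in plan]

print(run_workflow("read John 3:16 and find its cross-references"))
```

A real run would hand each (step, tool) pair to the instruction-generation agent and the executing AI assistant; the sketch only shows the shape of the loop.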
biblemate/bible_study_mcp.py
ADDED
@@ -0,0 +1,335 @@
+import logging
+from fastmcp import FastMCP
+from agentmake import agentmake
+
+# Configure logging before creating the FastMCP server
+logging.basicConfig(format="[%(levelname)s]: %(message)s", level=logging.ERROR)
+
+mcp = FastMCP(name="BibleMate AI")
+
+def getResponse(messages:list) -> str:
+    return messages[-1].get("content") if messages and "content" in messages[-1] else "Error!"
+
+@mcp.tool
+def compare_bible_translations(request:str) -> str:
+    """compare Bible translations; bible verse reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'input_content_plugin': 'uba/every_single_ref', 'tool': 'uba/compare'})
+    return getResponse(messages)
+
+@mcp.tool
+def retrieve_bible_study_indexes(request:str) -> str:
+    """retrieve smart indexes on studying a particular bible verse; bible verse reference must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'input_content_plugin': 'uba/every_single_ref', 'tool': 'uba/index'})
+    return getResponse(messages)
+
+@mcp.tool
+def retrieve_bible_cross_references(request:str) -> str:
+    """retrieve cross-references of Bible verses; bible verse reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'input_content_plugin': 'uba/every_single_ref', 'tool': 'uba/xref'})
+    return getResponse(messages)
+
+@mcp.tool
+def retrieve_english_bible_verses(request:str) -> str:
+    """retrieve English Bible verses; bible verse reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'tool': 'uba/net'})
+    return getResponse(messages)
+
+@mcp.tool
+def search_bible_or_run_uba_command(request:str) -> str:
+    """search the bible; run UniqueBible App UBA command; either search string or full command must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'tool': 'uba/cmd'})
+    return getResponse(messages)
+
+@mcp.tool
+def read_bible_commentary(request:str) -> str:
+    """read bible commentary; bible verse reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'tool': 'uba/ai_comment'})
+    return getResponse(messages)
+
+@mcp.tool
+def retrieve_chinese_bible_verses(request:str) -> str:
+    """retrieve Chinese Bible verses; bible verse reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'tool': 'uba/cuv'})
+    return getResponse(messages)
+
+@mcp.tool
+def refine_bible_translation(request:str) -> str:
+    """refine the translation of a Bible verse or passage"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'system': 'bible/translate'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_pastor_prayer(request:str) -> str:
+    """write a prayer, out of a church pastor's heart, based on user input"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'system': 'bible/pray'})
+    return getResponse(messages)
+
+@mcp.tool
+def ask_theologian(request:str) -> str:
+    """ask a theologian about the bible"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'system': 'bible/theologian'})
+    return getResponse(messages)
+
+@mcp.tool
+def quote_bible_verses(request:str) -> str:
+    """quote multiple bible verses in response to user request"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'system': 'bible/quote'})
+    return getResponse(messages)
+
+@mcp.tool
+def anyalyze_psalms(request:str) -> str:
+    """analyze the context and background of the Psalms in the bible"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'system': 'bible/david'})
+    return getResponse(messages)
+
+@mcp.tool
+def ask_pastor(request:str) -> str:
+    """ask a church pastor about the bible"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'system': 'bible/billy'})
+    return getResponse(messages)
+
+@mcp.tool
+def ask_bible_scholar(request:str) -> str:
+    """ask a bible scholar about the bible"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'system': 'bible/scholar'})
+    return getResponse(messages)
+
+@mcp.tool
+def explain_bible_meaning(request:str) -> str:
+    """Explain the meaning of the user-given content in reference to the Bible"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/meaning', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_new_testament_historical_context(request:str) -> str:
+    """write the Bible Historical Context of a New Testament passage in the bible; new testament bible book / chapter / passage / reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/nt_context', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_bible_questions(request:str) -> str:
+    """Write thought-provoking questions for bible study group discussion; bible book / chapter / passage / reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/questions', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_bible_devotion(request:str) -> str:
+    """Write a devotion on a bible passage; bible book / chapter / passage / reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/devotion', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def translate_hebrew_bible_verse(request:str) -> str:
+    """Translate a Hebrew bible verse; Hebrew bible text must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/translate_hebrew', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_bible_location_study(request:str) -> str:
+    """write comprehensive information on a bible location; a bible location name must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/location', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def translate_greek_bible_verse(request:str) -> str:
+    """Translate a Greek bible verse; Greek bible text must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/translate_greek', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def identify_bible_keywords(request:str) -> str:
+    """Identify bible key words from the user-given content"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/keywords', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def study_old_testament_themes(request:str) -> str:
+    """Study Bible Themes in an Old Testament passage; old testament bible book / chapter / passage / reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/ot_themes', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def study_new_testament_themes(request:str) -> str:
+    """Study Bible Themes in a New Testament passage; new testament bible book / chapter / passage / reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/nt_themes', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_old_testament_highlights(request:str) -> str:
+    """Write Highlights in an Old Testament passage in the bible; old testament bible book / chapter / passage / reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/ot_highligths', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_bible_prayer(request:str) -> str:
+    """Write a prayer pertaining to the user content in reference to the Bible"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/prayer', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_short_bible_prayer(request:str) -> str:
+    """Write a short prayer, in one paragraph only, pertaining to the user content in reference to the Bible"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/short_prayer', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_bible_character_study(request:str) -> str:
+    """Write comprehensive information on a given bible character in the bible; a bible character name must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/character', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_bible_thought_progression(request:str) -> str:
+    """write Bible Thought Progression of a bible book / chapter / passage; bible book / chapter / passage / reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/flow', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def quote_bible_promises(request:str) -> str:
+    """Quote relevant Bible promises in response to user request"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/promises', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_bible_chapter_summary(request:str) -> str:
+    """Write a detailed interpretation on a bible chapter; a bible chapter must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/chapter_summary', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_bible_perspectives(request:str) -> str:
+    """Write biblical perspectives and principles in relation to the user content"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/perspective', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def interpret_old_testament_verse(request:str) -> str:
+    """Interpret the user-given bible verse from the Old Testament in the light of its context, together with insights of biblical Hebrew studies; an old testament bible verse / reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/ot_meaning', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def expound_bible_topic(request:str) -> str:
+    """Expound the user-given topic in reference to the Bible; a topic must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/topic', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_bible_theology(request:str) -> str:
+    """write the theological messages conveyed in the user-given content, in reference to the Bible"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/theology', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def study_bible_themes(request:str) -> str:
+    """Study Bible Themes in relation to the user content"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/themes', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_bible_canonical_context(request:str) -> str:
+    """Write about the canonical context of a bible book / chapter / passage; bible book / chapter / passage / reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/canon', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_bible_related_summary(request:str) -> str:
+    """Write a summary on the user-given content in reference to the Bible"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/summary', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def interpret_new_testament_verse(request:str) -> str:
+    """Interpret the user-given bible verse from the New Testament in the light of its context, together with insights of biblical Greek studies; a new testament bible verse / reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/nt_meaning', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_new_testament_highlights(request:str) -> str:
+    """Write Highlights in a New Testament passage in the bible; new testament bible book / chapter / passage / reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/nt_highlights', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_bible_applications(request:str) -> str:
+    """Provide detailed applications of a bible passage; bible book / chapter / passage / reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/application', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_bible_book_introduction(request:str) -> str:
+    """Write a detailed introduction on a book in the bible; bible book must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/introduce_book', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_old_testament_historical_context(request:str) -> str:
+    """write the Bible Historical Context of an Old Testament passage in the bible; old testament bible book / chapter / passage / reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/ot_context', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_bible_outline(request:str) -> str:
+    """provide a detailed outline of a bible book / chapter / passage; bible book / chapter / passage / reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/outline', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_bible_insights(request:str) -> str:
+    """Write exegetical insights in detail on a bible passage; bible book / chapter / passage / reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/insights', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+@mcp.tool
+def write_bible_sermon(request:str) -> str:
+    """Write a bible sermon based on a bible passage; bible book / chapter / passage / reference(s) must be given"""
+    global agentmake, getResponse
+    messages = agentmake(request, **{'instruction': 'bible/sermon', 'system': 'auto', 'backend': 'googleai'})
+    return getResponse(messages)
+
+mcp.run(show_banner=False)
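Every tool in `bible_study_mcp.py` follows the same thin-wrapper shape: forward the request to `agentmake` with a preset, then pull the last message's content out with `getResponse`. A minimal self-contained sketch of that extraction logic, with a stub standing in for the real `agentmake` backend call (the stub and its return format are illustrative assumptions, not the library's actual implementation):

```python
# Stub standing in for agentmake(): returns an OpenAI-style message list.
def fake_agentmake(request: str, **options) -> list:
    return [
        {"role": "user", "content": request},
        {"role": "assistant", "content": f"[{options.get('tool', 'none')}] answered: {request}"},
    ]

# Same extraction logic as getResponse() in the diff:
# the last message's "content" if present, otherwise "Error!".
def get_response(messages: list) -> str:
    return messages[-1].get("content") if messages and "content" in messages[-1] else "Error!"

messages = fake_agentmake("compare John 3:16", tool="uba/compare")
print(get_response(messages))  # -> [uba/compare] answered: compare John 3:16
print(get_response([]))        # -> Error!
```

The guard matters because a failed backend call can yield an empty or malformed message list; the wrapper then returns a plain error string instead of raising `IndexError`.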
biblemate/config.py
ADDED
@@ -0,0 +1 @@
+agent_mode=True
biblemate/core/systems.py
CHANGED
@@ -1,26 +1,31 @@
 import os
 from agentmake import PACKAGE_PATH, AGENTMAKE_USER_DIR, readTextFile
+from pathlib import Path
+
+# set up user_directory for customisation
+user_directory = os.path.join(AGENTMAKE_USER_DIR, "biblemate")
+Path(user_directory).mkdir(parents=True, exist_ok=True)
 
 def get_system_suggestion(master_plan: str) -> str:
     """
     create system prompt for suggestion
     """
-    possible_system_file_path_2 = os.path.join(PACKAGE_PATH, "systems", "
-    possible_system_file_path_1 = os.path.join(AGENTMAKE_USER_DIR, "systems", "
+    possible_system_file_path_2 = os.path.join(PACKAGE_PATH, "systems", "biblemate", "supervisor.md")
+    possible_system_file_path_1 = os.path.join(AGENTMAKE_USER_DIR, "systems", "biblemate", "supervisor.md")
     return readTextFile(possible_system_file_path_2 if os.path.isfile(possible_system_file_path_2) else possible_system_file_path_1).format(master_plan=master_plan)
 
 def get_system_tool_instruction(tool: str, tool_description: str = "") -> str:
     """
     create system prompt for tool instruction
     """
-    possible_system_file_path_2 = os.path.join(PACKAGE_PATH, "systems", "
-    possible_system_file_path_1 = os.path.join(AGENTMAKE_USER_DIR, "systems", "
+    possible_system_file_path_2 = os.path.join(PACKAGE_PATH, "systems", "biblemate", "tool_instruction.md")
+    possible_system_file_path_1 = os.path.join(AGENTMAKE_USER_DIR, "systems", "biblemate", "tool_instruction.md")
     return readTextFile(possible_system_file_path_2 if os.path.isfile(possible_system_file_path_2) else possible_system_file_path_1).format(tool=tool, tool_description=tool_description)
 
 def get_system_tool_selection(available_tools: list, tool_descriptions: str) -> str:
     """
     create system prompt for tool selection
     """
-    possible_system_file_path_2 = os.path.join(PACKAGE_PATH, "systems", "
-    possible_system_file_path_1 = os.path.join(AGENTMAKE_USER_DIR, "systems", "
+    possible_system_file_path_2 = os.path.join(PACKAGE_PATH, "systems", "biblemate", "tool_selection.md")
+    possible_system_file_path_1 = os.path.join(AGENTMAKE_USER_DIR, "systems", "biblemate", "tool_selection.md")
     return readTextFile(possible_system_file_path_2 if os.path.isfile(possible_system_file_path_2) else possible_system_file_path_1).format(available_tools=available_tools, tool_descriptions=tool_descriptions)
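The changed lines in `systems.py` implement a two-location lookup: the packaged system-prompt file (under `PACKAGE_PATH`) is used if it exists, with a user-directory copy as the fallback, and the file's contents are then filled in via `str.format`. A generic sketch of that pattern using temporary directories (file and directory names here are illustrative only, not BibleMate AI's actual paths):

```python
import os
import tempfile

def read_with_fallback(primary: str, fallback: str) -> str:
    # Mirror the diff's pattern: use the primary path if it exists, else the fallback.
    path = primary if os.path.isfile(primary) else fallback
    with open(path, encoding="utf-8") as f:
        return f.read()

with tempfile.TemporaryDirectory() as pkg_dir, tempfile.TemporaryDirectory() as user_dir:
    # Only the fallback copy exists in this sketch.
    user_file = os.path.join(user_dir, "supervisor.md")
    with open(user_file, "w", encoding="utf-8") as f:
        f.write("user-customised prompt {master_plan}")
    pkg_file = os.path.join(pkg_dir, "supervisor.md")  # never created
    template = read_with_fallback(pkg_file, user_file)
    print(template.format(master_plan="Study Psalm 23"))  # -> user-customised prompt Study Psalm 23
```

One caveat of the template approach: any literal braces in a prompt file would need doubling (`{{`, `}}`), since the whole file passes through `str.format`.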
biblemate/main.py
CHANGED
@@ -1,11 +1,12 @@
|
|
1
1
|
from biblemate.core.systems import *
|
2
2
|
from biblemate.ui.prompts import getInput
|
3
3
|
from biblemate.ui.info import get_banner
|
4
|
+
from biblemate import config
|
4
5
|
from pathlib import Path
|
5
6
|
import asyncio, re, os
|
6
7
|
from alive_progress import alive_bar
|
7
8
|
from fastmcp import Client
|
8
|
-
from agentmake import agentmake, writeTextFile, getCurrentDateTime, AGENTMAKE_USER_DIR, USER_OS, DEVELOPER_MODE
|
9
|
+
from agentmake import agentmake, getDictionaryOutput, edit_configurations, writeTextFile, getCurrentDateTime, AGENTMAKE_USER_DIR, USER_OS, DEVELOPER_MODE
|
9
10
|
from rich.console import Console
|
10
11
|
from rich.markdown import Markdown
|
11
12
|
from rich.progress import Progress, SpinnerColumn, TextColumn
|
@@ -13,23 +14,14 @@ from rich.terminal_theme import MONOKAI
|
|
13
14
|
if not USER_OS == "Windows":
|
14
15
|
import readline # for better input experience
|
15
16
|
|
16
|
-
# MCP server
|
17
|
-
|
18
|
-
client = Client("http://127.0.0.1:8083/mcp/") # !agentmakemcp agentmakemcp/examples/bible_study.py
|
17
|
+
# Client to interact with the built-in Bible Study MCP server
|
18
|
+
client = Client(os.path.join(os.path.dirname(os.path.realpath(__file__)), "bible_study_mcp.py"))
|
19
19
|
|
20
|
-
# TODO: allow overriding default AgentMake config
|
21
20
|
AGENTMAKE_CONFIG = {
|
22
|
-
"backend": None,
|
23
|
-
"model": None,
|
24
|
-
"model_keep_alive": None,
|
25
|
-
"temperature": None,
|
26
|
-
"max_tokens": None,
|
27
|
-
"context_window": None,
|
28
|
-
"batch_size": None,
|
29
|
-
"stream": None,
|
30
21
|
"print_on_terminal": False,
|
31
22
|
"word_wrap": False,
|
32
23
|
}
|
24
|
+
# TODO: place in config.py
|
33
25
|
MAX_STEPS = 50
|
34
26
|
|
35
27
|
async def main():
|
@@ -48,6 +40,18 @@ async def main():
|
|
48
40
|
tools_raw = await client.list_tools()
|
49
41
|
#print(tools_raw)
|
50
42
|
tools = {t.name: t.description for t in tools_raw}
|
43
|
+
tools_schema = {}
|
44
|
+
for t in tools_raw:
|
45
|
+
schema = {
|
46
|
+
"name": t.name,
|
47
|
+
"description": t.description,
|
48
|
+
"parameters": {
|
49
|
+
"type": "object",
|
50
|
+
"properties": t.inputSchema["properties"],
|
51
|
+
"required": t.inputSchema["required"],
|
52
|
+
},
|
53
|
+
}
|
54
|
+
tools_schema[t.name] = schema
|
51
55
|
|
52
56
|
available_tools = list(tools.keys())
|
53
57
|
if not "get_direct_text_response" in available_tools:
|
@@ -69,7 +73,30 @@ Get a static text-based response directly from a text-based AI model without usi
 prompt_pattern = "|".join(prompt_list)
 prompt_pattern = f"""^({prompt_pattern}) """
 
+prompts_schema = {}
+for p in prompts_raw:
+    arg_properties = {}
+    arg_required = []
+    for a in p.arguments:
+        arg_properties[a.name] = {
+            "type": "string",
+            "description": str(a.description) if a.description else "no description available",
+        }
+        if a.required:
+            arg_required.append(a.name)
+    schema = {
+        "name": p.name,
+        "description": p.description,
+        "parameters": {
+            "type": "object",
+            "properties": arg_properties,
+            "required": arg_required,
+        },
+    }
+    prompts_schema[p.name] = schema
+
 user_request = ""
+master_plan = ""
 messages = []
 
 while not user_request == ".quit":
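The prompt-schema loop works the same way but has less to go on: MCP prompt arguments carry only a name, an optional description, and a required flag, with no type information, so every parameter is typed as a string. A runnable sketch (the dataclasses are illustrative stand-ins for the real MCP prompt records):

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative stand-ins for MCP prompt records: each argument has a name,
# an optional description, and a required flag, but no type information.
@dataclass
class PromptArgument:
    name: str
    description: Optional[str] = None
    required: bool = False

@dataclass
class Prompt:
    name: str
    description: str
    arguments: List[PromptArgument] = field(default_factory=list)

def build_prompt_schema(prompt):
    # Without type info, every argument becomes a string parameter
    properties, required = {}, []
    for a in prompt.arguments:
        properties[a.name] = {
            "type": "string",
            "description": str(a.description) if a.description else "no description available",
        }
        if a.required:
            required.append(a.name)
    return {
        "name": prompt.name,
        "description": prompt.description,
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

schema = build_prompt_schema(
    Prompt("study", "Study a passage", [PromptArgument("passage", required=True)])
)
```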
@@ -110,20 +137,70 @@ Get a static text-based response directly from a text-based AI model without usi
 # Await the custom async progress bar that awaits the task.
 await async_alive_bar(task)
 
+# backup
+def backup():
+    nonlocal console, messages, master_plan
+    timestamp = getCurrentDateTime()
+    storagePath = os.path.join(AGENTMAKE_USER_DIR, "biblemate", timestamp)
+    Path(storagePath).mkdir(parents=True, exist_ok=True)
+    # Save full conversation
+    conversation_file = os.path.join(storagePath, "conversation.py")
+    writeTextFile(conversation_file, str(messages))
+    # Save master plan
+    writeTextFile(os.path.join(storagePath, "master_plan.md"), master_plan)
+    # Save html
+    html_file = os.path.join(storagePath, "conversation.html")
+    console.save_html(html_file, inline_styles=True, theme=MONOKAI)
+    # Save markdown
+    console.save_text(os.path.join(storagePath, "conversation.md"))
+    # Inform users of the backup location
+    print(f"Conversation backup saved to {storagePath}")
+    print(f"Report saved to {html_file}\n")
+def write_config():
+    # TODO: support more configs
+    config_file = os.path.join(os.path.dirname(os.path.realpath(__file__)), "config.py")
+    writeTextFile(config_file, f"agent_mode={config.agent_mode}")
+
 if messages:
     console.rule()
 
 # Original user request
 # note: `python3 -m rich.emoji` for checking emoji
 console.print("Enter your request :smiley: :" if not messages else "Enter a follow-up request :flexed_biceps: :")
-
+action_list = {
+    ".new": "new conversation",
+    ".quit": "quit",
+    ".backend": "change backend",
+    ".chat": "enable chat mode",
+    ".agent": "enable agent mode",
+}
+input_suggestions = list(action_list.keys())+prompt_list
 user_request = await getInput("> ", input_suggestions)
 while not user_request.strip():
     user_request = await getInput("> ", input_suggestions)
-# TODO: auto-prompt engineering based on the user request
 
-
-
+# TODO: ui - radio list menu
+if user_request in action_list:
+    if user_request == ".backend":
+        edit_configurations()
+        console.rule()
+        console.print("Restart to make the changes in the backend effective!", justify="center")
+        console.rule()
+    elif user_request == ".chat":
+        config.agent_mode = False
+        write_config()
+        console.rule()
+        console.print("Chat Mode Enabled", justify="center")
+        console.rule()
+    elif user_request == ".agent":
+        config.agent_mode = True
+        write_config()
+        console.rule()
+        console.print("Agent Mode Enabled", justify="center")
+        console.rule()
+    elif user_request in (".new", ".quit"):
+        backup() # backup
+        # reset
 if user_request == ".new":
     user_request = ""
     messages = []
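The new `backup()` helper writes each session into a fresh timestamped folder under the AgentMake user directory. A stdlib-only sketch of the same pattern (`backup_conversation` and the `datetime` timestamp format are substitutions for the AgentMake helpers `getCurrentDateTime` and `writeTextFile`; the rich-console HTML/markdown exports are omitted):

```python
import tempfile
from datetime import datetime
from pathlib import Path

def backup_conversation(base_dir, messages, master_plan):
    # One folder per session, named by timestamp, mirroring backup() above
    timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    storage = Path(base_dir) / "biblemate" / timestamp
    storage.mkdir(parents=True, exist_ok=True)
    # Same artifact names the diff writes
    (storage / "conversation.py").write_text(str(messages), encoding="utf-8")
    (storage / "master_plan.md").write_text(master_plan, encoding="utf-8")
    return storage

with tempfile.TemporaryDirectory() as tmp:
    path = backup_conversation(tmp, [{"role": "user", "content": "hi"}], "# Plan")
    saved = sorted(p.name for p in path.iterdir())
```

Because `backup()` is now a nested function, both `.new` and `.quit` can trigger the same save path before the conversation state is reset, instead of duplicating the save logic at the end of `main()`.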
@@ -131,13 +208,29 @@ Get a static text-based response directly from a text-based AI model without usi
 console.print(get_banner())
 continue
 
-
+# auto prompt engineering
+user_request = agentmake(user_request, tool="improve_prompt", **AGENTMAKE_CONFIG)[-1].get("content", "").strip()[20:-4]
+
+# Chat mode
+if not config.agent_mode:
+    async def run_chat_mode():
+        nonlocal messages, user_request
+        messages = agentmake(messages if messages else user_request, system="auto", **AGENTMAKE_CONFIG)
+    await thinking(run_chat_mode)
+    console.print(Markdown(f"# User Request\n\n{messages[-2]['content']}\n\n# AI Response\n\n{messages[-1]['content']}"))
+    continue
+
 if re.search(prompt_pattern, user_request):
-    print(111)
     prompt_name = re.search(prompt_pattern, user_request).group(1)
     user_request = user_request[len(prompt_name):]
     # Call the MCP prompt
-
+    prompt_schema = prompts_schema[prompt_name[1:]]
+    prompt_properties = prompt_schema["parameters"]["properties"]
+    if len(prompt_properties) == 1 and "request" in prompt_properties: # AgentMake MCP Servers or alike
+        result = await client.get_prompt(prompt_name[1:], {"request": user_request})
+    else:
+        structured_output = getDictionaryOutput(messages=messages, schema=prompt_schema)
+        result = await client.get_prompt(prompt_name[1:], structured_output)
 #print(result, "\n\n")
 master_plan = result.messages[0].content.text
 # display info
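Prompt dispatch relies on the `prompt_pattern` regex built earlier: all prompt names are joined into one anchored alternation, and a match is split into the MCP prompt name and the remaining request text. A sketch of that dispatch (the prompt names are illustrative, and `re.escape` is added here so the literal dots in the names cannot match arbitrary characters):

```python
import re

prompt_list = [".study", ".verse"]  # illustrative prompt names
# Anchored alternation: the request must start with a prompt name plus a space
prompt_pattern = "^(" + "|".join(re.escape(p) for p in prompt_list) + ") "

def split_prompt(user_request):
    match = re.search(prompt_pattern, user_request)
    if not match:
        return None, user_request
    name = match.group(1)
    # Drop the leading dot for the MCP prompt name; the rest is the request
    return name[1:], user_request[len(name):].strip()

name, rest = split_prompt(".study John 3:16")
```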
@@ -184,7 +277,7 @@ Available tools are: {available_tools}.
 
 if not messages:
     messages = [
-        {"role": "system", "content": "You are
+        {"role": "system", "content": "You are BibleMate, an autonomous AI agent."},
         {"role": "user", "content": user_request},
     ]
 else:
@@ -221,7 +314,7 @@ Available tools are: {available_tools}.
 nonlocal next_step, next_tool, next_suggestion, tools
 console.print(Markdown(f"## Next Instruction [{step}]"), "\n")
 if next_tool == "get_direct_text_response":
-    next_step = agentmake(next_suggestion, system="
+    next_step = agentmake(next_suggestion, system="biblemate/direct_instruction", **AGENTMAKE_CONFIG)[-1].get("content", "").strip()
 else:
     next_tool_description = tools.get(next_tool, "No description available.")
     system_tool_instruction = get_system_tool_instruction(next_tool, next_tool_description)
@@ -239,10 +332,16 @@ Available tools are: {available_tools}.
 messages = agentmake(messages, system="auto", **AGENTMAKE_CONFIG)
 else:
     try:
-
+        tool_schema = tools_schema[next_tool]
+        tool_properties = tool_schema["parameters"]["properties"]
+        if len(tool_properties) == 1 and "request" in tool_properties: # AgentMake MCP Servers or alike
+            tool_result = await client.call_tool(next_tool, {"request": next_step})
+        else:
+            structured_output = getDictionaryOutput(messages=messages, schema=tool_schema)
+            tool_result = await client.call_tool(next_tool, structured_output)
         tool_result = tool_result.content[0].text
        messages[-1]["content"] += f"\n\n[Using tool `{next_tool}`]"
-        messages.append({"role": "assistant", "content": tool_result})
+        messages.append({"role": "assistant", "content": tool_result if tool_result.strip() else "Done!"})
     except Exception as e:
         if DEVELOPER_MODE:
             console.print(f"Error: {e}\nFallback to direct response...\n\n")
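The tool-call branch above uses a two-way dispatch: a tool whose schema has exactly one `request` string parameter (the AgentMake MCP convention) receives the next instruction verbatim, while any richer schema goes through structured argument extraction first. The decision logic can be isolated like this (`choose_tool_arguments` is a hypothetical name, and the lambdas stand in for `getDictionaryOutput`):

```python
def choose_tool_arguments(tool_schema, next_step, extract_structured):
    """Pick arguments for a tool call, following the dispatch in the diff:
    single-`request` tools get the instruction verbatim; anything else is
    filled in by structured extraction (getDictionaryOutput in the diff)."""
    properties = tool_schema["parameters"]["properties"]
    if len(properties) == 1 and "request" in properties:
        return {"request": next_step}
    return extract_structured(tool_schema)

simple = {"parameters": {"properties": {"request": {"type": "string"}}}}
rich_schema = {"parameters": {"properties": {"book": {}, "chapter": {}}}}
args_simple = choose_tool_arguments(simple, "Read John 3", lambda s: {})
args_rich = choose_tool_arguments(rich_schema, "Read John 3", lambda s: {"book": "John", "chapter": 3})
```

The fast path avoids an extra model round trip for the common case, while the structured path lets multi-parameter tools be filled from the whole conversation rather than from a single instruction string.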
@@ -268,21 +367,6 @@ Available tools are: {available_tools}.
 console.print(Markdown(next_suggestion), "\n")
 
 # Backup
-
-
-
-# Save full conversation
-conversation_file = os.path.join(storagePath, "conversation.py")
-writeTextFile(conversation_file, str(messages))
-# Save master plan
-writeTextFile(os.path.join(storagePath, "master_plan.md"), master_plan)
-# Save html
-html_file = os.path.join(storagePath, "conversation.html")
-console.save_html(html_file, inline_styles=True, theme=MONOKAI)
-# Save text
-console.save_text(os.path.join(storagePath, "conversation.md"))
-# Inform users of the backup location
-print(f"Conversation backup saved to {storagePath}")
-print(f"HTML file saved to {html_file}\n")
-
-asyncio.run(main())
+backup()
+
+asyncio.run(main())
biblemate/requirements.txt
CHANGED
biblemate/ui/info.py
CHANGED
@@ -5,10 +5,10 @@ from rich.align import Align
 # Project title styling
 def get_banner():
     # Title styling
-    title = Text("
-    title.stylize("bold magenta underline", 0, len("
+    title = Text("BibleMate AI", style="bold magenta", justify="center")
+    title.stylize("bold magenta underline", 0, len("BibleMate AI"))
     # Tagline styling
-    tagline = Text("
+    tagline = Text("Automate Your Bible Study", style="bold cyan", justify="center")
     # Combine into a panel
     banner_content = Align.center(
         Text("\n") + title + Text("\n") + tagline + Text("\n"),
@@ -17,7 +17,7 @@ def get_banner():
 banner = Panel(
     banner_content,
     border_style="bright_blue",
-    title="🚀
+    title="🚀 Praise the Lord!",
     title_align="left",
     subtitle="Eliran Wong",
     subtitle_align="right",
biblemate/ui/prompts.py
CHANGED
@@ -85,11 +85,11 @@ async def getInput(prompt:str="Instruction: ", input_suggestions:list=None):
 if not os.path.isdir(history_dir):
     from pathlib import Path
     Path(history_dir).mkdir(parents=True, exist_ok=True)
-session = PromptSession(history=FileHistory(os.path.join(history_dir, "
+session = PromptSession(history=FileHistory(os.path.join(history_dir, "biblemate_history")))
 completer = FuzzyCompleter(WordCompleter(input_suggestions, ignore_case=True)) if input_suggestions else None
 instruction = await session.prompt_async(
     prompt,
-    bottom_toolbar="[ENTER] submit [TAB]
+    bottom_toolbar="[ENTER] submit [TAB] linebreak [Ctrl+N] new [Ctrl+Q] quit",
     completer=completer,
     key_bindings=bindings,
 )
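The `getInput` change renames the history file; note the guard just before it, which creates the history directory before prompt_toolkit's `FileHistory` opens a file inside it. The same ensure-then-join pattern in isolation (`history_file` is an illustrative helper name):

```python
import os
import tempfile
from pathlib import Path

def history_file(history_dir, name="biblemate_history"):
    # Create the directory first; opening a history file under a
    # missing parent directory would otherwise fail
    Path(history_dir).mkdir(parents=True, exist_ok=True)
    return os.path.join(history_dir, name)

with tempfile.TemporaryDirectory() as tmp:
    target = history_file(os.path.join(tmp, "nested", "dir"))
    parent_exists = os.path.isdir(os.path.dirname(target))
```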
@@ -0,0 +1,119 @@
+Metadata-Version: 2.1
+Name: biblemate
+Version: 0.0.15
+Summary: BibleMate AI - Automate Your Bible Study
+Home-page: https://toolmate.ai
+Author: Eliran Wong
+Author-email: support@marvel.bible
+License: GNU General Public License (GPL)
+Project-URL: Source, https://github.com/eliranwong/biblemate
+Project-URL: Tracker, https://github.com/eliranwong/biblemate/issues
+Project-URL: Documentation, https://github.com/eliranwong/biblemate/wiki
+Project-URL: Funding, https://www.paypal.me/toolmate
+Keywords: mcp agent toolmate ai anthropic azure chatgpt cohere deepseek genai github googleai groq llamacpp mistral ollama openai vertexai xai
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: End Users/Desktop
+Classifier: Topic :: Utilities
+Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+Classifier: Topic :: Software Development :: Build Tools
+Classifier: License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
+Classifier: Programming Language :: Python :: 3.11
+Classifier: Programming Language :: Python :: 3.12
+Requires-Python: >=3.8, <3.13
+Requires-Dist: agentmake >=1.0.65
+Requires-Dist: agentmakemcp >=0.0.8
+Requires-Dist: alive-progress
+Requires-Dist: fastmcp[cli]
+Requires-Dist: rich
+Provides-Extra: genai
+Requires-Dist: google-genai >=1.25.0 ; extra == 'genai'
+
+# BibleMate AI
+
+BibleMate AI is a groundbreaking, autonomous AI agent designed to revolutionize your Bible study. It can create study plans, coordinate multiple Bible tools, and take multi-step actions to complete complex Bible-related tasks, such as conducting an in-depth study of a particular Bible passage.
+
+## Core Features
+
+* **Autonomous AI Agent:** BibleMate AI can work independently to fulfill your bible study requests.
+* **Multi-step Task Execution:** It can break down complex tasks into smaller, manageable steps and execute them sequentially.
+* **Rich Toolset:** Comes with over 40 built-in bible tools, powered by our comprehensive bible suite, the [UniqueBible App](https://github.com/eliranwong/UniqueBible).
+* **Customizable and Extensible:** Advanced users can customize existing tools or add new ones to suit their specific needs.
+
+## Built-in Tools
+
+To see the full list of built-in tools and their descriptions, please see the [TOOLS.md](TOOLS.md) file.
+
+## Demo
+
+See BibleMate AI in action in our YouTube video: [BibleMate AI Demo](https://www.youtube.com/watch?v=2BPZVufnKJU&feature=youtu.be)
+
+## Our Other AI Projects
+
+BibleMate AI was developed from our agent development kit (ADK), [AgentMake AI](https://github.com/eliranwong/agentmake), and began as a side project we explored while developing [XoMate AI](https://github.com/eliranwong/xomate). Its success as a bible study tool prompted us to develop it further as a standalone project.
+
+## BibleMate AI Agentic Workflow
+
+1. **BibleMate AI** receives a request from a user.
+2. **BibleMate AI** analyzes the request and determines that it requires multiple steps to complete.
+3. **BibleMate AI** generates a `Master Plan` that outlines the steps needed to complete the request.
+4. **BibleMate AI** sends the `Master Plan` to a supervisor agent, who reviews the plan and provides suggestions for improvement.
+5. **BibleMate AI** sends the suggestions to a bible tool selection agent, who selects the most appropriate bible tools for each step of the `Master Plan`.
+6. **BibleMate AI** sends the selected bible tools and the `Master Plan` to an instruction generation agent, who converts the suggestions into clear and concise instructions for an AI assistant to follow.
+7. **BibleMate AI** sends the instructions to an AI assistant, who executes the instructions using the selected bible tools.
+8. **BibleMate AI** monitors the progress of the AI assistant and provides additional suggestions or instructions as needed.
+9. Once all steps are completed, **BibleMate AI** provides a concise summary of the results to the user.
+10. The user receives the final response, which fully resolves their original request.
+
+## Getting Started
+
+> pip install --upgrade biblemate
+
+> biblemate
+
+## Configure AI Backend
+
+After BibleMate AI is launched, enter:
+
+> .backend
+
+A text editor is opened for you to edit the AgentMake AI settings. Change the `DEFAULT_AI_BACKEND` to your own choice of AI backend and enter API keys where appropriate.
+
+## AI Modes
+
+You can swap between two AI modes:
+
+Chat mode – provides users with direct text responses without using tools. This is useful when users have simple queries and need straightforward answers.
+
+Agent mode – a fully autonomous agent designed to plan, orchestrate tools, and take multiple actions to address user requests.
+
+How to swap?
+
+* Enter `.chat` in the BibleMate AI prompt to enable chat mode and disable agent mode.
+* Enter `.agent` in the BibleMate AI prompt to enable agent mode and disable chat mode.
+
+## Action Menu
+
+*(Coming soon)*
+
+## Keyboard Shortcut
+
+*(Coming soon)*
+
+## Customization
+
+*(Coming soon)*
+
+## License
+
+This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License - see the [LICENSE](LICENSE) file for details.
+
+## Acknowledgments
+
+BibleMate AI is built upon the foundations of our other projects:
+* [UniqueBible App](https://github.com/eliranwong/UniqueBible)
+* [XoMate AI](https://github.com/eliranwong/xomate)
+* [AgentMake AI](https://github.com/eliranwong/agentmake)
+* [AgentMake AI MCP](https://github.com/eliranwong/agentmakemcp)
@@ -0,0 +1,15 @@
+biblemate/README.md,sha256=Q2CfAjq0D4vMe6iFSrv-818eKEmMgnmqO38fX3Y7xl0,4136
+biblemate/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+biblemate/bible_study_mcp.py,sha256=3CsDgR4Gf1f0ULsRf92WeN9WDogSEbW9k4kAIr1clys,15642
+biblemate/config.py,sha256=ktpLv_5qdbf3FErxUIdCvVc9MO6kQH4Zt_omoJ7msIs,15
+biblemate/main.py,sha256=lTUr9nY5Obbj75b3re6TG7tOJ8_ncjIqIhMoXQ2piq0,18483
+biblemate/package_name.txt,sha256=WkkuEEkgw7EKpXV8GshpzhZlwRor1wpotTS7vP24b_g,9
+biblemate/requirements.txt,sha256=_R2QI82CzbgXzl3QGlowj7R_L6LE47w2XUnM2NN_dFE,70
+biblemate/core/systems.py,sha256=nG_NgcLSRhdaHuxuCPN5ZfJUhP88kdfwhRCvRk4RLjI,1874
+biblemate/ui/info.py,sha256=QRCno0CYUHVoOtVkZIxVamZONmtI7KRmOT2YoUagY5s,811
+biblemate/ui/prompts.py,sha256=mxdC5BU7NMok9MOm1E39MHSrxB9gSRqGY7HsOc--rRg,3484
+biblemate-0.0.15.dist-info/METADATA,sha256=gjwisZfW4fbQNtjRWtcutqnnfXlhZttB3GTcfV1j55M,5631
+biblemate-0.0.15.dist-info/WHEEL,sha256=oiQVh_5PnQM0E3gPdiz09WCNmwiHDMaGer_elqB3coM,92
+biblemate-0.0.15.dist-info/entry_points.txt,sha256=tbEfTFr6LhPR1E_zP3CsPwJsmG-G4MCnJ3FcQEMiqo0,50
+biblemate-0.0.15.dist-info/top_level.txt,sha256=pq9uX0tAS0bizZcZ5GW5zIoDLQBa-b5QDlDGsdHNgiU,10
+biblemate-0.0.15.dist-info/RECORD,,
@@ -1,125 +0,0 @@
-Metadata-Version: 2.1
-Name: biblemate
-Version: 0.0.11
-Summary: AgentMake AI MCP Servers - Easy setup of MCP servers running AgentMake AI agentic components.
-Home-page: https://toolmate.ai
-Author: Eliran Wong
-Author-email: support@toolmate.ai
-License: GNU General Public License (GPL)
-Project-URL: Source, https://github.com/eliranwong/xomateai
-Project-URL: Tracker, https://github.com/eliranwong/xomateai/issues
-Project-URL: Documentation, https://github.com/eliranwong/xomateai/wiki
-Project-URL: Funding, https://www.paypal.me/toolmate
-Keywords: mcp agent toolmate ai anthropic azure chatgpt cohere deepseek genai github googleai groq llamacpp mistral ollama openai vertexai xai
-Classifier: Development Status :: 5 - Production/Stable
-Classifier: Intended Audience :: End Users/Desktop
-Classifier: Topic :: Utilities
-Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
-Classifier: Topic :: Software Development :: Build Tools
-Classifier: License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)
-Classifier: Programming Language :: Python :: 3.8
-Classifier: Programming Language :: Python :: 3.9
-Classifier: Programming Language :: Python :: 3.10
-Classifier: Programming Language :: Python :: 3.11
-Classifier: Programming Language :: Python :: 3.12
-Requires-Python: >=3.8, <3.13
-Requires-Dist: agentmake >=1.0.63
-Requires-Dist: agentmakemcp >=0.0.8
-Requires-Dist: alive-progress
-Requires-Dist: fastmcp
-Requires-Dist: rich
-Provides-Extra: genai
-Requires-Dist: google-genai >=1.25.0 ; extra == 'genai'
-
-=========
-XoMate AI
-=========
-
-Execute. Orchestrate. Automate.
-
-**XoMate.AI is your autonomous execution engine—automating planning, orchestration, and execution of tasks using multiple tools to resolve user requests seamlessly.**
-
-For professionals, teams, and innovators who need more than just chat-based AI, xomate.ai is an intelligent automation agent that plans, coordinates, and executes tasks across multiple tools. Unlike basic AI chatbots, xomate.ai doesn't just answer—it gets things done.
-
-Core Messaging
---------------
-
-Elevator Pitch
-~~~~~~~~~~~~~~
-
-xomate.ai is an automation-first AI agent that takes your goals, creates a structured plan, and executes it by orchestrating multiple tools. It goes beyond conversation—delivering real results.
-
-Value Propositions
-~~~~~~~~~~~~~~~~~~
-
-* **Execute**: Automatically carry out tasks from start to finish.
-* **Orchestrate**: Seamlessly coordinate multiple tools and APIs.
-* **Automate**: Save time and effort by letting xomate.ai handle complex workflows.
-
-Key Differentiators
-~~~~~~~~~~~~~~~~~~~
-
-* Built on the `agentmake.ai <https://github.com/eliranwong/agentmake>`_ framework, proven through `LetMeDoIt.AI <https://github.com/eliranwong/letmedoit>`_, `ToolMate.AI <https://github.com/eliranwong/toolmate>`_ and `TeamGen AI <https://github.com/eliranwong/teamgenai>`_.
-* Execution-focused, not just advisory.
-* Flexible integration with existing tools and APIs.
-* Scalable from individual users to enterprise workflows.
-* **Versatile** – supports 16 AI backends and numerous models, leveraging the advantages of AgentMake AI.
-* **Extensible** – capable of extending functionalities by interacting with additional AgentMake AI tools or third-party MCP (Model Context Protocol) servers.
-
-XoMate AI Agentic Workflow
---------------------------
-
-1. **XoMate AI** receives a request from a user.
-2. **XoMate AI** analyzes the request and determines that it requires multiple steps to complete.
-3. **XoMate AI** generates a ``Master Prompt`` that outlines the steps needed to complete the request.
-4. **XoMate AI** sends the ``Master Prompt`` to a supervisor agent, who reviews the prompt and provides suggestions for improvement.
-5. **XoMate AI** sends the suggestions to a tool selection agent, who selects the most appropriate tools for each step of the ``Master Prompt``.
-6. **XoMate AI** sends the selected tools and the ``Master Prompt`` to an instruction generation agent, who converts the suggestions into clear and concise instructions for an AI assistant to follow.
-7. **XoMate AI** sends the instructions to an AI assistant, who executes the instructions using the selected tools. When the selected tool is not an internal tool built in with XoMate AI, XoMate AI calls the external tool by interacting with the MCP (Model Context Protocol) servers configured by users.
-8. **XoMate AI** monitors the progress of the AI assistant and provides additional suggestions or instructions as needed.
-9. Once all steps are completed, **XoMate AI** provides a concise summary of the results to the user.
-10. The user receives the final response, which fully resolves their original request.
-
-Development in Progress
------------------------
-
-1. Agentic workflow developed and tested.
-2. Core code built for the agentic workflow.
-3. Tested with AgentMake AI MCP servers.
-
-Pending
-~~~~~~~
-
-* Build an action plan agent to handle random requests.
-* Refine code and improve effectiveness.
-* Test with third-party systems.
-* Select frequently used AgentMake AI tools to include in the main library as built-in tools.
-* Build CLI/TUI interfaces.
-* Build a web UI.
-* Test on Windows, macOS, Linux, and ChromeOS.
-* Test on Android mobile devices.
-
-Custom Features
-~~~~~~~~~~~~~~~
-
-* options to unload some or all built-in tools
-* custom XoMate AI system prompts
-* edit master plan
-* iteration allowance
-* change tools
-
-... more ...
-
-Install (upcoming ...)
-----------------------
-
-.. code-block:: bash
-
-    pip install --upgrade xomateai
-
-Attention: The ``xomateai`` package is currently mirroring the ``agentmakemcp`` package. Once the development of XoMate AI reaches the production stage, the actual ``xomateai`` package will be uploaded.
-
-License
--------
-
-This project is licensed under the MIT License - see the `LICENSE <LICENSE>`_ file for details.
@@ -1,13 +0,0 @@
-biblemate/README.md,sha256=6_KBDZbqqeGJRmsVSfcI2cFJnIhsNru2RXf_jdhs0Qc,4372
-biblemate/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
-biblemate/main.py,sha256=T42FJ8d3jX-wswuHAWB7odAoyEsmdWRXdRYBkiSNpa4,14230
-biblemate/package_name.txt,sha256=WkkuEEkgw7EKpXV8GshpzhZlwRor1wpotTS7vP24b_g,9
-biblemate/requirements.txt,sha256=3jbAuJYrYEaFdfYPZciQYXtPeTL1qY2k_Frwaj209qY,65
-biblemate/core/systems.py,sha256=YQUY0rGxUxHIw4JMMim_WJIoVMvtyLgV3b3mlMPbwgk,1669
-biblemate/ui/info.py,sha256=MSebTb3dPIHSUaZ-Y5X6FFCvYIIWJC0iIdhmTyiK1GA,811
-biblemate/ui/prompts.py,sha256=UD4Ty1y2Oo4Ov25j2LIgFz6PTBobi8KbB9J9RAbVRks,3485
-biblemate-0.0.11.dist-info/METADATA,sha256=8pqhET-OVtpGejT6tisw3z4ZM9z6o8mGMe1J0GppLCs,5911
-biblemate-0.0.11.dist-info/WHEEL,sha256=oiQVh_5PnQM0E3gPdiz09WCNmwiHDMaGer_elqB3coM,92
-biblemate-0.0.11.dist-info/entry_points.txt,sha256=tbEfTFr6LhPR1E_zP3CsPwJsmG-G4MCnJ3FcQEMiqo0,50
-biblemate-0.0.11.dist-info/top_level.txt,sha256=pq9uX0tAS0bizZcZ5GW5zIoDLQBa-b5QDlDGsdHNgiU,10
-biblemate-0.0.11.dist-info/RECORD,,
File without changes
File without changes
File without changes