biblemate 0.0.20__tar.gz → 0.0.22__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {biblemate-0.0.20 → biblemate-0.0.22}/PKG-INFO +53 -7
- {biblemate-0.0.20 → biblemate-0.0.22}/biblemate/README.md +52 -6
- {biblemate-0.0.20 → biblemate-0.0.22}/biblemate/main.py +113 -71
- {biblemate-0.0.20 → biblemate-0.0.22}/biblemate/requirements.txt +1 -1
- {biblemate-0.0.20 → biblemate-0.0.22}/biblemate.egg-info/PKG-INFO +53 -7
- {biblemate-0.0.20 → biblemate-0.0.22}/biblemate.egg-info/requires.txt +1 -1
- {biblemate-0.0.20 → biblemate-0.0.22}/setup.py +1 -1
- {biblemate-0.0.20 → biblemate-0.0.22}/biblemate/__init__.py +0 -0
- {biblemate-0.0.20 → biblemate-0.0.22}/biblemate/bible_study_mcp.py +0 -0
- {biblemate-0.0.20 → biblemate-0.0.22}/biblemate/config.py +0 -0
- {biblemate-0.0.20 → biblemate-0.0.22}/biblemate/core/systems.py +0 -0
- {biblemate-0.0.20 → biblemate-0.0.22}/biblemate/package_name.txt +0 -0
- {biblemate-0.0.20 → biblemate-0.0.22}/biblemate/ui/info.py +0 -0
- {biblemate-0.0.20 → biblemate-0.0.22}/biblemate/ui/prompts.py +0 -0
- {biblemate-0.0.20 → biblemate-0.0.22}/biblemate.egg-info/SOURCES.txt +0 -0
- {biblemate-0.0.20 → biblemate-0.0.22}/biblemate.egg-info/dependency_links.txt +0 -0
- {biblemate-0.0.20 → biblemate-0.0.22}/biblemate.egg-info/entry_points.txt +0 -0
- {biblemate-0.0.20 → biblemate-0.0.22}/biblemate.egg-info/top_level.txt +0 -0
- {biblemate-0.0.20 → biblemate-0.0.22}/setup.cfg +0 -0
{biblemate-0.0.20 → biblemate-0.0.22}/PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: biblemate
-Version: 0.0.20
+Version: 0.0.22
 Summary: BibleMate AI - Automate Your Bible Study
 Home-page: https://toolmate.ai
 Author: Eliran Wong
@@ -88,17 +88,63 @@ How to swap?
 * Enter `.chat` in BibleMate AI prompt to enable chat mode and disable agent mode.
 * Enter `.agent` in BibleMate AI prompt to enable agent mode and disable chat mode.
 
-## 
+## Manual Tool Selection
+
+In some cases, you may want to specify a particular tool for a simple task, rather than having a tool automatically selected in the fully automatic `agent mode`.
+
+You can specify a single tool by prefixing a tool name with `@` at the beginning of your prompt. For example,
+
+```
+@retrieve_bible_cross_references Deut 6:4; John 3:16
+```
+
+Watch this video: https://youtu.be/50m1KRj6uhs
 
-
+## Custom Master Plan with Multiple Tools
 
-
+In some cases, you may want to specify a `custom plan` with multiple tools specified for different steps for a complex task, rather than having a `master plan` automatically generated in fully automatic agent mode.
 
-
+You can use a custom 'Master Plan' of your own, instead of one generated by BibleMate AI. To do this, start your BibleMate AI prompt with '@@' followed by your own master plan for a Bible study. For example,
 
-
+```
+@@ Analyze John 3:16 with the following steps:
+1. Call tool 'retrieve_english_bible_verses' for Bible text,
+2. Call tool 'retrieve_bible_cross_references' for Bible cross-references,
+3. Call tool 'interpret_new_testament_verse' for interpretation, and
+4. Call tool 'write_bible_theology' to explain its theology.
+```
+
+Watch this video: https://youtu.be/Lejq0sAx030
+
+The '@@' trick works even when you are using 'chat' mode with 'agent' mode disabled.
+
+## Action Menu
 
-
+There is a set of predefined entries, that starts with a dot sign `.`:
+
+- `.new` - new conversation
+- `.quit` - quit
+- `.backend` - change backend
+- `.chat` - enable chat mode
+- `.agent` - enable agent mode
+- `.tools` - list available tools
+- `.backup` - backup conversation
+- `.open` - open a file or directory, e.g. `.open /home/user/report.html`
+
+## Keyboard Shortcuts
+
+The following key bindings are supported in BibleMate AI prompt field:
+
+- `Ctrl+N` new conversation
+- `Ctrl+Q` quit
+- `Ctrl+C` copy selected prompt text
+- `Ctrl+V` paste text in a prompt
+- `Ctrl+I` or `TAB` new line
+- `Ctrl+Z` clear prompt text
+- `Esc+a` jump to the beginning of a prompt
+- `Esc+z` jump to the end of a prompt
+- `Esc+b` or `HOME` jump to the beginning of a line in a prompt
+- `Esc+e` or `END` jump to the end of a line in a prompt
 
 ## License
 
{biblemate-0.0.20 → biblemate-0.0.22}/biblemate/README.md

@@ -61,17 +61,63 @@ How to swap?
 * Enter `.chat` in BibleMate AI prompt to enable chat mode and disable agent mode.
 * Enter `.agent` in BibleMate AI prompt to enable agent mode and disable chat mode.
 
-## 
+## Manual Tool Selection
+
+In some cases, you may want to specify a particular tool for a simple task, rather than having a tool automatically selected in the fully automatic `agent mode`.
+
+You can specify a single tool by prefixing a tool name with `@` at the beginning of your prompt. For example,
+
+```
+@retrieve_bible_cross_references Deut 6:4; John 3:16
+```
+
+Watch this video: https://youtu.be/50m1KRj6uhs
 
-
+## Custom Master Plan with Multiple Tools
 
-
+In some cases, you may want to specify a `custom plan` with multiple tools specified for different steps for a complex task, rather than having a `master plan` automatically generated in fully automatic agent mode.
 
-
+You can use a custom 'Master Plan' of your own, instead of one generated by BibleMate AI. To do this, start your BibleMate AI prompt with '@@' followed by your own master plan for a Bible study. For example,
 
-
+```
+@@ Analyze John 3:16 with the following steps:
+1. Call tool 'retrieve_english_bible_verses' for Bible text,
+2. Call tool 'retrieve_bible_cross_references' for Bible cross-references,
+3. Call tool 'interpret_new_testament_verse' for interpretation, and
+4. Call tool 'write_bible_theology' to explain its theology.
+```
+
+Watch this video: https://youtu.be/Lejq0sAx030
+
+The '@@' trick works even when you are using 'chat' mode with 'agent' mode disabled.
+
+## Action Menu
 
-
+There is a set of predefined entries, that starts with a dot sign `.`:
+
+- `.new` - new conversation
+- `.quit` - quit
+- `.backend` - change backend
+- `.chat` - enable chat mode
+- `.agent` - enable agent mode
+- `.tools` - list available tools
+- `.backup` - backup conversation
+- `.open` - open a file or directory, e.g. `.open /home/user/report.html`
+
+## Keyboard Shortcuts
+
+The following key bindings are supported in BibleMate AI prompt field:
+
+- `Ctrl+N` new conversation
+- `Ctrl+Q` quit
+- `Ctrl+C` copy selected prompt text
+- `Ctrl+V` paste text in a prompt
+- `Ctrl+I` or `TAB` new line
+- `Ctrl+Z` clear prompt text
+- `Esc+a` jump to the beginning of a prompt
+- `Esc+z` jump to the end of a prompt
+- `Esc+b` or `HOME` jump to the beginning of a line in a prompt
+- `Esc+e` or `END` jump to the end of a line in a prompt
 
 ## License
 
{biblemate-0.0.20 → biblemate-0.0.22}/biblemate/main.py

@@ -39,6 +39,7 @@ async def main_async():
 tools_raw = await client.list_tools()
 #print(tools_raw)
 tools = {t.name: t.description for t in tools_raw}
+tools = dict(sorted(tools.items()))
 tools_schema = {}
 for t in tools_raw:
     schema = {
@@ -55,6 +56,7 @@ async def main_async():
 available_tools = list(tools.keys())
 if not "get_direct_text_response" in available_tools:
     available_tools.insert(0, "get_direct_text_response")
+available_tools_pattern = "|".join(available_tools)
 
 # add tool description for get_direct_text_response if not exists
 if not "get_direct_text_response" in tools:
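The new `available_tools_pattern` line joins every tool name into a single regex alternation; later in this diff it is used to detect an `@tool` prefix on the prompt. A minimal, self-contained sketch of that mechanism (the tool names and sample prompt below are hypothetical; in main.py the list comes from `client.list_tools()`):

```python
import re

# Hypothetical values for illustration; main.py builds the list from client.list_tools().
available_tools = ["get_direct_text_response", "retrieve_bible_cross_references"]
# Assumes tool names contain no regex metacharacters (otherwise re.escape would be needed).
available_tools_pattern = "|".join(available_tools)

user_request = "@retrieve_bible_cross_references Deut 6:4; John 3:16"
match = re.search(f"^@({available_tools_pattern}) ", user_request)
if match:
    specified_tool = match.group(1)                        # "retrieve_bible_cross_references"
    user_request = user_request[len(specified_tool) + 2:]  # drop the "@tool " prefix
    print(specified_tool, "->", user_request)              # ... -> Deut 6:4; John 3:16
```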
@@ -126,13 +128,14 @@ Get a static text-based response directly from a text-based AI model without usi
         bar() # Update the bar
         await asyncio.sleep(0.01) # Yield control back to the event loop
     return task.result()
-async def 
+async def process_tool(tool, tool_instruction, step_number=None):
     """
     Manages the async task and the progress bar.
     """
-
+    if step_number:
+        print(f"# Starting Step [{step_number}]...")
     # Create the async task but don't await it yet.
-    task = asyncio.create_task(
+    task = asyncio.create_task(run_tool(tool, tool_instruction))
     # Await the custom async progress bar that awaits the task.
     await async_alive_bar(task)
 
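The refactored `process_tool()` wraps any tool call in the same task-plus-progress-bar pattern: create the task without awaiting it, let a progress helper poll it, then collect the result. A simplified, runnable sketch of that pattern (`slow_tool` and `show_progress` are hypothetical stand-ins for `run_tool` and `async_alive_bar`; plain `print` stands in for the progress bar):

```python
import asyncio

async def slow_tool(tool, tool_instruction):
    # Stand-in for run_tool(); the real coroutine calls client.call_tool() or agentmake().
    await asyncio.sleep(1)
    return f"{tool} finished: {tool_instruction}"

async def show_progress(task):
    # Same shape as async_alive_bar(): poll the task and yield control so it can run.
    while not task.done():
        print(".", end="", flush=True)   # the real code advances a progress bar here
        await asyncio.sleep(0.01)
    print()
    return task.result()

async def process_tool(tool, tool_instruction, step_number=None):
    if step_number:
        print(f"# Starting Step [{step_number}]...")
    # Create the async task but don't await it yet; the progress helper awaits it.
    task = asyncio.create_task(slow_tool(tool, tool_instruction))
    return await show_progress(task)

print(asyncio.run(process_tool("retrieve_english_bible_verses", "John 3:16", step_number=1)))
```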
@@ -173,9 +176,12 @@ Get a static text-based response directly from a text-based AI model without usi
     ".chat": "enable chat mode",
     ".agent": "enable agent mode",
     ".tools": "list available tools",
+    #".resources": "list available resources",
+    #".prompts": "list available prompts",
+    ".backup": "backup conversation",
     ".open": "open a file or directory",
 }
-input_suggestions = list(action_list.keys())+prompt_list
+input_suggestions = list(action_list.keys())+[f"@{t} " for t in available_tools]+prompt_list
 user_request = await getInput("> ", input_suggestions)
 while not user_request.strip():
     user_request = await getInput("> ", input_suggestions)
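The widened `input_suggestions` list is what makes the new `@tool` syntax discoverable from the prompt's autocompletion. A small sketch of how the suggestion list is assembled (the values below are hypothetical; main.py uses its own `action_list`, MCP tool list, and prompt list):

```python
# Hypothetical inputs for illustration only.
action_list = {".new": "new conversation", ".quit": "quit", ".tools": "list available tools"}
available_tools = ["retrieve_bible_cross_references", "interpret_new_testament_verse"]
prompt_list = []

# Dot commands, "@tool " prefixes (note the trailing space), then MCP prompt names.
input_suggestions = list(action_list.keys()) + [f"@{t} " for t in available_tools] + prompt_list
print(input_suggestions)
# ['.new', '.quit', '.tools', '@retrieve_bible_cross_references ', '@interpret_new_testament_verse ']
```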
@@ -190,7 +196,9 @@ Get a static text-based response directly from a text-based AI model without usi
 
 # TODO: ui - radio list menu
 if user_request in action_list:
-    if user_request == ".
+    if user_request == ".backup":
+        backup()
+    elif user_request == ".tools":
         console.rule()
         tools_descriptions = [f"- `{name}`: {description}" for name, description in tools.items()]
         console.print(Markdown("## Available Tools\n\n"+"\n".join(tools_descriptions)))
@@ -222,14 +230,68 @@ Get a static text-based response directly from a text-based AI model without usi
 console.print(get_banner())
 continue
 
-#
-
-
-
-
+# Check if a single tool is specified
+specified_tool = ""
+if re.search(f"""^@({available_tools_pattern}) """, user_request):
+    specified_tool = re.search(f"""^@({available_tools_pattern}) """, user_request).group(1)
+    user_request = user_request[len(specified_tool)+2:]
+elif user_request.startswith("@@"):
+    specified_tool = "@@"
+    master_plan = user_request[2:].strip()
+    async def refine_custom_plan():
+        nonlocal messages, user_request, master_plan
+        # Prompt engineering
+        #master_plan = agentmake(messages if messages else master_plan, follow_up_prompt=master_plan if messages else None, tool="improve_prompt", **AGENTMAKE_CONFIG)[-1].get("content", "").strip()[20:-4]
+        # Summarize user request in one-sentence instruction
+        user_request = agentmake(master_plan, tool="biblemate/summarize_task_instruction", **AGENTMAKE_CONFIG)[-1].get("content", "").strip()[15:-4]
+    await thinking(refine_custom_plan)
+    # display info
+    console.print(Markdown(f"# User Request\n\n{user_request}\n\n# Master plan\n\n{master_plan}"))
+
+# Prompt Engineering
+if not specified_tool == "@@":
+    async def run_prompt_engineering():
+        nonlocal user_request
+        user_request = agentmake(messages if messages else user_request, follow_up_prompt=user_request if messages else None, tool="improve_prompt", **AGENTMAKE_CONFIG)[-1].get("content", "").strip()[20:-4]
+    await thinking(run_prompt_engineering, "Prompt Engineering ...")
+
+if not messages:
+    messages = [
+        {"role": "system", "content": "You are BibleMate, an autonomous AI agent."},
+        {"role": "user", "content": user_request},
+    ]
+else:
+    messages.append({"role": "user", "content": user_request})
+
+async def run_tool(tool, tool_instruction):
+    nonlocal messages
+    if tool == "get_direct_text_response":
+        messages = agentmake(messages, system="auto", **AGENTMAKE_CONFIG)
+    else:
+        try:
+            tool_schema = tools_schema[tool]
+            tool_properties = tool_schema["parameters"]["properties"]
+            if len(tool_properties) == 1 and "request" in tool_properties: # AgentMake MCP Servers or alike
+                tool_result = await client.call_tool(tool, {"request": tool_instruction})
+            else:
+                structured_output = getDictionaryOutput(messages=messages, schema=tool_schema)
+                tool_result = await client.call_tool(tool, structured_output)
+            tool_result = tool_result.content[0].text
+            messages[-1]["content"] += f"\n\n[Using tool `{tool}`]"
+            messages.append({"role": "assistant", "content": tool_result if tool_result.strip() else "Tool error!"})
+        except Exception as e:
+            if DEVELOPER_MODE:
+                console.print(f"Error: {e}\nFallback to direct response...\n\n")
+            messages = agentmake(messages, system="auto", **AGENTMAKE_CONFIG)
+
+# user specify a single tool
+if specified_tool and not specified_tool == "@@":
+    await process_tool(specified_tool, user_request)
+    console.print(Markdown(f"# User Request\n\n{messages[-2]['content']}\n\n# AI Response\n\n{messages[-1]['content']}"))
+    continue
 
 # Chat mode
-if not config.agent_mode:
+if not config.agent_mode and not specified_tool == "@@":
     async def run_chat_mode():
         nonlocal messages, user_request
         messages = agentmake(messages if messages else user_request, system="auto", **AGENTMAKE_CONFIG)
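Inside the new `run_tool()`, the argument-building rule is the interesting part: a tool whose schema exposes a single `request` property receives the raw instruction, while any other tool gets a structured argument dict extracted from the conversation (via `getDictionaryOutput` in main.py). A standalone sketch of that rule (`structured_extractor` is a hypothetical stand-in for the model-backed extraction):

```python
def build_tool_arguments(tool_schema, tool_instruction, structured_extractor):
    """Return the argument dict that client.call_tool() would receive."""
    properties = tool_schema["parameters"]["properties"]
    if len(properties) == 1 and "request" in properties:  # AgentMake MCP servers and alike
        return {"request": tool_instruction}
    # Otherwise derive structured arguments that match the schema
    # (main.py calls getDictionaryOutput(messages=..., schema=tool_schema) here).
    return structured_extractor(tool_schema)

# Toy demonstration with a single-parameter schema:
schema = {
    "name": "retrieve_bible_cross_references",
    "parameters": {"properties": {"request": {"type": "string"}}},
}
print(build_tool_arguments(schema, "Deut 6:4; John 3:16", structured_extractor=lambda s: {}))
# {'request': 'Deut 6:4; John 3:16'}
```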
@@ -237,30 +299,35 @@ Get a static text-based response directly from a text-based AI model without usi
     console.print(Markdown(f"# User Request\n\n{messages[-2]['content']}\n\n# AI Response\n\n{messages[-1]['content']}"))
     continue
 
-
-
-
-
-
-
-
-
+# agent mode
+
+# generate master plan
+if not master_plan:
+    if re.search(prompt_pattern, user_request):
+        prompt_name = re.search(prompt_pattern, user_request).group(1)
+        user_request = user_request[len(prompt_name):]
+        # Call the MCP prompt
+        prompt_schema = prompts_schema[prompt_name[1:]]
+        prompt_properties = prompt_schema["parameters"]["properties"]
+        if len(prompt_properties) == 1 and "request" in prompt_properties: # AgentMake MCP Servers or alike
+            result = await client.get_prompt(prompt_name[1:], {"request": user_request})
+        else:
+            structured_output = getDictionaryOutput(messages=messages, schema=prompt_schema)
+            result = await client.get_prompt(prompt_name[1:], structured_output)
+        #print(result, "\n\n")
+        master_plan = result.messages[0].content.text
+        # display info# display info
+        console.print(Markdown(f"# User Request\n\n{user_request}\n\n# Master plan\n\n{master_plan}"))
+        console.print(Markdown(f"# User Request\n\n{user_request}\n\n# Master plan\n\n{master_plan}"))
     else:
-
-
-
-
-
-
-
-
-        console.print(Markdown(f"# User Request\n\n{user_request}"), "\n")
-        # Generate master plan
-        master_plan = ""
-        async def generate_master_plan():
-            nonlocal master_plan
-            # Create initial prompt to create master plan
-            initial_prompt = f"""Provide me with the `Preliminary Action Plan` and the `Measurable Outcome` for resolving `My Request`.
+        # display info
+        console.print(Markdown(f"# User Request\n\n{user_request}"), "\n")
+        # Generate master plan
+        master_plan = ""
+        async def generate_master_plan():
+            nonlocal master_plan
+            # Create initial prompt to create master plan
+            initial_prompt = f"""Provide me with the `Preliminary Action Plan` and the `Measurable Outcome` for resolving `My Request`.
 
 # Available Tools
 
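The reshaped agent-mode branch resolves the master plan from one of three sources: a plan supplied with `@@`, an MCP prompt named in the request, or a freshly generated plan. A simplified, runnable sketch of that branching, with hypothetical stand-ins (`prompt_pattern`, `get_plan_from_mcp_prompt`, `generate_plan_with_llm`) for the real `prompt_pattern`, `client.get_prompt()`, and `agentmake()` pieces:

```python
import asyncio
import re

prompt_pattern = r"^(@study_guide) "  # hypothetical; main.py derives this from the MCP prompt list

async def get_plan_from_mcp_prompt(prompt_name, request):
    # Stand-in for client.get_prompt(...); main.py reads result.messages[0].content.text
    return f"[plan from MCP prompt '{prompt_name}' for: {request}]"

async def generate_plan_with_llm(request):
    # Stand-in for agentmake(..., system="create_action_plan")
    return f"[generated plan for: {request}]"

async def resolve_master_plan(user_request, master_plan=""):
    if not master_plan:                               # nothing supplied via '@@'
        match = re.search(prompt_pattern, user_request)
        if match:                                     # an MCP prompt was named explicitly
            prompt_name = match.group(1)
            user_request = user_request[len(prompt_name):].strip()
            master_plan = await get_plan_from_mcp_prompt(prompt_name[1:], user_request)
        else:                                         # fall back to generating a plan
            master_plan = await generate_plan_with_llm(user_request)
    return master_plan

print(asyncio.run(resolve_master_plan("@study_guide John 3:16")))
```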
@@ -271,13 +338,14 @@ Available tools are: {available_tools}.
 # My Request
 
 {user_request}"""
-
-
-
-
-
-
-
+    console.print(Markdown("# Master plan"), "\n")
+    print()
+    master_plan = agentmake(messages+[{"role": "user", "content": initial_prompt}], system="create_action_plan", **AGENTMAKE_CONFIG)[-1].get("content", "").strip()
+await thinking(generate_master_plan)
+# display info
+console.print(Markdown(master_plan), "\n\n")
+
+# Step suggestion system message
 system_suggestion = get_system_suggestion(master_plan)
 
 # Tool selection systemm message
@@ -292,14 +360,6 @@ Available tools are: {available_tools}.
 await thinking(get_first_suggestion)
 console.print(Markdown(next_suggestion), "\n\n")
 
-if not messages:
-    messages = [
-        {"role": "system", "content": "You are BibleMate, an autonomous AI agent."},
-        {"role": "user", "content": user_request},
-    ]
-else:
-    messages.append({"role": "user", "content": user_request})
-
 step = 1
 while not ("DONE" in next_suggestion or re.sub("^[^A-Za-z]*?([A-Za-z]+?)[^A-Za-z]*?$", r"\1", next_suggestion).upper() == "DONE"):
 
@@ -343,28 +403,7 @@ Available tools are: {available_tools}.
 messages.append({"role": "assistant", "content": "Please provide me with an initial instruction to begin."})
 messages.append({"role": "user", "content": next_step})
 
-
-    nonlocal messages, next_tool, next_step
-    if next_tool == "get_direct_text_response":
-        messages = agentmake(messages, system="auto", **AGENTMAKE_CONFIG)
-    else:
-        try:
-            tool_schema = tools_schema[next_tool]
-            tool_properties = tool_schema["parameters"]["properties"]
-            if len(tool_properties) == 1 and "request" in tool_properties: # AgentMake MCP Servers or alike
-                tool_result = await client.call_tool(next_tool, {"request": next_step})
-            else:
-                structured_output = getDictionaryOutput(messages=messages, schema=tool_schema)
-                tool_result = await client.call_tool(next_tool, structured_output)
-            tool_result = tool_result.content[0].text
-            messages[-1]["content"] += f"\n\n[Using tool `{next_tool}`]"
-            messages.append({"role": "assistant", "content": tool_result if tool_result.strip() else "Done!"})
-        except Exception as e:
-            if DEVELOPER_MODE:
-                console.print(f"Error: {e}\nFallback to direct response...\n\n")
-            messages = agentmake(messages, system="auto", **AGENTMAKE_CONFIG)
-await process_step_async(step)
-
+await process_tool(next_tool, next_step, step_number=step)
 console.print(Markdown(f"\n## Output [{step}]\n\n{messages[-1]["content"]}"))
 
 # iteration count
@@ -382,6 +421,9 @@ Available tools are: {available_tools}.
 await thinking(get_next_suggestion)
 #print()
 console.print(Markdown(next_suggestion), "\n")
+
+if messages[-1].get("role") == "user":
+    messages.append({"role": "assistant", "content": next_suggestion})
 
 # Backup
 backup()
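The final main.py hunk's role check is a small consistency guard: the next-step suggestion is only appended as an assistant turn when the last message is a user turn, keeping the conversation roles alternating. A trivial illustration:

```python
messages = [{"role": "user", "content": "Study John 3:16"}]
next_suggestion = "Next, call tool 'retrieve_english_bible_verses'."

# Only append when the last turn was from the user, so roles keep alternating.
if messages[-1].get("role") == "user":
    messages.append({"role": "assistant", "content": next_suggestion})

print([m["role"] for m in messages])  # ['user', 'assistant']
```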
{biblemate-0.0.20 → biblemate-0.0.22}/biblemate.egg-info/PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: biblemate
-Version: 0.0.20
+Version: 0.0.22
 Summary: BibleMate AI - Automate Your Bible Study
 Home-page: https://toolmate.ai
 Author: Eliran Wong
@@ -88,17 +88,63 @@ How to swap?
 * Enter `.chat` in BibleMate AI prompt to enable chat mode and disable agent mode.
 * Enter `.agent` in BibleMate AI prompt to enable agent mode and disable chat mode.
 
-## 
+## Manual Tool Selection
+
+In some cases, you may want to specify a particular tool for a simple task, rather than having a tool automatically selected in the fully automatic `agent mode`.
+
+You can specify a single tool by prefixing a tool name with `@` at the beginning of your prompt. For example,
+
+```
+@retrieve_bible_cross_references Deut 6:4; John 3:16
+```
+
+Watch this video: https://youtu.be/50m1KRj6uhs
 
-
+## Custom Master Plan with Multiple Tools
 
-
+In some cases, you may want to specify a `custom plan` with multiple tools specified for different steps for a complex task, rather than having a `master plan` automatically generated in fully automatic agent mode.
 
-
+You can use a custom 'Master Plan' of your own, instead of one generated by BibleMate AI. To do this, start your BibleMate AI prompt with '@@' followed by your own master plan for a Bible study. For example,
 
-
+```
+@@ Analyze John 3:16 with the following steps:
+1. Call tool 'retrieve_english_bible_verses' for Bible text,
+2. Call tool 'retrieve_bible_cross_references' for Bible cross-references,
+3. Call tool 'interpret_new_testament_verse' for interpretation, and
+4. Call tool 'write_bible_theology' to explain its theology.
+```
+
+Watch this video: https://youtu.be/Lejq0sAx030
+
+The '@@' trick works even when you are using 'chat' mode with 'agent' mode disabled.
+
+## Action Menu
 
-
+There is a set of predefined entries, that starts with a dot sign `.`:
+
+- `.new` - new conversation
+- `.quit` - quit
+- `.backend` - change backend
+- `.chat` - enable chat mode
+- `.agent` - enable agent mode
+- `.tools` - list available tools
+- `.backup` - backup conversation
+- `.open` - open a file or directory, e.g. `.open /home/user/report.html`
+
+## Keyboard Shortcuts
+
+The following key bindings are supported in BibleMate AI prompt field:
+
+- `Ctrl+N` new conversation
+- `Ctrl+Q` quit
+- `Ctrl+C` copy selected prompt text
+- `Ctrl+V` paste text in a prompt
+- `Ctrl+I` or `TAB` new line
+- `Ctrl+Z` clear prompt text
+- `Esc+a` jump to the beginning of a prompt
+- `Esc+z` jump to the end of a prompt
+- `Esc+b` or `HOME` jump to the beginning of a line in a prompt
+- `Esc+e` or `END` jump to the end of a line in a prompt
 
 ## License
 
{biblemate-0.0.20 → biblemate-0.0.22}/setup.py

@@ -27,7 +27,7 @@ with open(os.path.join(package, "requirements.txt"), "r") as fileObj:
 # https://packaging.python.org/en/latest/guides/distributing-packages-using-setuptools/
 setup(
     name=package,
-    version="0.0.20",
+    version="0.0.22",
     python_requires=">=3.8, <3.13",
     description=f"BibleMate AI - Automate Your Bible Study",
     long_description=long_description,
File without changes
|
File without changes
|
File without changes
|
File without changes
|
File without changes
|
File without changes
|
File without changes
|
File without changes
|
File without changes
|
File without changes
|
File without changes
|
File without changes
|