vectara-agentic 0.1.15__tar.gz → 0.1.17__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.


Files changed (25)
  1. {vectara_agentic-0.1.15/vectara_agentic.egg-info → vectara_agentic-0.1.17}/PKG-INFO +69 -23
  2. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/README.md +61 -15
  3. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/requirements.txt +7 -7
  4. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/setup.py +1 -1
  5. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/vectara_agentic/__init__.py +2 -1
  6. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/vectara_agentic/_callback.py +0 -1
  7. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/vectara_agentic/_observability.py +18 -16
  8. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/vectara_agentic/_prompts.py +13 -8
  9. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/vectara_agentic/agent.py +105 -65
  10. vectara_agentic-0.1.17/vectara_agentic/agent_endpoint.py +63 -0
  11. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/vectara_agentic/tools.py +69 -65
  12. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/vectara_agentic/tools_catalog.py +24 -21
  13. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/vectara_agentic/types.py +0 -1
  14. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/vectara_agentic/utils.py +16 -11
  15. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17/vectara_agentic.egg-info}/PKG-INFO +69 -23
  16. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/vectara_agentic.egg-info/SOURCES.txt +1 -0
  17. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/vectara_agentic.egg-info/requires.txt +7 -7
  18. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/LICENSE +0 -0
  19. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/MANIFEST.in +0 -0
  20. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/setup.cfg +0 -0
  21. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/tests/__init__.py +0 -0
  22. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/tests/test_agent.py +0 -0
  23. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/tests/test_tools.py +0 -0
  24. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/vectara_agentic.egg-info/dependency_links.txt +0 -0
  25. {vectara_agentic-0.1.15 → vectara_agentic-0.1.17}/vectara_agentic.egg-info/top_level.txt +0 -0
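To pick up the changes listed above, upgrade the package with `pip install --upgrade vectara-agentic` and confirm the installed release. A minimal sketch of the version check (the `is_at_least` helper is written for this example, not part of the package):

```python
# Confirm the installed vectara-agentic release after upgrading.
from importlib.metadata import PackageNotFoundError, version

def is_at_least(installed: str, required: str) -> bool:
    """Return True if `installed` >= `required`, comparing numeric components."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(required)

try:
    installed = version("vectara-agentic")
    print(installed, is_at_least(installed, "0.1.17"))
except PackageNotFoundError:
    print("vectara-agentic is not installed")
```

Tuple comparison handles multi-digit components correctly (e.g. "0.1.17" > "0.1.9"), which naive string comparison would get wrong.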
PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: vectara_agentic
-Version: 0.1.15
+Version: 0.1.17
 Summary: A Python package for creating AI Assistants and AI Agents with Vectara
 Home-page: https://github.com/vectara/py-vectara-agentic
 Author: Ofer Mendelevitch
@@ -16,25 +16,25 @@ Classifier: Topic :: Software Development :: Libraries :: Python Modules
 Requires-Python: >=3.10
 Description-Content-Type: text/markdown
 License-File: LICENSE
-Requires-Dist: llama-index==0.11.13
+Requires-Dist: llama-index==0.11.20
 Requires-Dist: llama-index-indices-managed-vectara==0.2.2
 Requires-Dist: llama-index-agent-llm-compiler==0.2.0
 Requires-Dist: llama-index-agent-openai==0.3.4
-Requires-Dist: llama-index-llms-openai==0.2.9
-Requires-Dist: llama-index-llms-anthropic==0.3.1
+Requires-Dist: llama-index-llms-openai==0.2.16
+Requires-Dist: llama-index-llms-anthropic==0.3.7
 Requires-Dist: llama-index-llms-together==0.2.0
 Requires-Dist: llama-index-llms-groq==0.2.0
-Requires-Dist: llama-index-llms-fireworks==0.2.0
-Requires-Dist: llama-index-llms-cohere==0.3.0
-Requires-Dist: llama-index-llms-gemini==0.3.5
+Requires-Dist: llama-index-llms-fireworks==0.2.2
+Requires-Dist: llama-index-llms-cohere==0.3.1
+Requires-Dist: llama-index-llms-gemini==0.3.7
 Requires-Dist: llama-index-tools-yahoo-finance==0.2.0
 Requires-Dist: llama-index-tools-arxiv==0.2.0
 Requires-Dist: llama-index-tools-database==0.2.0
 Requires-Dist: llama-index-tools-google==0.2.0
 Requires-Dist: llama-index-tools-tavily_research==0.2.0
+Requires-Dist: llama-index-tools-neo4j==0.2.0
 Requires-Dist: tavily-python==0.5.0
 Requires-Dist: yahoo-finance==1.4.0
-Requires-Dist: llama-index-tools-neo4j==0.2.0
 Requires-Dist: openinference-instrumentation-llama-index==3.0.2
 Requires-Dist: arize-phoenix==4.35.1
 Requires-Dist: arize-phoenix-otel==0.5.1
@@ -68,13 +68,29 @@ Requires-Dist: dill==0.3.8
 
 ## ✨ Overview
 
-`vectara-agentic` is a Python library for developing powerful AI assistants using Vectara and Agentic-RAG. It leverages the LlamaIndex Agent framework, customized for use with Vectara.
+`vectara-agentic` is a Python library for developing powerful AI assistants and agents using Vectara and Agentic-RAG. It leverages the LlamaIndex Agent framework, customized for use with Vectara.
+
+<p align="center">
+  <img src="https://raw.githubusercontent.com/vectara/py-vectara-agentic/main/.github/assets/diagram1.png" alt="Agentic RAG diagram" width="100%" style="vertical-align: middle;">
+</p>
 
-### Key Features
+### Features
 
+- Enables easy creation of custom AI assistants and agents.
+- Create a Vectara RAG tool with a single line of code.
 - Supports `ReAct`, `OpenAIAgent` and `LLMCompiler` agent types.
 - Includes pre-built tools for various domains (e.g., finance, legal).
-- Enables easy creation of custom AI assistants and agents.
+- Integrates with various LLM inference services like OpenAI, Anthropic, Gemini, GROQ, Together.AI, Cohere and Fireworks
+- Built-in support for observability with Arize Phoenix
+
+### 📚 Example AI Assistants
+
+Check out our example AI assistants:
+
+- [Financial Assistant](https://huggingface.co/spaces/vectara/finance-chat)
+- [Justice Harvard Teaching Assistant](https://huggingface.co/spaces/vectara/Justice-Harvard)
+- [Legal Assistant](https://huggingface.co/spaces/vectara/legal-agent)
+
 
 ### Prerequisites
 
@@ -197,7 +213,15 @@ When creating a VectaraToolFactory, you can pass in a `vectara_api_key`, `vectar
 
 ## ℹ️ Additional Information
 
-### Agent Diagnostics
+### About Custom Instructions for your Agent
+
+The custom instructions you provide to the agent guide its behavior.
+Here are some guidelines when creating your instructions:
+- Write precise and clear instructions, without overcomplicating.
+- Consider edge cases and unusual or atypical scenarios.
+- Be cautious to not over-specify behavior based on your primary use-case, as it may limit the agent's ability to behave properly in others.
+
+### Diagnostics
 
 The `Agent` class defines a few helpful methods to help you understand the internals of your application.
 * The `report()` method prints out the agent object’s type, the tools, and the LLMs used for the main agent and tool calling.
@@ -224,21 +248,43 @@ Then you can use Arize Phoenix in three ways:
 Now when you run your agent, all call traces are sent to Phoenix and recorded.
 In addition, vectara-agentic also records `FCS` (factual consistency score, aka HHEM) values into Arize for every Vectara RAG call. You can see those results in the `Feedback` column of the arize UI.
 
-### About Custom Instructions
+## 🌐 API Endpoint
 
-The custom instructions you provide to the agent guide its behavior.
-Here are some guidelines when creating your instructions:
-- Write precise and clear instructions, without overcomplicating.
-- Consider edge cases and unusual or atypical scenarios.
-- Be cautious to not over-specify behavior based on your primary use-case, as it may limit the agent's ability to behave properly in others.
+`vectara-agentic` can be easily hosted locally or on a remote machine behind an API endpoint, by following theses steps:
 
-## 📚 Examples
+### Step 1: Setup your API key
+Ensure that you have your API key set up as an environment variable:
 
-Check out our example AI assistants:
+```
+export VECTARA_AGENTIC_API_KEY=<YOUR-ENDPOINT-API-KEY>
+```
 
-- [Financial Assistant](https://huggingface.co/spaces/vectara/finance-chat)
-- [Justice Harvard Teaching Assistant](https://huggingface.co/spaces/vectara/Justice-Harvard)
-- [Legal Assistant](https://huggingface.co/spaces/vectara/legal-agent)
+### Step 2: Start the API Server
+Initialize the agent and start the FastAPI server by following this example:
+
+
+```
+from agent import Agent
+from agent_endpoint import start_app
+agent = Agent(...) # Initialize your agent with appropriate parameters
+start_app(agent)
+```
+
+You can customize the host and port by passing them as arguments to `start_app()`:
+* Default: host="0.0.0.0" and port=8000.
+For example:
+```
+start_app(agent, host="0.0.0.0", port=8000)
+```
+
+### Step 3: Access the API Endpoint
+Once the server is running, you can interact with it using curl or any HTTP client. For example:
+
+```
+curl -G "http://<remote-server-ip>:8000/chat" \
+  --data-urlencode "message=What is Vectara?" \
+  -H "X-API-Key: <YOUR-API-KEY>"
+```
 
 ## 🤝 Contributing
 
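The `/chat` endpoint shown in the curl example above is a plain HTTP GET with a query string and an `X-API-Key` header, so it can be reached from any HTTP client. As a sketch using only the standard library (the endpoint shape is taken from the README in this diff; the `build_chat_request` helper is ours, for illustration), this builds the equivalent request without sending it:

```python
# Build (without sending) the same GET request the README's curl example issues.
from urllib.parse import urlencode
from urllib.request import Request

def build_chat_request(base_url: str, message: str, api_key: str) -> Request:
    # urlencode percent-encodes the message ('?' -> %3F, spaces -> '+')
    query = urlencode({"message": message})
    return Request(f"{base_url}/chat?{query}", headers={"X-API-Key": api_key})

req = build_chat_request("http://localhost:8000", "What is Vectara?", "my-key")
print(req.full_url)  # http://localhost:8000/chat?message=What+is+Vectara%3F
```

Passing the request to `urllib.request.urlopen` would then perform the actual call against a running server.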
README.md
@@ -20,13 +20,29 @@
 
 ## ✨ Overview
 
-`vectara-agentic` is a Python library for developing powerful AI assistants using Vectara and Agentic-RAG. It leverages the LlamaIndex Agent framework, customized for use with Vectara.
+`vectara-agentic` is a Python library for developing powerful AI assistants and agents using Vectara and Agentic-RAG. It leverages the LlamaIndex Agent framework, customized for use with Vectara.
 
-### Key Features
+<p align="center">
+  <img src="https://raw.githubusercontent.com/vectara/py-vectara-agentic/main/.github/assets/diagram1.png" alt="Agentic RAG diagram" width="100%" style="vertical-align: middle;">
+</p>
+
+### Features
 
+- Enables easy creation of custom AI assistants and agents.
+- Create a Vectara RAG tool with a single line of code.
 - Supports `ReAct`, `OpenAIAgent` and `LLMCompiler` agent types.
 - Includes pre-built tools for various domains (e.g., finance, legal).
-- Enables easy creation of custom AI assistants and agents.
+- Integrates with various LLM inference services like OpenAI, Anthropic, Gemini, GROQ, Together.AI, Cohere and Fireworks
+- Built-in support for observability with Arize Phoenix
+
+### 📚 Example AI Assistants
+
+Check out our example AI assistants:
+
+- [Financial Assistant](https://huggingface.co/spaces/vectara/finance-chat)
+- [Justice Harvard Teaching Assistant](https://huggingface.co/spaces/vectara/Justice-Harvard)
+- [Legal Assistant](https://huggingface.co/spaces/vectara/legal-agent)
+
 
 ### Prerequisites
 
@@ -149,7 +165,15 @@ When creating a VectaraToolFactory, you can pass in a `vectara_api_key`, `vectar
 
 ## ℹ️ Additional Information
 
-### Agent Diagnostics
+### About Custom Instructions for your Agent
+
+The custom instructions you provide to the agent guide its behavior.
+Here are some guidelines when creating your instructions:
+- Write precise and clear instructions, without overcomplicating.
+- Consider edge cases and unusual or atypical scenarios.
+- Be cautious to not over-specify behavior based on your primary use-case, as it may limit the agent's ability to behave properly in others.
+
+### Diagnostics
 
 The `Agent` class defines a few helpful methods to help you understand the internals of your application.
 * The `report()` method prints out the agent object’s type, the tools, and the LLMs used for the main agent and tool calling.
@@ -176,21 +200,43 @@ Then you can use Arize Phoenix in three ways:
 Now when you run your agent, all call traces are sent to Phoenix and recorded.
 In addition, vectara-agentic also records `FCS` (factual consistency score, aka HHEM) values into Arize for every Vectara RAG call. You can see those results in the `Feedback` column of the arize UI.
 
-### About Custom Instructions
+## 🌐 API Endpoint
 
-The custom instructions you provide to the agent guide its behavior.
-Here are some guidelines when creating your instructions:
-- Write precise and clear instructions, without overcomplicating.
-- Consider edge cases and unusual or atypical scenarios.
-- Be cautious to not over-specify behavior based on your primary use-case, as it may limit the agent's ability to behave properly in others.
+`vectara-agentic` can be easily hosted locally or on a remote machine behind an API endpoint, by following theses steps:
 
-## 📚 Examples
+### Step 1: Setup your API key
+Ensure that you have your API key set up as an environment variable:
 
-Check out our example AI assistants:
+```
+export VECTARA_AGENTIC_API_KEY=<YOUR-ENDPOINT-API-KEY>
+```
 
-- [Financial Assistant](https://huggingface.co/spaces/vectara/finance-chat)
-- [Justice Harvard Teaching Assistant](https://huggingface.co/spaces/vectara/Justice-Harvard)
-- [Legal Assistant](https://huggingface.co/spaces/vectara/legal-agent)
+### Step 2: Start the API Server
+Initialize the agent and start the FastAPI server by following this example:
+
+
+```
+from agent import Agent
+from agent_endpoint import start_app
+agent = Agent(...) # Initialize your agent with appropriate parameters
+start_app(agent)
+```
+
+You can customize the host and port by passing them as arguments to `start_app()`:
+* Default: host="0.0.0.0" and port=8000.
+For example:
+```
+start_app(agent, host="0.0.0.0", port=8000)
+```
+
+### Step 3: Access the API Endpoint
+Once the server is running, you can interact with it using curl or any HTTP client. For example:
+
+```
+curl -G "http://<remote-server-ip>:8000/chat" \
+  --data-urlencode "message=What is Vectara?" \
+  -H "X-API-Key: <YOUR-API-KEY>"
+```
 
 ## 🤝 Contributing
 
requirements.txt
@@ -1,22 +1,22 @@
-llama-index==0.11.13
+llama-index==0.11.20
 llama-index-indices-managed-vectara==0.2.2
 llama-index-agent-llm-compiler==0.2.0
 llama-index-agent-openai==0.3.4
-llama-index-llms-openai==0.2.9
-llama-index-llms-anthropic==0.3.1
+llama-index-llms-openai==0.2.16
+llama-index-llms-anthropic==0.3.7
 llama-index-llms-together==0.2.0
 llama-index-llms-groq==0.2.0
-llama-index-llms-fireworks==0.2.0
-llama-index-llms-cohere==0.3.0
-llama-index-llms-gemini==0.3.5
+llama-index-llms-fireworks==0.2.2
+llama-index-llms-cohere==0.3.1
+llama-index-llms-gemini==0.3.7
 llama-index-tools-yahoo-finance==0.2.0
 llama-index-tools-arxiv==0.2.0
 llama-index-tools-database==0.2.0
 llama-index-tools-google==0.2.0
 llama-index-tools-tavily_research==0.2.0
+llama-index-tools-neo4j==0.2.0
 tavily-python==0.5.0
 yahoo-finance==1.4.0
-llama-index-tools-neo4j==0.2.0
 openinference-instrumentation-llama-index==3.0.2
 arize-phoenix==4.35.1
 arize-phoenix-otel==0.5.1
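Pin changes like the ones in the `requirements.txt` hunk above are mechanical, so they can be checked programmatically. A small sketch (the `parse_pins`/`diff_pins` helpers are ours, not part of the package) that parses two pinned requirement lists and reports which pins changed between releases:

```python
# Compare two requirements.txt-style pin lists and report changed versions.
def parse_pins(text: str) -> dict:
    """Map package name -> pinned version for 'name==version' lines."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if line and "==" in line:
            name, version = line.split("==", 1)
            pins[name] = version
    return pins

def diff_pins(old: str, new: str) -> dict:
    """Map package name -> (old version, new version) for changed pins only."""
    old_p, new_p = parse_pins(old), parse_pins(new)
    return {
        name: (old_p.get(name), new_p.get(name))
        for name in sorted(set(old_p) | set(new_p))
        if old_p.get(name) != new_p.get(name)
    }

print(diff_pins("llama-index==0.11.13", "llama-index==0.11.20"))
# {'llama-index': ('0.11.13', '0.11.20')}
```

Feeding it the full 0.1.15 and 0.1.17 requirement lists would reproduce the `-`/`+` pairs shown in the hunk.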
setup.py
@@ -8,7 +8,7 @@ def read_requirements():
 
 setup(
     name="vectara_agentic",
-    version="0.1.15",
+    version="0.1.17",
     author="Ofer Mendelevitch",
     author_email="ofer@vectara.com",
     description="A Python package for creating AI Assistants and AI Agents with Vectara",
vectara_agentic/__init__.py
@@ -3,7 +3,7 @@ vectara_agentic package.
 """
 
 # Define the package version
-__version__ = "0.1.15"
+__version__ = "0.1.17"
 
 # Import classes and functions from modules
 # from .module1 import Class1, function1
@@ -12,6 +12,7 @@ __version__ = "0.1.15"
 
 # Any initialization code
 def initialize_package():
+    """print a message when the package is initialized."""
     print(f"Initializing vectara-agentic version {__version__}...")
 
 
vectara_agentic/_callback.py
@@ -66,7 +66,6 @@ class AgentCallbackHandler(BaseCallbackHandler):
 
     def _handle_agent_step(self, payload: dict) -> None:
        """Calls self.fn() with the information about agent step."""
-        print(f"Handling agent step: {payload}")
         if EventPayload.MESSAGES in payload:
             msg = str(payload.get(EventPayload.MESSAGES))
             if self.fn:
vectara_agentic/_observability.py
@@ -1,19 +1,21 @@
+"""
+Observability for Vectara Agentic.
+"""
 import os
 import json
+from typing import Optional, Union
 import pandas as pd
-
 from .types import ObserverType
 
 def setup_observer() -> bool:
    '''
    Setup the observer.
    '''
+    import phoenix as px
+    from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
+    from phoenix.otel import register
     observer = ObserverType(os.getenv("VECTARA_AGENTIC_OBSERVER_TYPE", "NO_OBSERVER"))
     if observer == ObserverType.ARIZE_PHOENIX:
-        import phoenix as px
-        from phoenix.otel import register
-        from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
-
         phoenix_endpoint = os.getenv("PHOENIX_ENDPOINT", None)
         if not phoenix_endpoint:
             px.launch_app()
@@ -21,7 +23,7 @@ def setup_observer() -> bool:
         elif 'app.phoenix.arize.com' in phoenix_endpoint:  # hosted on Arizze
             phoenix_api_key = os.getenv("PHOENIX_API_KEY", None)
             if not phoenix_api_key:
-                raise Exception("Arize Phoenix API key not set. Please set PHOENIX_API_KEY environment variable.")
+                raise ValueError("Arize Phoenix API key not set. Please set PHOENIX_API_KEY environment variable.")
             os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={phoenix_api_key}"
             os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"
             tracer_provider = register(endpoint=phoenix_endpoint, project_name="vectara-agentic")
@@ -29,12 +31,11 @@ def setup_observer() -> bool:
             tracer_provider = register(endpoint=phoenix_endpoint, project_name="vectara-agentic")
         LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
         return True
-    else:
-        print("No observer set.")
-        return False
+    print("No observer set.")
+    return False
 
 
-def _extract_fcs_value(output):
+def _extract_fcs_value(output: Union[str, dict]) -> Optional[float]:
    '''
    Extract the FCS value from the output.
    '''
@@ -49,7 +50,7 @@ def _extract_fcs_value(output):
     return None
 
 
-def _find_top_level_parent_id(row, all_spans):
+def _find_top_level_parent_id(row: pd.Series, all_spans: pd.DataFrame) -> Optional[str]:
    '''
    Find the top level parent id for the given span.
    '''
@@ -67,14 +68,13 @@ def _find_top_level_parent_id(row, all_spans):
     return current_id
 
 
-def eval_fcs():
+def eval_fcs() -> None:
    '''
    Evaluate the FCS score for the VectaraQueryEngine._query span.
    '''
+    import phoenix as px
     from phoenix.trace.dsl import SpanQuery
     from phoenix.trace import SpanEvaluations
-    import phoenix as px
-
     query = SpanQuery().select(
         "output.value",
         "parent_id",
@@ -83,8 +83,10 @@ def eval_fcs():
     client = px.Client()
     all_spans = client.query_spans(query, project_name="vectara-agentic")
     vectara_spans = all_spans[all_spans['name'] == 'VectaraQueryEngine._query'].copy()
-    vectara_spans['top_level_parent_id'] = vectara_spans.apply(lambda row: _find_top_level_parent_id(row, all_spans), axis=1)
-    vectara_spans['score'] = vectara_spans['output.value'].apply(lambda x: _extract_fcs_value(x))
+    vectara_spans['top_level_parent_id'] = vectara_spans.apply(
+        lambda row: _find_top_level_parent_id(row, all_spans), axis=1
+    )
+    vectara_spans['score'] = vectara_spans['output.value'].apply(_extract_fcs_value)
 
     vectara_spans.reset_index(inplace=True)
     top_level_spans = vectara_spans.copy()
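The hunks above annotate `_extract_fcs_value` as taking `Union[str, dict]` and returning `Optional[float]`: a span's output value may arrive as a JSON string or as an already-parsed dict, and extraction must fail soft. A minimal sketch of a helper with that contract (the `fcs` field name and the function body here are our assumptions for illustration, not the package's actual implementation):

```python
# Hypothetical sketch: pull a factual-consistency score out of a span's
# output value, which may be a JSON string or an already-parsed dict.
import json
from typing import Optional, Union

def extract_fcs_value(output: Union[str, dict]) -> Optional[float]:
    """Return the FCS as a float, or None if absent or malformed."""
    try:
        data = json.loads(output) if isinstance(output, str) else output
        fcs = data.get("fcs")  # assumed field name, for illustration only
        return float(fcs) if fcs is not None else None
    except (json.JSONDecodeError, TypeError, ValueError):
        return None

print(extract_fcs_value('{"fcs": 0.87}'))  # 0.87
```

Returning `None` on malformed input matches how the diff uses the helper: it is applied across a whole pandas column, where a raised exception would abort the entire `eval_fcs` pass.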
vectara_agentic/_prompts.py
@@ -7,10 +7,10 @@ GENERAL_INSTRUCTIONS = """
 - Use tools as your main source of information, do not respond without using a tool. Do not respond based on pre-trained knowledge.
 - When using a tool with arguments, simplify the query as much as possible if you use the tool with arguments.
   For example, if the original query is "revenue for apple in 2021", you can use the tool with a query "revenue" with arguments year=2021 and company=apple.
-- If you can't answer the question with the information provided by the tools, try to rephrase the question and call a tool again,
+- If you can't answer the question with the information provided by a tool, try to rephrase the question and call the tool again,
   or break the question into sub-questions and call a tool for each sub-question, then combine the answers to provide a complete response.
   For example if asked "what is the population of France and Germany", you can call the tool twice, once for each country.
-- If a query tool provides citations or references in markdown as part of its response, include the citations in your response.
+- If a query tool provides citations or references in markdown as part of its response, include the references in your response.
 - When providing links in your response, where possible put the name of the website or source of information for the displayed text. Don't just say 'source'.
 - If after retrying you can't get the information or answer the question, respond with "I don't know".
 - Your response should never be the input to a tool, only the output.
@@ -21,6 +21,14 @@ GENERAL_INSTRUCTIONS = """
 - If including latex equations in the markdown response, make sure the equations are on a separate line and enclosed in double dollar signs.
 - Always respond in the language of the question, and in text (no images, videos or code).
 - Always call the "get_bad_topics" tool to determine the topics you are not allowed to discuss or respond to.
+- If you are provided with database tools use them for analytical queries (such as counting, calculating max, min, average, sum, or other statistics).
+  For each database, the database tools include: x_list_tables, x_load_data, x_describe_tables, and x_load_sample_data, where 'x' in the database name.
+  The x_list_tables tool provides a list of available tables in the x database.
+  Always use the x_describe_tables tool to understand the schema of each table, before you access data from that table.
+  Always use the x_load_sample_data tool to understand the column names, and the unique values in each column, so you can use them in your queries.
+  Some times the user may ask for a specific column value, but the actual value in the table may be different, and you will need to use the correct value.
+- Never call x_load_data to retrieve values from each row in the table.
+- Do not mention table names or database names in your response.
 """
 
 #
@@ -65,10 +73,7 @@ IMPORTANT - FOLLOW THESE INSTRUCTIONS CAREFULLY:
 {INSTRUCTIONS}
 {custom_instructions}
 
-## Input
-The user will specify a task or a question in text.
-
-### Output Format
+## Output Format
 
 Please answer in the same language as the question and use the following format:
 
@@ -95,12 +100,12 @@ At that point, you MUST respond in the one of the following two formats (and do
 
 ```
 Thought: I can answer without using any more tools. I'll use the user's language to answer
-Answer: [your answer here (In the same language as the user's question, and maintain any references/citations)]
+Answer: [your answer here (In the same language as the user's question, and maintain any references)]
 ```
 
 ```
 Thought: I cannot answer the question with the provided tools.
-Answer: [your answer here (In the same language as the user's question, and maintain any references/citations)]
+Answer: [your answer here (In the same language as the user's question)]
 ```
 
 ## Current Conversation