vectara-agentic 0.1.28__tar.gz → 0.2.1__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.


Files changed (30)
  1. {vectara_agentic-0.1.28/vectara_agentic.egg-info → vectara_agentic-0.2.1}/PKG-INFO +32 -5
  2. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/README.md +29 -2
  3. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/requirements.txt +2 -2
  4. vectara_agentic-0.2.1/tests/endpoint.py +42 -0
  5. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/tests/test_agent.py +34 -20
  6. vectara_agentic-0.2.1/tests/test_private_llm.py +67 -0
  7. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/tests/test_tools.py +37 -14
  8. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic/_callback.py +46 -36
  9. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic/_prompts.py +3 -1
  10. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic/_version.py +1 -1
  11. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic/agent.py +152 -42
  12. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic/agent_config.py +9 -0
  13. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic/db_tools.py +2 -2
  14. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic/tools.py +91 -26
  15. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic/tools_catalog.py +1 -1
  16. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic/types.py +3 -2
  17. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic/utils.py +4 -0
  18. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1/vectara_agentic.egg-info}/PKG-INFO +32 -5
  19. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic.egg-info/SOURCES.txt +2 -0
  20. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic.egg-info/requires.txt +2 -2
  21. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/LICENSE +0 -0
  22. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/MANIFEST.in +0 -0
  23. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/setup.cfg +0 -0
  24. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/setup.py +0 -0
  25. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/tests/__init__.py +0 -0
  26. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic/__init__.py +0 -0
  27. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic/_observability.py +0 -0
  28. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic/agent_endpoint.py +0 -0
  29. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic.egg-info/dependency_links.txt +0 -0
  30. {vectara_agentic-0.1.28 → vectara_agentic-0.2.1}/vectara_agentic.egg-info/top_level.txt +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.2
  Name: vectara_agentic
- Version: 0.1.28
+ Version: 0.2.1
  Summary: A Python package for creating AI Assistants and AI Agents with Vectara
  Home-page: https://github.com/vectara/py-vectara-agentic
  Author: Ofer Mendelevitch
@@ -17,7 +17,7 @@ Requires-Python: >=3.10
  Description-Content-Type: text/markdown
  License-File: LICENSE
  Requires-Dist: llama-index==0.12.11
- Requires-Dist: llama-index-indices-managed-vectara==0.3.1
+ Requires-Dist: llama-index-indices-managed-vectara==0.4.0
  Requires-Dist: llama-index-agent-llm-compiler==0.3.0
  Requires-Dist: llama-index-agent-lats==0.3.0
  Requires-Dist: llama-index-agent-openai==0.4.3
@@ -51,7 +51,7 @@ Requires-Dist: pydantic==2.10.3
  Requires-Dist: retrying==1.3.4
  Requires-Dist: python-dotenv==1.0.1
  Requires-Dist: tiktoken==0.8.0
- Requires-Dist: dill>=0.3.7
+ Requires-Dist: cloudpickle>=3.1.1
  Requires-Dist: httpx==0.27.2
  Dynamic: author
  Dynamic: author-email
@@ -135,7 +135,7 @@ from vectara_agentic.tools import VectaraToolFactory
  vec_factory = VectaraToolFactory(
      vectara_api_key=os.environ['VECTARA_API_KEY'],
      vectara_customer_id=os.environ['VECTARA_CUSTOMER_ID'],
-     vectara_corpus_id=os.environ['VECTARA_CORPUS_ID']
+     vectara_corpus_key=os.environ['VECTARA_CORPUS_KEY']
  )
  ```

@@ -315,6 +315,10 @@ def mult_func(x, y):
  mult_tool = ToolsFactory().create_tool(mult_func)
  ```

+ Note: When you define your own Python functions as tools, implement them at the top module level,
+ and not as nested functions. Nested functions are not supported if you use serialization
+ (dumps/loads or from_dict/to_dict).
+
  ## 🛠️ Configuration

  ## Configuring Vectara-agentic
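The nested-function restriction added in the hunk above can be illustrated with a stdlib `pickle` sketch. This is not vectara-agentic code; it only shows why a locally defined function cannot be serialized by reference, while a module-level function can (cloudpickle can serialize some local functions by value, but the README note indicates the package's tool serialization path does not support them):

```python
import pickle

def mult(x, y):
    # Top-level (module-level) function: picklable by reference.
    return x * y

def make_nested():
    # Returns a function defined inside another function.
    def nested_mult(x, y):
        return x * y
    return nested_mult

# A module-level function survives a pickle round trip.
restored = pickle.loads(pickle.dumps(mult))
print(restored(5, 10))

# A nested (local) function cannot be pickled by reference.
try:
    pickle.dumps(make_nested())
    nested_picklable = True
except (AttributeError, pickle.PicklingError):
    nested_picklable = False
print("nested picklable:", nested_picklable)
```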
@@ -352,10 +356,31 @@ If any of these are not provided, `AgentConfig` first tries to read the values f

  When creating a `VectaraToolFactory`, you can pass in a `vectara_api_key`, `vectara_customer_id`, and `vectara_corpus_id` to the factory.

- If not passed in, it will be taken from the environment variables (`VECTARA_API_KEY`, `VECTARA_CUSTOMER_ID` and `VECTARA_CORPUS_ID`). Note that `VECTARA_CORPUS_ID` can be a single ID or a comma-separated list of IDs (if you want to query multiple corpora).
+ If not passed in, it will be taken from the environment variables (`VECTARA_API_KEY` and `VECTARA_CORPUS_KEY`). Note that `VECTARA_CORPUS_KEY` can be a single key or a comma-separated list of keys (if you want to query multiple corpora).

  These values will be used as credentials when creating Vectara tools - in `create_rag_tool()` and `create_search_tool()`.

+ ## Setting up a privately hosted LLM
+
+ If you want to set up vectara-agentic to use your own self-hosted LLM endpoint, follow the example below:
+
+ ```python
+ config = AgentConfig(
+     agent_type=AgentType.REACT,
+     main_llm_provider=ModelProvider.PRIVATE,
+     main_llm_model_name="meta-llama/Meta-Llama-3.1-8B-Instruct",
+     private_llm_api_base="http://vllm-server.company.com/v1",
+     private_llm_api_key="TEST_API_KEY",
+ )
+ agent = Agent(agent_config=config, tools=tools, topic=topic,
+               custom_instructions=custom_instructions)
+ ```
+
+ In this case we specify the main LLM provider to be privately hosted, with Llama-3.1-8B as the model.
+ - `ModelProvider.PRIVATE` specifies a privately hosted LLM.
+ - `private_llm_api_base` specifies the API endpoint to use, and `private_llm_api_key`
+   specifies the API key required to use this service.
+
  ## ℹ️ Additional Information

  ### About Custom Instructions for your Agent
@@ -376,6 +401,8 @@ The `Agent` class defines a few helpful methods to help you understand the inter

  The `Agent` class supports serialization. Use the `dumps()` to serialize and `loads()` to read back from a serialized stream.

+ Note: due to cloudpickle limitations, if a tool contains Python `weakref` objects, serialization won't work and an exception will be raised.
+
  ### Observability

  vectara-agentic supports observability via the existing integration of LlamaIndex and Arize Phoenix.
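The `weakref` limitation noted in this hunk is inherited from the pickle protocol that cloudpickle builds on; a minimal stdlib-only sketch (no vectara-agentic or cloudpickle imports needed to see the failure mode):

```python
import pickle
import weakref

class Node:
    """Minimal object to hold a weak reference to."""

obj = Node()

# pickle (and cloudpickle, which builds on it) cannot serialize weakref objects.
weakref_serializable = True
try:
    pickle.dumps(weakref.ref(obj))
except TypeError:
    weakref_serializable = False
print("weakref serializable:", weakref_serializable)
```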
@@ -68,7 +68,7 @@ from vectara_agentic.tools import VectaraToolFactory
  vec_factory = VectaraToolFactory(
      vectara_api_key=os.environ['VECTARA_API_KEY'],
      vectara_customer_id=os.environ['VECTARA_CUSTOMER_ID'],
-     vectara_corpus_id=os.environ['VECTARA_CORPUS_ID']
+     vectara_corpus_key=os.environ['VECTARA_CORPUS_KEY']
  )
  ```

@@ -248,6 +248,10 @@ def mult_func(x, y):
  mult_tool = ToolsFactory().create_tool(mult_func)
  ```

+ Note: When you define your own Python functions as tools, implement them at the top module level,
+ and not as nested functions. Nested functions are not supported if you use serialization
+ (dumps/loads or from_dict/to_dict).
+
  ## 🛠️ Configuration

  ## Configuring Vectara-agentic
@@ -285,10 +289,31 @@ If any of these are not provided, `AgentConfig` first tries to read the values f

  When creating a `VectaraToolFactory`, you can pass in a `vectara_api_key`, `vectara_customer_id`, and `vectara_corpus_id` to the factory.

- If not passed in, it will be taken from the environment variables (`VECTARA_API_KEY`, `VECTARA_CUSTOMER_ID` and `VECTARA_CORPUS_ID`). Note that `VECTARA_CORPUS_ID` can be a single ID or a comma-separated list of IDs (if you want to query multiple corpora).
+ If not passed in, it will be taken from the environment variables (`VECTARA_API_KEY` and `VECTARA_CORPUS_KEY`). Note that `VECTARA_CORPUS_KEY` can be a single key or a comma-separated list of keys (if you want to query multiple corpora).

  These values will be used as credentials when creating Vectara tools - in `create_rag_tool()` and `create_search_tool()`.

+ ## Setting up a privately hosted LLM
+
+ If you want to set up vectara-agentic to use your own self-hosted LLM endpoint, follow the example below:
+
+ ```python
+ config = AgentConfig(
+     agent_type=AgentType.REACT,
+     main_llm_provider=ModelProvider.PRIVATE,
+     main_llm_model_name="meta-llama/Meta-Llama-3.1-8B-Instruct",
+     private_llm_api_base="http://vllm-server.company.com/v1",
+     private_llm_api_key="TEST_API_KEY",
+ )
+ agent = Agent(agent_config=config, tools=tools, topic=topic,
+               custom_instructions=custom_instructions)
+ ```
+
+ In this case we specify the main LLM provider to be privately hosted, with Llama-3.1-8B as the model.
+ - `ModelProvider.PRIVATE` specifies a privately hosted LLM.
+ - `private_llm_api_base` specifies the API endpoint to use, and `private_llm_api_key`
+   specifies the API key required to use this service.
+
  ## ℹ️ Additional Information

  ### About Custom Instructions for your Agent
@@ -309,6 +334,8 @@ The `Agent` class defines a few helpful methods to help you understand the inter

  The `Agent` class supports serialization. Use the `dumps()` to serialize and `loads()` to read back from a serialized stream.

+ Note: due to cloudpickle limitations, if a tool contains Python `weakref` objects, serialization won't work and an exception will be raised.
+
  ### Observability

  vectara-agentic supports observability via the existing integration of LlamaIndex and Arize Phoenix.
@@ -1,5 +1,5 @@
  llama-index==0.12.11
- llama-index-indices-managed-vectara==0.3.1
+ llama-index-indices-managed-vectara==0.4.0
  llama-index-agent-llm-compiler==0.3.0
  llama-index-agent-lats==0.3.0
  llama-index-agent-openai==0.4.3
@@ -33,5 +33,5 @@ pydantic==2.10.3
  retrying==1.3.4
  python-dotenv==1.0.1
  tiktoken==0.8.0
- dill>=0.3.7
+ cloudpickle>=3.1.1
  httpx==0.27.2
@@ -0,0 +1,42 @@
+ from openai import OpenAI
+ from flask import Flask, request, jsonify
+ from functools import wraps
+
+ app = Flask(__name__)
+
+ # Set your OpenAI API key (ensure you've set this in your environment)
+
+ EXPECTED_API_KEY = "TEST_API_KEY"
+
+ def require_api_key(f):
+     @wraps(f)
+     def decorated_function(*args, **kwargs):
+         auth_header = request.headers.get("Authorization", "")
+         api_key = auth_header.split("Bearer ")[-1] if auth_header else None
+         if not api_key or api_key != EXPECTED_API_KEY:
+             return jsonify({"error": "Unauthorized"}), 401
+         return f(*args, **kwargs)
+     return decorated_function
+
+ @app.before_request
+ def log_request_info():
+     app.logger.info("Request received: %s %s", request.method, request.path)
+
+ @app.route("/v1/chat/completions", methods=["POST"])
+ @require_api_key
+ def chat_completions():
+     app.logger.info("Received request on /v1/chat/completions")
+     data = request.get_json()
+     if not data:
+         return jsonify({"error": "Invalid JSON payload"}), 400
+
+     client = OpenAI()
+     try:
+         completion = client.chat.completions.create(**data)
+         return jsonify(completion.model_dump()), 200
+     except Exception as e:
+         return jsonify({"error": str(e)}), 400
+
+
+ if __name__ == "__main__":
+     # Run on port 5000 by default; adjust as needed.
+     app.run(debug=True, port=5000)
@@ -1,9 +1,10 @@
  import unittest
  from datetime import date

- from vectara_agentic.agent import _get_prompt, Agent, AgentType, FunctionTool
+ from vectara_agentic.agent import _get_prompt, Agent, AgentType
  from vectara_agentic.agent_config import AgentConfig
  from vectara_agentic.types import ModelProvider, ObserverType
+ from vectara_agentic.tools import ToolsFactory

  class TestAgentPackage(unittest.TestCase):
      def test_get_prompt(self):
@@ -23,16 +24,11 @@ class TestAgentPackage(unittest.TestCase):
          def mult(x, y):
              return x * y

-         tools = [
-             FunctionTool.from_defaults(
-                 fn=mult, name="mult", description="Multiplication functions"
-             )
-         ]
+         tools = [ToolsFactory().create_tool(mult)]
          topic = "AI"
          custom_instructions = "Always do as your mother tells you!"
          agent = Agent(tools, topic, custom_instructions)
          self.assertEqual(agent.agent_type, AgentType.OPENAI)
-         self.assertEqual(agent.tools, tools)
          self.assertEqual(agent._topic, topic)
          self.assertEqual(agent._custom_instructions, custom_instructions)

@@ -40,7 +36,7 @@ class TestAgentPackage(unittest.TestCase):
          self.assertEqual(
              agent.chat(
                  "What is 5 times 10. Only give the answer, nothing else"
-             ).replace("$", "\\$"),
+             ).response.replace("$", "\\$"),
              "50",
          )

@@ -48,11 +44,7 @@ class TestAgentPackage(unittest.TestCase):
          def mult(x, y):
              return x * y

-         tools = [
-             FunctionTool.from_defaults(
-                 fn=mult, name="mult", description="Multiplication functions"
-             )
-         ]
+         tools = [ToolsFactory().create_tool(mult)]
          topic = "AI topic"
          instructions = "Always do as your father tells you, if your mother agrees!"
          config = AgentConfig(
@@ -70,7 +62,6 @@ class TestAgentPackage(unittest.TestCase):
              custom_instructions=instructions,
              agent_config=config
          )
-         self.assertEqual(agent.tools, tools)
          self.assertEqual(agent._topic, topic)
          self.assertEqual(agent._custom_instructions, instructions)
          self.assertEqual(agent.agent_type, AgentType.REACT)
@@ -78,19 +69,36 @@ class TestAgentPackage(unittest.TestCase):
          self.assertEqual(agent.agent_config.main_llm_provider, ModelProvider.ANTHROPIC)
          self.assertEqual(agent.agent_config.tool_llm_provider, ModelProvider.TOGETHER)

-         # To run this test, you must have OPENAI_API_KEY in your environment
+         # To run this test, you must have ANTHROPIC_API_KEY and TOGETHER_API_KEY in your environment
          self.assertEqual(
              agent.chat(
                  "What is 5 times 10. Only give the answer, nothing else"
-             ).replace("$", "\\$"),
+             ).response.replace("$", "\\$"),
              "50",
          )

+     def test_multiturn(self):
+         def mult(x, y):
+             return x * y
+
+         tools = [ToolsFactory().create_tool(mult)]
+         topic = "AI topic"
+         instructions = "Always do as your father tells you, if your mother agrees!"
+         agent = Agent(
+             tools=tools,
+             topic=topic,
+             custom_instructions=instructions,
+         )
+
+         agent.chat("What is 5 times 10. Only give the answer, nothing else")
+         agent.chat("what is 3 times 7. Only give the answer, nothing else")
+         res = agent.chat("multiply the results of the last two questions. Output only the answer.")
+         self.assertEqual(res.response, "1050")
+
      def test_from_corpus(self):
          agent = Agent.from_corpus(
              tool_name="RAG Tool",
-             vectara_customer_id="4584783",
-             vectara_corpus_id="4",
+             vectara_corpus_key="corpus_key",
              vectara_api_key="api_key",
              data_description="information",
              assistant_specialty="question answering",
@@ -102,16 +110,22 @@ class TestAgentPackage(unittest.TestCase):
      def test_serialization(self):
          agent = Agent.from_corpus(
              tool_name="RAG Tool",
-             vectara_customer_id="4584783",
-             vectara_corpus_id="4",
+             vectara_corpus_key="corpus_key",
              vectara_api_key="api_key",
              data_description="information",
              assistant_specialty="question answering",
          )

          agent_reloaded = agent.loads(agent.dumps())
+         agent_reloaded_again = agent_reloaded.loads(agent_reloaded.dumps())
+
          self.assertIsInstance(agent_reloaded, Agent)
          self.assertEqual(agent, agent_reloaded)
+         self.assertEqual(agent.agent_type, agent_reloaded.agent_type)
+
+         self.assertIsInstance(agent_reloaded, Agent)
+         self.assertEqual(agent, agent_reloaded_again)
+         self.assertEqual(agent.agent_type, agent_reloaded_again.agent_type)


  if __name__ == "__main__":
@@ -0,0 +1,67 @@
+ import os
+ import unittest
+ import subprocess
+ import time
+ import requests
+ import signal
+
+ from vectara_agentic.agent import Agent, AgentType
+ from vectara_agentic.agent_config import AgentConfig
+ from vectara_agentic.types import ModelProvider
+ from vectara_agentic.tools import ToolsFactory
+
+ class TestPrivateLLM(unittest.TestCase):
+
+     @classmethod
+     def setUp(cls):
+         # Start the Flask server as a subprocess
+         cls.flask_process = subprocess.Popen(
+             ['flask', 'run', '--port=5000'],
+             env={**os.environ, 'FLASK_APP': 'tests.endpoint:app', 'FLASK_ENV': 'development'},
+             stdout=None, stderr=None,
+         )
+         # Wait for the server to start
+         timeout = 10
+         url = 'http://127.0.0.1:5000/'
+         for _ in range(timeout):
+             try:
+                 requests.get(url)
+                 return
+             except requests.ConnectionError:
+                 time.sleep(1)
+         raise RuntimeError(f"Failed to start Flask server at {url}")
+
+     @classmethod
+     def tearDown(cls):
+         # Terminate the Flask server
+         cls.flask_process.send_signal(signal.SIGINT)
+         cls.flask_process.wait()
+
+     def test_endpoint(self):
+         def mult(x, y):
+             return x * y
+
+         tools = [ToolsFactory().create_tool(mult)]
+         topic = "calculator"
+         custom_instructions = "you are an agent specializing in math, assisting a user."
+         config = AgentConfig(
+             agent_type=AgentType.REACT,
+             main_llm_provider=ModelProvider.PRIVATE,
+             main_llm_model_name="gpt-4o",
+             private_llm_api_base="http://127.0.0.1:5000/v1",
+             private_llm_api_key="TEST_API_KEY",
+         )
+         agent = Agent(agent_config=config, tools=tools, topic=topic,
+                       custom_instructions=custom_instructions)
+
+         # To run this test, you must have OPENAI_API_KEY in your environment
+         self.assertEqual(
+             agent.chat(
+                 "What is 5 times 10. Only give the answer, nothing else"
+             ).response.replace("$", "\\$"),
+             "50",
+         )
+
+
+ if __name__ == "__main__":
+     unittest.main()
@@ -1,22 +1,23 @@
  import unittest

+ from pydantic import Field, BaseModel
+
  from vectara_agentic.tools import VectaraTool, VectaraToolFactory, ToolsFactory, ToolType
  from vectara_agentic.agent import Agent
- from pydantic import Field, BaseModel
+ from vectara_agentic.agent_config import AgentConfig
+
  from llama_index.core.tools import FunctionTool


  class TestToolsPackage(unittest.TestCase):
      def test_vectara_tool_factory(self):
-         vectara_customer_id = "4584783"
-         vectara_corpus_id = "4"
+         vectara_corpus_key = "corpus_key"
          vectara_api_key = "api_key"
          vec_factory = VectaraToolFactory(
-             vectara_customer_id, vectara_corpus_id, vectara_api_key
+             vectara_corpus_key, vectara_api_key
          )

-         self.assertEqual(vectara_customer_id, vec_factory.vectara_customer_id)
-         self.assertEqual(vectara_corpus_id, vec_factory.vectara_corpus_id)
+         self.assertEqual(vectara_corpus_key, vec_factory.vectara_corpus_key)
          self.assertEqual(vectara_api_key, vec_factory.vectara_api_key)

          class QueryToolArgs(BaseModel):
@@ -59,16 +60,11 @@ class TestToolsPackage(unittest.TestCase):
          self.assertEqual(arxiv_tool.metadata.tool_type, ToolType.QUERY)

      def test_public_repo(self):
-         vectara_customer_id = "1366999410"
-         vectara_corpus_id = "1"
+         vectara_corpus_key = "vectara-docs_1"
          vectara_api_key = "zqt_UXrBcnI2UXINZkrv4g1tQPhzj02vfdtqYJIDiA"

-         class QueryToolArgs(BaseModel):
-             query: str = Field(description="The user query")
-
          agent = Agent.from_corpus(
-             vectara_customer_id=vectara_customer_id,
-             vectara_corpus_id=vectara_corpus_id,
+             vectara_corpus_key=vectara_corpus_key,
              vectara_api_key=vectara_api_key,
              tool_name="ask_vectara",
              data_description="data from Vectara website",
@@ -76,7 +72,34 @@ class TestToolsPackage(unittest.TestCase):
              vectara_summarizer="mockingbird-1.0-2024-07-16"
          )

-         self.assertIn("Vectara is an end-to-end platform", agent.chat("What is Vectara?"))
+         self.assertIn("Vectara is an end-to-end platform", str(agent.chat("What is Vectara?")))
+
+     def test_class_method_as_tool(self):
+         class TestClass:
+             def __init__(self):
+                 pass
+
+             def mult(self, x, y):
+                 return x * y
+
+         test_class = TestClass()
+         tools = [ToolsFactory().create_tool(test_class.mult)]
+         topic = "AI topic"
+         instructions = "Always do as your father tells you, if your mother agrees!"
+         config = AgentConfig()
+         agent = Agent(
+             tools=tools,
+             topic=topic,
+             custom_instructions=instructions,
+             agent_config=config
+         )
+
+         self.assertEqual(
+             agent.chat(
+                 "What is 5 times 10. Only give the answer, nothing else"
+             ).response.replace("$", "\\$"),
+             "50",
+         )


  if __name__ == "__main__":
@@ -146,7 +146,8 @@ class AgentCallbackHandler(BaseCallbackHandler):
          if EventPayload.MESSAGES in payload:
              response = str(payload.get(EventPayload.RESPONSE))
              if response and response not in ["None", "assistant: None"]:
-                 self.fn(AgentStatusType.AGENT_UPDATE, response)
+                 if self.fn:
+                     self.fn(AgentStatusType.AGENT_UPDATE, response)
          else:
              print(f"No messages or prompt found in payload {payload}")
@@ -156,23 +157,27 @@ class AgentCallbackHandler(BaseCallbackHandler):
              tool = payload.get(EventPayload.TOOL)
              if tool:
                  tool_name = tool.name
-                 self.fn(
-                     AgentStatusType.TOOL_CALL,
-                     f"Executing '{tool_name}' with arguments: {fcall}",
-                 )
+                 if self.fn:
+                     self.fn(
+                         AgentStatusType.TOOL_CALL,
+                         f"Executing '{tool_name}' with arguments: {fcall}",
+                     )
          elif EventPayload.FUNCTION_OUTPUT in payload:
              response = str(payload.get(EventPayload.FUNCTION_OUTPUT))
-             self.fn(AgentStatusType.TOOL_OUTPUT, response)
+             if self.fn:
+                 self.fn(AgentStatusType.TOOL_OUTPUT, response)
          else:
              print(f"No function call or output found in payload {payload}")

      def _handle_agent_step(self, payload: dict) -> None:
          if EventPayload.MESSAGES in payload:
              msg = str(payload.get(EventPayload.MESSAGES))
-             self.fn(AgentStatusType.AGENT_STEP, msg)
+             if self.fn:
+                 self.fn(AgentStatusType.AGENT_STEP, msg)
          elif EventPayload.RESPONSE in payload:
              response = str(payload.get(EventPayload.RESPONSE))
-             self.fn(AgentStatusType.AGENT_STEP, response)
+             if self.fn:
+                 self.fn(AgentStatusType.AGENT_STEP, response)
          else:
              print(f"No messages or prompt found in payload {payload}")
@@ -181,10 +186,11 @@ class AgentCallbackHandler(BaseCallbackHandler):
          if EventPayload.MESSAGES in payload:
              response = str(payload.get(EventPayload.RESPONSE))
              if response and response not in ["None", "assistant: None"]:
-                 if inspect.iscoroutinefunction(self.fn):
-                     await self.fn(AgentStatusType.AGENT_UPDATE, response)
-                 else:
-                     self.fn(AgentStatusType.AGENT_UPDATE, response)
+                 if self.fn:
+                     if inspect.iscoroutinefunction(self.fn):
+                         await self.fn(AgentStatusType.AGENT_UPDATE, response)
+                     else:
+                         self.fn(AgentStatusType.AGENT_UPDATE, response)
          else:
              print(f"No messages or prompt found in payload {payload}")
@@ -194,37 +200,41 @@ class AgentCallbackHandler(BaseCallbackHandler):
              tool = payload.get(EventPayload.TOOL)
              if tool:
                  tool_name = tool.name
-                 if inspect.iscoroutinefunction(self.fn):
-                     await self.fn(
-                         AgentStatusType.TOOL_CALL,
-                         f"Executing '{tool_name}' with arguments: {fcall}",
-                     )
-                 else:
-                     self.fn(
-                         AgentStatusType.TOOL_CALL,
-                         f"Executing '{tool_name}' with arguments: {fcall}",
-                     )
-         elif EventPayload.FUNCTION_OUTPUT in payload:
-             response = str(payload.get(EventPayload.FUNCTION_OUTPUT))
-             if inspect.iscoroutinefunction(self.fn):
-                 await self.fn(AgentStatusType.TOOL_OUTPUT, response)
-             else:
-                 self.fn(AgentStatusType.TOOL_OUTPUT, response)
+                 if self.fn:
+                     if inspect.iscoroutinefunction(self.fn):
+                         await self.fn(
+                             AgentStatusType.TOOL_CALL,
+                             f"Executing '{tool_name}' with arguments: {fcall}",
+                         )
+                     else:
+                         self.fn(
+                             AgentStatusType.TOOL_CALL,
+                             f"Executing '{tool_name}' with arguments: {fcall}",
+                         )
+         elif EventPayload.FUNCTION_OUTPUT in payload:
+             if self.fn:
+                 response = str(payload.get(EventPayload.FUNCTION_OUTPUT))
+                 if inspect.iscoroutinefunction(self.fn):
+                     await self.fn(AgentStatusType.TOOL_OUTPUT, response)
+                 else:
+                     self.fn(AgentStatusType.TOOL_OUTPUT, response)
          else:
              print(f"No function call or output found in payload {payload}")

      async def _ahandle_agent_step(self, payload: dict) -> None:
          if EventPayload.MESSAGES in payload:
-             msg = str(payload.get(EventPayload.MESSAGES))
-             if inspect.iscoroutinefunction(self.fn):
-                 await self.fn(AgentStatusType.AGENT_STEP, msg)
-             else:
-                 self.fn(AgentStatusType.AGENT_STEP, msg)
+             if self.fn:
+                 msg = str(payload.get(EventPayload.MESSAGES))
+                 if inspect.iscoroutinefunction(self.fn):
+                     await self.fn(AgentStatusType.AGENT_STEP, msg)
+                 else:
+                     self.fn(AgentStatusType.AGENT_STEP, msg)
          elif EventPayload.RESPONSE in payload:
-             response = str(payload.get(EventPayload.RESPONSE))
-             if inspect.iscoroutinefunction(self.fn):
-                 await self.fn(AgentStatusType.AGENT_STEP, response)
-             else:
-                 self.fn(AgentStatusType.AGENT_STEP, response)
+             if self.fn:
+                 response = str(payload.get(EventPayload.RESPONSE))
+                 if inspect.iscoroutinefunction(self.fn):
+                     await self.fn(AgentStatusType.AGENT_STEP, response)
+                 else:
+                     self.fn(AgentStatusType.AGENT_STEP, response)
          else:
              print(f"No messages or prompt found in payload {payload}")
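The `_callback.py` hunks above repeat the same pattern in every handler: skip when `self.fn` is unset, and `await` the callback only when it is a coroutine function. That dispatch can be captured in one helper; a hypothetical sketch (the `emit` name is illustrative, not part of the package):

```python
import asyncio
import inspect

async def emit(fn, status, msg):
    """Invoke a status callback only when it is set, awaiting coroutine callbacks."""
    if fn is None:
        return
    if inspect.iscoroutinefunction(fn):
        await fn(status, msg)
    else:
        fn(status, msg)

# Usage with a plain (non-async) callback:
events = []
asyncio.run(emit(lambda s, m: events.append((s, m)), "TOOL_CALL", "mult(5, 10)"))
asyncio.run(emit(None, "TOOL_CALL", "ignored"))  # no-op when fn is None
print(events)
```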
@@ -5,7 +5,9 @@ This file contains the prompt templates for the different types of agents.
  # General (shared) instructions
  GENERAL_INSTRUCTIONS = """
  - Use tools as your main source of information, do not respond without using a tool. Do not respond based on pre-trained knowledge.
- - Always call the 'get_current_date' tool to ensure you know the exact date when a user asks a question.
+ - Before responding to a user query that requires knowledge of the current date, call the 'get_current_date' tool to get the current date.
+   Never rely on previous knowledge of the current date.
+   Example queries that require the current date: "What is the revenue of Apple last october?" or "What was the stock price 5 days ago?".
  - When using a tool with arguments, simplify the query as much as possible if you use the tool with arguments.
    For example, if the original query is "revenue for apple in 2021", you can use the tool with a query "revenue" with arguments year=2021 and company=apple.
  - If a tool responds with "I do not have enough information", try one of the following:
@@ -1,4 +1,4 @@
  """
  Define the version of the package.
  """
- __version__ = "0.1.28"
+ __version__ = "0.2.1"