khoj 2.0.0b13.dev5__py3-none-any.whl → 2.0.0b13.dev23__py3-none-any.whl
This diff compares the contents of two publicly released versions of the package, as published to a supported registry. It is provided for informational purposes only and reflects the package versions as they appear in their respective public registries.
- khoj/app/README.md +1 -1
- khoj/app/urls.py +1 -0
- khoj/database/adapters/__init__.py +4 -4
- khoj/database/management/commands/delete_orphaned_fileobjects.py +0 -1
- khoj/database/migrations/0064_remove_conversation_temp_id_alter_conversation_id.py +1 -1
- khoj/database/migrations/0075_migrate_generated_assets_and_validate.py +1 -1
- khoj/database/models/__init__.py +6 -6
- khoj/database/tests.py +0 -2
- khoj/interface/compiled/404/index.html +2 -2
- khoj/interface/compiled/_next/static/chunks/{9245.a04e92d034540234.js → 1225.ecac11e7421504c4.js} +3 -3
- khoj/interface/compiled/_next/static/chunks/1320.ae930ad00affe685.js +5 -0
- khoj/interface/compiled/_next/static/chunks/{1327-3b1a41af530fa8ee.js → 1327-e254819a9172cfa7.js} +1 -1
- khoj/interface/compiled/_next/static/chunks/1626.15a8acc0d6639ec6.js +1 -0
- khoj/interface/compiled/_next/static/chunks/{3489.c523fe96a2eee74f.js → 1940.d082758bd04e08ae.js} +1 -1
- khoj/interface/compiled/_next/static/chunks/{2327-ea623ca2d22f78e9.js → 2327-438aaec1657c5ada.js} +1 -1
- khoj/interface/compiled/_next/static/chunks/2475.57a0d0fd93d07af0.js +93 -0
- khoj/interface/compiled/_next/static/chunks/2481.5ce6524ba0a73f90.js +55 -0
- khoj/interface/compiled/_next/static/chunks/297.4c4c823ff6e3255b.js +174 -0
- khoj/interface/compiled/_next/static/chunks/{5639-09e2009a2adedf8b.js → 3260-82d2521fab032ff1.js} +68 -23
- khoj/interface/compiled/_next/static/chunks/3353.1c6d553216a1acae.js +1 -0
- khoj/interface/compiled/_next/static/chunks/3855.f7b8131f78af046e.js +1 -0
- khoj/interface/compiled/_next/static/chunks/3973.dc54a39586ab48be.js +1 -0
- khoj/interface/compiled/_next/static/chunks/4241.c1cd170f7f37ac59.js +24 -0
- khoj/interface/compiled/_next/static/chunks/{4327.8d2a1b8f1ea78208.js → 4327.f3704dc398c67113.js} +19 -19
- khoj/interface/compiled/_next/static/chunks/4505.f09454a346269c3f.js +117 -0
- khoj/interface/compiled/_next/static/chunks/4801.96a152d49742b644.js +1 -0
- khoj/interface/compiled/_next/static/chunks/5427-a95ec748e52abb75.js +1 -0
- khoj/interface/compiled/_next/static/chunks/549.2bd27f59a91a9668.js +148 -0
- khoj/interface/compiled/_next/static/chunks/5765.71b1e1207b76b03f.js +1 -0
- khoj/interface/compiled/_next/static/chunks/584.d7ce3505f169b706.js +1 -0
- khoj/interface/compiled/_next/static/chunks/6240.34f7c1fa692edd61.js +24 -0
- khoj/interface/compiled/_next/static/chunks/6d3fe5a5-f9f3c16e0bc0cdf9.js +10 -0
- khoj/interface/compiled/_next/static/chunks/{7127-0f4a2a77d97fb5fa.js → 7127-97b83757db125ba6.js} +1 -1
- khoj/interface/compiled/_next/static/chunks/7200-93ab0072359b8028.js +1 -0
- khoj/interface/compiled/_next/static/chunks/{2612.bcf5a623b3da209e.js → 7553.f5ad54b1f6e92c49.js} +2 -2
- khoj/interface/compiled/_next/static/chunks/7626-1b630f1654172341.js +1 -0
- khoj/interface/compiled/_next/static/chunks/764.dadd316e8e16d191.js +63 -0
- khoj/interface/compiled/_next/static/chunks/78.08169ab541abab4f.js +43 -0
- khoj/interface/compiled/_next/static/chunks/784.e03acf460df213d1.js +1 -0
- khoj/interface/compiled/_next/static/chunks/{9537-d9ab442ce15d1e20.js → 8072-e1440cb482a0940e.js} +1 -1
- khoj/interface/compiled/_next/static/chunks/{3265.924139c4146ee344.js → 8086.8d39887215807fcd.js} +1 -1
- khoj/interface/compiled/_next/static/chunks/8168.f074ab8c7c16d82d.js +59 -0
- khoj/interface/compiled/_next/static/chunks/{8694.2bd9c2f65d8c5847.js → 8223.1705878fa7a09292.js} +1 -1
- khoj/interface/compiled/_next/static/chunks/8483.94f6c9e2bee86f50.js +215 -0
- khoj/interface/compiled/_next/static/chunks/{8888.ebe0e552b59e7fed.js → 8810.fc0e479de78c7c61.js} +1 -1
- khoj/interface/compiled/_next/static/chunks/8828.bc74dc4ce94e78f6.js +1 -0
- khoj/interface/compiled/_next/static/chunks/{7303.d0612f812a967a08.js → 8909.14ac3f43d0070cf1.js} +5 -5
- khoj/interface/compiled/_next/static/chunks/90542734.b1a1629065ba199b.js +1 -0
- khoj/interface/compiled/_next/static/chunks/9167.098534184f03fe92.js +56 -0
- khoj/interface/compiled/_next/static/chunks/{4980.63500d68b3bb1222.js → 9537.e934ce37bf314509.js} +5 -5
- khoj/interface/compiled/_next/static/chunks/9574.3fe8e26e95bf1c34.js +1 -0
- khoj/interface/compiled/_next/static/chunks/9599.ec50b5296c27dae9.js +1 -0
- khoj/interface/compiled/_next/static/chunks/9643.b34248df52ffc77c.js +262 -0
- khoj/interface/compiled/_next/static/chunks/9747.2fd9065b1435abb1.js +1 -0
- khoj/interface/compiled/_next/static/chunks/9922.98f2b2a9959b4ebe.js +1 -0
- khoj/interface/compiled/_next/static/chunks/app/agents/layout-e00fb81dca656a10.js +1 -0
- khoj/interface/compiled/_next/static/chunks/app/agents/page-e291b49977f43880.js +1 -0
- khoj/interface/compiled/_next/static/chunks/app/automations/page-198b26df6e09bbb0.js +1 -0
- khoj/interface/compiled/_next/static/chunks/app/chat/layout-33934fc2d6ae6838.js +1 -0
- khoj/interface/compiled/_next/static/chunks/app/chat/{page-8e1c4f2af3c9429e.js → page-dfcc1e8e2ad62873.js} +1 -1
- khoj/interface/compiled/_next/static/chunks/app/{page-2b3056cba8aa96ce.js → page-1567cac7b79a7c59.js} +1 -1
- khoj/interface/compiled/_next/static/chunks/app/settings/{page-8be3b35178abf2ec.js → page-6081362437c82470.js} +1 -1
- khoj/interface/compiled/_next/static/chunks/app/share/chat/{page-4a4b0c0f4749c2b2.js → page-e0dcb1762f8c8f88.js} +1 -1
- khoj/interface/compiled/_next/static/chunks/webpack-5393aad3d824e0cb.js +1 -0
- khoj/interface/compiled/_next/static/css/{2945c4a857922f3b.css → c34713c98384ee87.css} +1 -1
- khoj/interface/compiled/agents/index.html +2 -2
- khoj/interface/compiled/agents/index.txt +3 -3
- khoj/interface/compiled/automations/index.html +2 -2
- khoj/interface/compiled/automations/index.txt +4 -4
- khoj/interface/compiled/chat/index.html +2 -2
- khoj/interface/compiled/chat/index.txt +3 -3
- khoj/interface/compiled/index.html +2 -2
- khoj/interface/compiled/index.txt +3 -3
- khoj/interface/compiled/search/index.html +2 -2
- khoj/interface/compiled/search/index.txt +3 -3
- khoj/interface/compiled/settings/index.html +2 -2
- khoj/interface/compiled/settings/index.txt +5 -5
- khoj/interface/compiled/share/chat/index.html +2 -2
- khoj/interface/compiled/share/chat/index.txt +3 -3
- khoj/main.py +3 -3
- khoj/manage.py +1 -0
- khoj/processor/content/github/github_to_entries.py +6 -6
- khoj/processor/content/images/image_to_entries.py +0 -1
- khoj/processor/content/markdown/markdown_to_entries.py +2 -3
- khoj/processor/content/notion/notion_to_entries.py +5 -5
- khoj/processor/content/org_mode/org_to_entries.py +4 -5
- khoj/processor/content/org_mode/orgnode.py +4 -4
- khoj/processor/content/plaintext/plaintext_to_entries.py +1 -2
- khoj/processor/content/text_to_entries.py +1 -2
- khoj/processor/conversation/google/utils.py +3 -3
- khoj/processor/conversation/openai/gpt.py +65 -28
- khoj/processor/conversation/openai/utils.py +358 -22
- khoj/processor/conversation/prompts.py +11 -5
- khoj/processor/conversation/utils.py +20 -11
- khoj/processor/embeddings.py +0 -2
- khoj/processor/image/generate.py +3 -3
- khoj/processor/operator/__init__.py +2 -2
- khoj/processor/operator/grounding_agent.py +15 -2
- khoj/processor/operator/grounding_agent_uitars.py +34 -23
- khoj/processor/operator/operator_agent_anthropic.py +29 -4
- khoj/processor/operator/operator_agent_base.py +1 -1
- khoj/processor/operator/operator_agent_binary.py +4 -4
- khoj/processor/operator/operator_agent_openai.py +21 -6
- khoj/processor/operator/operator_environment_browser.py +1 -1
- khoj/processor/operator/operator_environment_computer.py +1 -1
- khoj/processor/speech/text_to_speech.py +0 -1
- khoj/processor/tools/online_search.py +1 -1
- khoj/processor/tools/run_code.py +1 -1
- khoj/routers/api.py +1 -2
- khoj/routers/api_agents.py +1 -2
- khoj/routers/api_automation.py +1 -1
- khoj/routers/api_chat.py +10 -16
- khoj/routers/api_model.py +0 -1
- khoj/routers/api_subscription.py +1 -1
- khoj/routers/email.py +4 -4
- khoj/routers/helpers.py +35 -24
- khoj/routers/research.py +2 -4
- khoj/search_filter/base_filter.py +2 -4
- khoj/search_type/text_search.py +1 -2
- khoj/utils/constants.py +3 -0
- khoj/utils/helpers.py +4 -4
- khoj/utils/initialization.py +1 -3
- khoj/utils/models.py +2 -4
- khoj/utils/rawconfig.py +1 -2
- khoj/utils/state.py +1 -1
- {khoj-2.0.0b13.dev5.dist-info → khoj-2.0.0b13.dev23.dist-info}/METADATA +3 -2
- {khoj-2.0.0b13.dev5.dist-info → khoj-2.0.0b13.dev23.dist-info}/RECORD +139 -137
- khoj/interface/compiled/_next/static/chunks/1191.b547ec13349b4aed.js +0 -1
- khoj/interface/compiled/_next/static/chunks/1588.f0558a0bdffc4761.js +0 -117
- khoj/interface/compiled/_next/static/chunks/1918.925cb4a35518d258.js +0 -43
- khoj/interface/compiled/_next/static/chunks/2849.dc00ae5ba7219cfc.js +0 -1
- khoj/interface/compiled/_next/static/chunks/303.fe76de943e930fbd.js +0 -1
- khoj/interface/compiled/_next/static/chunks/4533.586e74b45a2bde25.js +0 -55
- khoj/interface/compiled/_next/static/chunks/4551.82ce1476b5516bc2.js +0 -5
- khoj/interface/compiled/_next/static/chunks/4748.0edd37cba3ea2809.js +0 -59
- khoj/interface/compiled/_next/static/chunks/5210.cd35a1c1ec594a20.js +0 -93
- khoj/interface/compiled/_next/static/chunks/5329.f8b3c5b3d16159cd.js +0 -1
- khoj/interface/compiled/_next/static/chunks/5427-13d6ffd380fdfab7.js +0 -1
- khoj/interface/compiled/_next/static/chunks/558-c14e76cff03f6a60.js +0 -1
- khoj/interface/compiled/_next/static/chunks/5830.8876eccb82da9b7d.js +0 -262
- khoj/interface/compiled/_next/static/chunks/6230.88a71d8145347b3f.js +0 -1
- khoj/interface/compiled/_next/static/chunks/7161.77e0530a40ad5ca8.js +0 -1
- khoj/interface/compiled/_next/static/chunks/7200-ac3b2e37ff30e126.js +0 -1
- khoj/interface/compiled/_next/static/chunks/7505.c31027a3695bdebb.js +0 -148
- khoj/interface/compiled/_next/static/chunks/7760.35649cc21d9585bd.js +0 -56
- khoj/interface/compiled/_next/static/chunks/83.48e2db193a940052.js +0 -1
- khoj/interface/compiled/_next/static/chunks/8427.844694e06133fb51.js +0 -1
- khoj/interface/compiled/_next/static/chunks/8665.4db7e6b2e8933497.js +0 -174
- khoj/interface/compiled/_next/static/chunks/872.caf84cc1a39ae59f.js +0 -1
- khoj/interface/compiled/_next/static/chunks/8890.6e8a59e4de6978bc.js +0 -215
- khoj/interface/compiled/_next/static/chunks/8950.5f2272e0ac923f9e.js +0 -1
- khoj/interface/compiled/_next/static/chunks/90542734.2c21f16f18b22411.js +0 -1
- khoj/interface/compiled/_next/static/chunks/9202.c703864fcedc8d1f.js +0 -63
- khoj/interface/compiled/_next/static/chunks/9320.6aca4885d541aa44.js +0 -24
- khoj/interface/compiled/_next/static/chunks/9535.f78cd92d03331e55.js +0 -1
- khoj/interface/compiled/_next/static/chunks/9968.b111fc002796da81.js +0 -1
- khoj/interface/compiled/_next/static/chunks/app/agents/layout-4e2a134ec26aa606.js +0 -1
- khoj/interface/compiled/_next/static/chunks/app/agents/page-9a4610474cd59a71.js +0 -1
- khoj/interface/compiled/_next/static/chunks/app/automations/page-f7bb9d777b7745d4.js +0 -1
- khoj/interface/compiled/_next/static/chunks/app/chat/layout-ad4d1792ab1a4108.js +0 -1
- khoj/interface/compiled/_next/static/chunks/f3e3247b-1758d4651e4457c2.js +0 -10
- khoj/interface/compiled/_next/static/chunks/webpack-ee14d29b64c5ab47.js +0 -1
- /khoj/interface/compiled/_next/static/{XfWrWDAk5VXeZ88OdP652 → Q7tm150g44Fs4H1CGytNf}/_buildManifest.js +0 -0
- /khoj/interface/compiled/_next/static/{XfWrWDAk5VXeZ88OdP652 → Q7tm150g44Fs4H1CGytNf}/_ssgManifest.js +0 -0
- /khoj/interface/compiled/_next/static/chunks/{1915-fbfe167c84ad60c5.js → 1915-5c6508f6ebb62a30.js} +0 -0
- /khoj/interface/compiled/_next/static/chunks/{2117-e78b6902ad6f75ec.js → 2117-080746c8e170c81a.js} +0 -0
- /khoj/interface/compiled/_next/static/chunks/{2939-4d4084c5b888b960.js → 2939-4af3fd24b8ffc9ad.js} +0 -0
- /khoj/interface/compiled/_next/static/chunks/{4447-d6cf93724d57e34b.js → 4447-cd95608f8e93e711.js} +0 -0
- /khoj/interface/compiled/_next/static/chunks/{8667-4b7790573b08c50d.js → 8667-50b03a89e82e0ba7.js} +0 -0
- /khoj/interface/compiled/_next/static/chunks/{9139-ce1ae935dac9c871.js → 9139-8ac4d9feb10f8869.js} +0 -0
- /khoj/interface/compiled/_next/static/chunks/app/search/{page-4885df3cd175c957.js → page-3639e50ec3e9acfd.js} +0 -0
- {khoj-2.0.0b13.dev5.dist-info → khoj-2.0.0b13.dev23.dist-info}/WHEEL +0 -0
- {khoj-2.0.0b13.dev5.dist-info → khoj-2.0.0b13.dev23.dist-info}/entry_points.txt +0 -0
- {khoj-2.0.0b13.dev5.dist-info → khoj-2.0.0b13.dev23.dist-info}/licenses/LICENSE +0 -0
--- a/khoj/processor/content/plaintext/plaintext_to_entries.py
+++ b/khoj/processor/content/plaintext/plaintext_to_entries.py
@@ -1,6 +1,5 @@
 import logging
 import re
-from pathlib import Path
 from typing import Dict, List, Tuple
 
 import urllib3
@@ -97,7 +96,7 @@ class PlaintextToEntries(TextToEntries):
         for parsed_entry in parsed_entries:
             raw_filename = entry_to_file_map[parsed_entry]
             # Check if raw_filename is a URL. If so, save it as is. If not, convert it to a Path.
-            if
+            if isinstance(raw_filename, str) and re.search(r"^https?://", raw_filename):
                 # Escape the URL to avoid issues with special characters
                 entry_filename = urllib3.util.parse_url(raw_filename).url
             else:
--- a/khoj/processor/content/text_to_entries.py
+++ b/khoj/processor/content/text_to_entries.py
@@ -30,8 +30,7 @@ class TextToEntries(ABC):
         self.date_filter = DateFilter()
 
     @abstractmethod
-    def process(self, files: dict[str, str], user: KhojUser, regenerate: bool = False) -> Tuple[int, int]:
-        ...
+    def process(self, files: dict[str, str], user: KhojUser, regenerate: bool = False) -> Tuple[int, int]: ...
 
     @staticmethod
     def hash_func(key: str) -> Callable:
--- a/khoj/processor/conversation/google/utils.py
+++ b/khoj/processor/conversation/google/utils.py
@@ -194,7 +194,7 @@ def gemini_completion_with_backoff(
         or not response.candidates[0].content
         or response.candidates[0].content.parts is None
     ):
-        raise ValueError(
+        raise ValueError("Failed to get response from model.")
     raw_content = [part.model_dump() for part in response.candidates[0].content.parts]
     if response.function_calls:
         function_calls = [
@@ -212,7 +212,7 @@ def gemini_completion_with_backoff(
         response = None
         # Handle 429 rate limit errors directly
         if e.code == 429:
-            response_text =
+            response_text = "My brain is exhausted. Can you please try again in a bit?"
             # Log the full error details for debugging
             logger.error(f"Gemini ClientError: {e.code} {e.status}. Details: {e.details}")
         # Handle other errors
@@ -361,7 +361,7 @@ def handle_gemini_response(
 
     # Ensure we have a proper list of candidates
     if not isinstance(candidates, list):
-        message =
+        message = "\nUnexpected response format. Try again."
         stopped = True
         return message, stopped
 
--- a/khoj/processor/conversation/openai/gpt.py
+++ b/khoj/processor/conversation/openai/gpt.py
@@ -9,6 +9,9 @@ from khoj.processor.conversation.openai.utils import (
     clean_response_schema,
     completion_with_backoff,
     get_structured_output_support,
+    is_openai_api,
+    responses_chat_completion_with_backoff,
+    responses_completion_with_backoff,
     to_openai_tools,
 )
 from khoj.processor.conversation.utils import (
@@ -43,31 +46,52 @@ def send_message_to_model(
     model_kwargs: Dict[str, Any] = {}
     json_support = get_structured_output_support(model, api_base_url)
     if tools and json_support == StructuredOutputSupport.TOOL:
-        model_kwargs["tools"] = to_openai_tools(tools)
+        model_kwargs["tools"] = to_openai_tools(tools, use_responses_api=is_openai_api(api_base_url))
     elif response_schema and json_support >= StructuredOutputSupport.SCHEMA:
         # Drop unsupported fields from schema passed to OpenAI APi
         cleaned_response_schema = clean_response_schema(response_schema)
- [8 removed lines not captured in the source diff view]
+        if is_openai_api(api_base_url):
+            model_kwargs["text"] = {
+                "format": {
+                    "type": "json_schema",
+                    "strict": True,
+                    "name": response_schema.__name__,
+                    "schema": cleaned_response_schema,
+                }
+            }
+        else:
+            model_kwargs["response_format"] = {
+                "type": "json_schema",
+                "json_schema": {
+                    "schema": cleaned_response_schema,
+                    "name": response_schema.__name__,
+                    "strict": True,
+                },
+            }
     elif response_type == "json_object" and json_support == StructuredOutputSupport.OBJECT:
         model_kwargs["response_format"] = {"type": response_type}
 
     # Get Response from GPT
- [9 removed lines not captured in the source diff view]
+    if is_openai_api(api_base_url):
+        return responses_completion_with_backoff(
+            messages=messages,
+            model_name=model,
+            openai_api_key=api_key,
+            api_base_url=api_base_url,
+            deepthought=deepthought,
+            model_kwargs=model_kwargs,
+            tracer=tracer,
+        )
+    else:
+        return completion_with_backoff(
+            messages=messages,
+            model_name=model,
+            openai_api_key=api_key,
+            api_base_url=api_base_url,
+            deepthought=deepthought,
+            model_kwargs=model_kwargs,
+            tracer=tracer,
+        )
 
 
 async def converse_openai(
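For orientation, the two structured-output branches above differ only in where the JSON schema is nested. A minimal sketch of the resulting `model_kwargs`, using a hypothetical pydantic `Movie` schema that is not part of this diff:

```python
from pydantic import BaseModel


class Movie(BaseModel):  # hypothetical example schema, not from the diff
    title: str
    year: int


schema = Movie.model_json_schema()

# OpenAI Responses API: the schema rides under the top-level `text.format` key
responses_kwargs = {
    "text": {"format": {"type": "json_schema", "strict": True, "name": "Movie", "schema": schema}}
}

# Chat Completions-compatible APIs: the schema rides under `response_format.json_schema`
chat_kwargs = {
    "response_format": {
        "type": "json_schema",
        "json_schema": {"schema": schema, "name": "Movie", "strict": True},
    }
}
```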
@@ -163,13 +187,26 @@ async def converse_openai(
     logger.debug(f"Conversation Context for GPT: {messages_to_print(messages)}")
 
     # Get Response from GPT
- [10 removed lines not captured in the source diff view]
+    if is_openai_api(api_base_url):
+        async for chunk in responses_chat_completion_with_backoff(
+            messages=messages,
+            model_name=model,
+            temperature=temperature,
+            openai_api_key=api_key,
+            api_base_url=api_base_url,
+            deepthought=deepthought,
+            tracer=tracer,
+        ):
+            yield chunk
+    else:
+        # For non-OpenAI APIs, use the chat completion method
+        async for chunk in chat_completion_with_backoff(
+            messages=messages,
+            model_name=model,
+            temperature=temperature,
+            openai_api_key=api_key,
+            api_base_url=api_base_url,
+            deepthought=deepthought,
+            tracer=tracer,
+        ):
+            yield chunk
--- a/khoj/processor/conversation/openai/utils.py
+++ b/khoj/processor/conversation/openai/utils.py
@@ -2,7 +2,6 @@ import json
 import logging
 import os
 from copy import deepcopy
-from functools import partial
 from time import perf_counter
 from typing import AsyncGenerator, Dict, Generator, List, Literal, Optional, Union
 from urllib.parse import urlparse
@@ -22,6 +21,8 @@ from openai.types.chat.chat_completion_chunk import (
     Choice,
     ChoiceDelta,
 )
+from openai.types.responses import Response as OpenAIResponse
+from openai.types.responses import ResponseFunctionToolCall, ResponseReasoningItem
 from pydantic import BaseModel
 from tenacity import (
     before_sleep_log,
@@ -54,6 +55,26 @@ openai_clients: Dict[str, openai.OpenAI] = {}
 openai_async_clients: Dict[str, openai.AsyncOpenAI] = {}
 
 
+def _extract_text_for_instructions(content: Union[str, List, Dict, None]) -> str:
+    """Extract plain text from a message content suitable for Responses API instructions."""
+    if content is None:
+        return ""
+    if isinstance(content, str):
+        return content
+    if isinstance(content, list):
+        texts: List[str] = []
+        for part in content:
+            if isinstance(part, dict) and part.get("type") == "input_text" and part.get("text"):
+                texts.append(str(part.get("text")))
+        return "\n\n".join(texts)
+    if isinstance(content, dict):
+        # If a single part dict was passed
+        if content.get("type") == "input_text" and content.get("text"):
+            return str(content.get("text"))
+    # Fallback to string conversion
+    return str(content)
+
+
 @retry(
     retry=(
         retry_if_exception_type(openai._exceptions.APITimeoutError)
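The helper's behavior follows directly from the code above; a few illustrative cases:

```python
# Illustrative cases, derived from the helper's code above
assert _extract_text_for_instructions(None) == ""
assert _extract_text_for_instructions("Be concise.") == "Be concise."
# Lists keep only input_text parts, joined by blank lines; other part types are dropped
assert (
    _extract_text_for_instructions(
        [
            {"type": "input_text", "text": "A"},
            {"type": "input_image", "image_url": "..."},
            {"type": "input_text", "text": "B"},
        ]
    )
    == "A\n\nB"
)
```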
@@ -284,9 +305,9 @@ async def chat_completion_with_backoff(
         if len(system_messages) > 0:
             first_system_message_index, first_system_message = system_messages[0]
             first_system_message_content = first_system_message["content"]
-            formatted_messages[first_system_message_index][
- [2 removed lines not captured in the source diff view]
+            formatted_messages[first_system_message_index]["content"] = (
+                f"{first_system_message_content}\nFormatting re-enabled"
+            )
     elif is_twitter_reasoning_model(model_name, api_base_url):
         reasoning_effort = "high" if deepthought else "low"
         # Grok-4 models do not support reasoning_effort parameter
@@ -391,6 +412,287 @@ async def chat_completion_with_backoff(
         commit_conversation_trace(messages, aggregated_response, tracer)
 
 
+@retry(
+    retry=(
+        retry_if_exception_type(openai._exceptions.APITimeoutError)
+        | retry_if_exception_type(openai._exceptions.APIError)
+        | retry_if_exception_type(openai._exceptions.APIConnectionError)
+        | retry_if_exception_type(openai._exceptions.RateLimitError)
+        | retry_if_exception_type(openai._exceptions.APIStatusError)
+        | retry_if_exception_type(ValueError)
+    ),
+    wait=wait_random_exponential(min=1, max=10),
+    stop=stop_after_attempt(3),
+    before_sleep=before_sleep_log(logger, logging.DEBUG),
+    reraise=True,
+)
+def responses_completion_with_backoff(
+    messages: List[ChatMessage],
+    model_name: str,
+    temperature=0.6,
+    openai_api_key=None,
+    api_base_url=None,
+    deepthought: bool = False,
+    model_kwargs: dict = {},
+    tracer: dict = {},
+) -> ResponseWithThought:
+    """
+    Synchronous helper using the OpenAI Responses API in streaming mode under the hood.
+    Aggregates streamed deltas and returns a ResponseWithThought.
+    """
+    client_key = f"{openai_api_key}--{api_base_url}"
+    client = openai_clients.get(client_key)
+    if not client:
+        client = get_openai_client(openai_api_key, api_base_url)
+        openai_clients[client_key] = client
+
+    formatted_messages = format_message_for_api(messages, api_base_url)
+    # Move the first system message to Responses API instructions
+    instructions: Optional[str] = None
+    if formatted_messages and formatted_messages[0].get("role") == "system":
+        instructions = _extract_text_for_instructions(formatted_messages[0].get("content")) or None
+        formatted_messages = formatted_messages[1:]
+
+    model_kwargs = deepcopy(model_kwargs)
+    model_kwargs["top_p"] = model_kwargs.get("top_p", 0.95)
+    # Configure thinking for openai reasoning models
+    if is_openai_reasoning_model(model_name, api_base_url):
+        temperature = 1
+        reasoning_effort = "medium" if deepthought else "low"
+        model_kwargs["reasoning"] = {"effort": reasoning_effort, "summary": "auto"}
+        # Remove unsupported params for reasoning models
+        model_kwargs.pop("top_p", None)
+        model_kwargs.pop("stop", None)
+
+    read_timeout = 300 if is_local_api(api_base_url) else 60
+
+    # Stream and aggregate
+    model_response: OpenAIResponse = client.responses.create(
+        input=formatted_messages,
+        instructions=instructions,
+        model=model_name,
+        temperature=temperature,
+        timeout=httpx.Timeout(30, read=read_timeout),  # type: ignore
+        store=False,
+        include=["reasoning.encrypted_content"],
+        **model_kwargs,
+    )
+    if not model_response or not isinstance(model_response, OpenAIResponse) or not model_response.output:
+        raise ValueError(f"Empty response returned by {model_name}.")
+
+    raw_content = [item.model_dump() for item in model_response.output]
+    aggregated_text = model_response.output_text
+    thoughts = ""
+    tool_calls: List[ToolCall] = []
+    for item in model_response.output:
+        if isinstance(item, ResponseFunctionToolCall):
+            tool_calls.append(ToolCall(name=item.name, args=json.loads(item.arguments), id=item.call_id))
+        elif isinstance(item, ResponseReasoningItem):
+            thoughts = "\n\n".join([summary.text for summary in item.summary])
+
+    if tool_calls:
+        if thoughts and aggregated_text:
+            # If there are tool calls, aggregate thoughts and responses into thoughts
+            thoughts = "\n".join([f"*{line.strip()}*" for line in thoughts.splitlines() if line.strip()])
+            thoughts = f"{thoughts}\n\n{aggregated_text}"
+        else:
+            thoughts = thoughts or aggregated_text
+        # Json dump tool calls into aggregated response
+        aggregated_text = json.dumps([tool_call.__dict__ for tool_call in tool_calls])
+
+    # Usage/cost tracking
+    input_tokens = model_response.usage.input_tokens if model_response and model_response.usage else 0
+    output_tokens = model_response.usage.output_tokens if model_response and model_response.usage else 0
+    cost = 0
+    cache_read_tokens = 0
+    if model_response and model_response.usage and model_response.usage.input_tokens_details:
+        cache_read_tokens = model_response.usage.input_tokens_details.cached_tokens
+        input_tokens -= cache_read_tokens
+    tracer["usage"] = get_chat_usage_metrics(
+        model_name, input_tokens, output_tokens, cache_read_tokens, usage=tracer.get("usage"), cost=cost
+    )
+
+    # Validate final aggregated text (either message or tool-calls JSON)
+    if is_none_or_empty(aggregated_text):
+        logger.warning(f"No response by {model_name}\nLast Message by {messages[-1].role}: {messages[-1].content}.")
+        raise ValueError(f"Empty or no response by {model_name} over Responses API. Retry if needed.")
+
+    # Trace
+    tracer["chat_model"] = model_name
+    tracer["temperature"] = temperature
+    if is_promptrace_enabled():
+        commit_conversation_trace(messages, aggregated_text, tracer)
+
+    return ResponseWithThought(text=aggregated_text, thought=thoughts, raw_content=raw_content)
+
+
+@retry(
+    retry=(
+        retry_if_exception_type(openai._exceptions.APITimeoutError)
+        | retry_if_exception_type(openai._exceptions.APIError)
+        | retry_if_exception_type(openai._exceptions.APIConnectionError)
+        | retry_if_exception_type(openai._exceptions.RateLimitError)
+        | retry_if_exception_type(openai._exceptions.APIStatusError)
+        | retry_if_exception_type(ValueError)
+    ),
+    wait=wait_exponential(multiplier=1, min=4, max=10),
+    stop=stop_after_attempt(3),
+    before_sleep=before_sleep_log(logger, logging.WARNING),
+    reraise=False,
+)
+async def responses_chat_completion_with_backoff(
+    messages: list[ChatMessage],
+    model_name: str,
+    temperature,
+    openai_api_key=None,
+    api_base_url=None,
+    deepthought=False,  # Unused; parity with legacy signature
+    tracer: dict = {},
+) -> AsyncGenerator[ResponseWithThought, None]:
+    """
+    Async streaming helper using the OpenAI Responses API.
+    Yields ResponseWithThought chunks as text/think deltas arrive.
+    """
+    client_key = f"{openai_api_key}--{api_base_url}"
+    client = openai_async_clients.get(client_key)
+    if not client:
+        client = get_openai_async_client(openai_api_key, api_base_url)
+        openai_async_clients[client_key] = client
+
+    formatted_messages = format_message_for_api(messages, api_base_url)
+    # Move the first system message to Responses API instructions
+    instructions: Optional[str] = None
+    if formatted_messages and formatted_messages[0].get("role") == "system":
+        instructions = _extract_text_for_instructions(formatted_messages[0].get("content")) or None
+        formatted_messages = formatted_messages[1:]
+
+    model_kwargs: dict = {}
+    model_kwargs["top_p"] = model_kwargs.get("top_p", 0.95)
+    # Configure thinking for openai reasoning models
+    if is_openai_reasoning_model(model_name, api_base_url):
+        temperature = 1
+        reasoning_effort = "medium" if deepthought else "low"
+        model_kwargs["reasoning"] = {"effort": reasoning_effort, "summary": "auto"}
+        # Remove unsupported params for reasoning models
+        model_kwargs.pop("top_p", None)
+        model_kwargs.pop("stop", None)
+
+    read_timeout = 300 if is_local_api(api_base_url) else 60
+
+    aggregated_text = ""
+    last_final: Optional[OpenAIResponse] = None
+    # Tool call assembly buffers
+    tool_calls_args: Dict[str, str] = {}
+    tool_calls_name: Dict[str, str] = {}
+    tool_call_order: List[str] = []
+
+    async with client.responses.stream(
+        input=formatted_messages,
+        instructions=instructions,
+        model=model_name,
+        temperature=temperature,
+        timeout=httpx.Timeout(30, read=read_timeout),
+        **model_kwargs,
+    ) as stream:  # type: ignore
+        async for event in stream:  # type: ignore
+            et = getattr(event, "type", "")
+            if et == "response.output_text.delta":
+                delta = getattr(event, "delta", "") or getattr(event, "output_text", "")
+                if delta:
+                    aggregated_text += delta
+                    yield ResponseWithThought(text=delta)
+            elif et == "response.reasoning.delta":
+                delta = getattr(event, "delta", "")
+                if delta:
+                    yield ResponseWithThought(thought=delta)
+            elif et == "response.tool_call.created":
+                item = getattr(event, "item", None)
+                tool_id = (
+                    getattr(event, "id", None)
+                    or getattr(event, "tool_call_id", None)
+                    or (getattr(item, "id", None) if item is not None else None)
+                )
+                name = (
+                    getattr(event, "name", None)
+                    or (getattr(item, "name", None) if item is not None else None)
+                    or getattr(event, "tool_name", None)
+                )
+                if tool_id:
+                    if tool_id not in tool_calls_args:
+                        tool_calls_args[tool_id] = ""
+                        tool_call_order.append(tool_id)
+                    if name:
+                        tool_calls_name[tool_id] = name
+            elif et == "response.tool_call.delta":
+                tool_id = getattr(event, "id", None) or getattr(event, "tool_call_id", None)
+                delta = getattr(event, "delta", None)
+                if hasattr(delta, "arguments"):
+                    arg_delta = getattr(delta, "arguments", "")
+                else:
+                    arg_delta = delta if isinstance(delta, str) else getattr(event, "arguments", "")
+                if tool_id and arg_delta:
+                    tool_calls_args[tool_id] = tool_calls_args.get(tool_id, "") + arg_delta
+                    if tool_id not in tool_call_order:
+                        tool_call_order.append(tool_id)
+            elif et == "response.tool_call.completed":
+                item = getattr(event, "item", None)
+                tool_id = (
+                    getattr(event, "id", None)
+                    or getattr(event, "tool_call_id", None)
+                    or (getattr(item, "id", None) if item is not None else None)
+                )
+                args_final = None
+                if item is not None:
+                    args_final = getattr(item, "arguments", None) or getattr(item, "args", None)
+                if tool_id and args_final:
+                    tool_calls_args[tool_id] = args_final if isinstance(args_final, str) else json.dumps(args_final)
+                    if tool_id not in tool_call_order:
+                        tool_call_order.append(tool_id)
+            # ignore other events for now
+        last_final = await stream.get_final_response()
+
+    # Usage/cost tracking after stream ends
+    input_tokens = last_final.usage.input_tokens if last_final and last_final.usage else 0
+    output_tokens = last_final.usage.output_tokens if last_final and last_final.usage else 0
+    cost = 0
+    tracer["usage"] = get_chat_usage_metrics(
+        model_name, input_tokens, output_tokens, usage=tracer.get("usage"), cost=cost
+    )
+
+    # If there are tool calls, package them into aggregated text for tracing parity
+    if tool_call_order:
+        packaged_tool_calls: List[ToolCall] = []
+        for tool_id in tool_call_order:
+            name = tool_calls_name.get(tool_id) or ""
+            args_str = tool_calls_args.get(tool_id, "")
+            try:
+                args = json.loads(args_str) if isinstance(args_str, str) else args_str
+            except Exception:
+                logger.warning(f"Failed to parse tool call arguments for {tool_id}: {args_str}")
+                args = {}
+            packaged_tool_calls.append(ToolCall(name=name, args=args, id=tool_id))
+        # Move any text into trace thought
+        tracer_text = aggregated_text
+        aggregated_text = json.dumps([tc.__dict__ for tc in packaged_tool_calls])
+        # Save for trace below
+        if tracer_text:
+            tracer.setdefault("_responses_stream_text", tracer_text)
+
+    if is_none_or_empty(aggregated_text):
+        logger.warning(f"No response by {model_name}\nLast Message by {messages[-1].role}: {messages[-1].content}.")
+        raise ValueError(f"Empty or no response by {model_name} over Responses API. Retry if needed.")
+
+    tracer["chat_model"] = model_name
+    tracer["temperature"] = temperature
+    if is_promptrace_enabled():
+        # If tool-calls were present, include any streamed text in the trace thought
+        trace_payload = aggregated_text
+        if tracer.get("_responses_stream_text"):
+            thoughts = tracer.pop("_responses_stream_text")
+            trace_payload = thoughts
+        commit_conversation_trace(messages, trace_payload, tracer)
+
+
 def get_structured_output_support(model_name: str, api_base_url: str = None) -> StructuredOutputSupport:
     if model_name.startswith("deepseek-reasoner"):
         return StructuredOutputSupport.NONE
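A minimal usage sketch of the new synchronous entry point, assuming an OpenAI API key in the environment. The `ChatMessage` import path and the model name are illustrative assumptions, not taken from this diff:

```python
import os

from langchain_core.messages.chat import ChatMessage  # assumed import path

response = responses_completion_with_backoff(
    messages=[
        ChatMessage(role="system", content="You are a terse assistant."),
        ChatMessage(role="user", content="Summarize my notes on Rust."),
    ],
    model_name="gpt-5-mini",  # illustrative model name
    openai_api_key=os.environ["OPENAI_API_KEY"],
)
print(response.text)     # final message text, or tool-calls JSON
print(response.thought)  # reasoning summary, if the model produced one
```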
@@ -413,6 +715,12 @@ def format_message_for_api(raw_messages: List[ChatMessage], api_base_url: str) -
         # Handle tool call and tool result message types
         message_type = message.additional_kwargs.get("message_type")
         if message_type == "tool_call":
+            if is_openai_api(api_base_url):
+                for part in message.content:
+                    if "status" in part:
+                        part.pop("status")  # Drop unsupported tool call status field
+                formatted_messages.extend(message.content)
+                continue
             # Convert tool_call to OpenAI function call format
             content = []
             for part in message.content:
@@ -451,14 +759,23 @@ def format_message_for_api(raw_messages: List[ChatMessage], api_base_url: str) -
             if not tool_call_id:
                 logger.warning(f"Dropping tool result without valid tool_call_id: {part.get('name')}")
                 continue
- [8 removed lines not captured in the source diff view]
+            if is_openai_api(api_base_url):
+                formatted_messages.append(
+                    {
+                        "type": "function_call_output",
+                        "call_id": tool_call_id,
+                        "output": part.get("content"),
+                    }
+                )
+            else:
+                formatted_messages.append(
+                    {
+                        "role": "tool",
+                        "tool_call_id": tool_call_id,
+                        "name": part.get("name"),
+                        "content": part.get("content"),
+                    }
+                )
             continue
         if isinstance(message.content, list) and not is_openai_api(api_base_url):
             assistant_texts = []
@@ -490,6 +807,11 @@ def format_message_for_api(raw_messages: List[ChatMessage], api_base_url: str) -
                 message.content.remove(part)
             elif part["type"] == "image_url" and not part.get("image_url"):
                 message.content.remove(part)
+            # OpenAI models use the Responses API which uses slightly different content types
+            if part["type"] == "text":
+                part["type"] = "output_text" if message.role == "assistant" else "input_text"
+            if part["type"] == "image":
+                part["type"] = "output_image" if message.role == "assistant" else "input_image"
         # If no valid content parts left, remove the message
         if is_none_or_empty(message.content):
             messages.remove(message)
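The net effect of these `format_message_for_api` changes: the same tool result is serialized in two different shapes depending on the backend. Both shapes are taken from the diff above; the ids and values are illustrative:

```python
# Responses API (official OpenAI endpoint): tool results become function_call_output items
responses_tool_result = {
    "type": "function_call_output",
    "call_id": "call_abc123",  # illustrative id
    "output": "42",
}

# Chat Completions-compatible backends: tool results remain role="tool" messages
chat_tool_result = {
    "role": "tool",
    "tool_call_id": "call_abc123",
    "name": "calculate",  # illustrative tool name
    "content": "42",
}
```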
@@ -514,7 +836,9 @@ def is_openai_reasoning_model(model_name: str, api_base_url: str = None) -> bool
     """
     Check if the model is an OpenAI reasoning model
     """
-    return
+    return is_openai_api(api_base_url) and (
+        model_name.lower().startswith("o") or model_name.lower().startswith("gpt-5")
+    )
 
 
 def is_non_streaming_model(model_name: str, api_base_url: str = None) -> bool:
@@ -851,20 +1175,32 @@ def add_qwen_no_think_tag(formatted_messages: List[dict]) -> None:
             break
 
 
-def to_openai_tools(tools: List[ToolDefinition]) -> List[Dict] | None:
+def to_openai_tools(tools: List[ToolDefinition], use_responses_api: bool) -> List[Dict] | None:
     "Transform tool definitions from standard format to OpenAI format."
- [4 removed lines not captured in the source diff view]
+    if use_responses_api:
+        openai_tools = [
+            {
+                "type": "function",
                 "name": tool.name,
                 "description": tool.description,
                 "parameters": clean_response_schema(tool.schema),
                 "strict": True,
-            }
- [3 removed lines not captured in the source diff view]
+            }
+            for tool in tools
+        ]
+    else:
+        openai_tools = [
+            {
+                "type": "function",
+                "function": {
+                    "name": tool.name,
+                    "description": tool.description,
+                    "parameters": clean_response_schema(tool.schema),
+                    "strict": True,
+                },
+            }
+            for tool in tools
+        ]
 
     return openai_tools or None
 
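Correspondingly, a single tool definition now serializes to one of two JSON shapes. A short illustration with a hypothetical `search_notes` tool (the fields below are not from the diff):

```python
# Hypothetical tool definition fields
name = "search_notes"
description = "Semantic search over the user's notes."
parameters = {"type": "object", "properties": {"query": {"type": "string"}}, "required": ["query"]}

# use_responses_api=True: flat function schema for the Responses API
responses_tool = {
    "type": "function",
    "name": name,
    "description": description,
    "parameters": parameters,
    "strict": True,
}

# use_responses_api=False: nested schema for Chat Completions-compatible APIs
chat_tool = {
    "type": "function",
    "function": {"name": name, "description": description, "parameters": parameters, "strict": True},
}
```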
--- a/khoj/processor/conversation/prompts.py
+++ b/khoj/processor/conversation/prompts.py
@@ -519,12 +519,13 @@ Q: {query}
 
 extract_questions_system_prompt = PromptTemplate.from_template(
     """
-You are Khoj, an extremely smart and helpful document search assistant with only the ability to retrieve information from the user's notes.
-Construct search queries to retrieve relevant information to answer the user's question.
+You are Khoj, an extremely smart and helpful document search assistant with only the ability to use natural language semantic search to retrieve information from the user's notes.
+Construct upto {max_queries} search queries to retrieve relevant information to answer the user's question.
 - You will be provided past questions(User), search queries(Assistant) and answers(A) for context.
--
-- Break your search
-- Add date filters to your search queries
+- You can use context from previous questions and answers to improve your search queries.
+- Break down your search into multiple search queries from a diverse set of lenses to retrieve all related documents. E.g who, what, where, when, why, how.
+- Add date filters to your search queries when required to retrieve the relevant information. This is the only structured query filter you can use.
+- Output 1 concept per query. Do not use boolean operators (OR/AND) to combine queries. They do not work and degrade search quality.
 - When asked a meta, vague or random questions, search for a variety of broad topics to answer the user's question.
 {personality_context}
 What searches will you perform to answer the users question? Respond with a JSON object with the key "queries" mapping to a list of searches you would perform on the user's knowledge base. Just return the queries and nothing else.
@@ -535,22 +536,27 @@ User's Location: {location}
 
 Here are some examples of how you can construct search queries to answer the user's question:
 
+Illustrate - Using diverse perspectives to retrieve all relevant documents
 User: How was my trip to Cambodia?
 Assistant: {{"queries": ["How was my trip to Cambodia?", "Angkor Wat temple visit", "Flight to Phnom Penh", "Expenses in Cambodia", "Stay in Cambodia"]}}
 A: The trip was amazing. You went to the Angkor Wat temple and it was beautiful.
 
+Illustrate - Combining date filters with natural language queries to retrieve documents in relevant date range
 User: What national parks did I go to last year?
 Assistant: {{"queries": ["National park I visited in {last_new_year} dt>='{last_new_year_date}' dt<'{current_new_year_date}'"]}}
 A: You visited the Grand Canyon and Yellowstone National Park in {last_new_year}.
 
+Illustrate - Using broad topics to answer meta or vague questions
 User: How can you help me?
 Assistant: {{"queries": ["Social relationships", "Physical and mental health", "Education and career", "Personal life goals and habits"]}}
 A: I can help you live healthier and happier across work and personal life
 
+Illustrate - Combining location and date in natural language queries with date filters to retrieve relevant documents
 User: Who all did I meet here yesterday?
 Assistant: {{"queries": ["Met in {location} on {yesterday_date} dt>='{yesterday_date}' dt<'{current_date}'"]}}
 A: Yesterday's note mentions your visit to your local beach with Ram and Shyam.
 
+Illustrate - Combining broad, diverse topics with date filters to answer meta or vague questions
 User: Share some random, interesting experiences from this month
 Assistant: {{"queries": ["Exciting travel adventures from {current_month}", "Fun social events dt>='{current_month}-01' dt<'{current_date}'", "Intense emotional experiences in {current_month}"]}}
 A: You had a great time at the local beach with your friends, attended a music concert and had a deep conversation with your friend, Khalid.