webscout-3.4-py3-none-any.whl → webscout-3.6-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,7 +1,7 @@
  Metadata-Version: 2.1
  Name: webscout
- Version: 3.4
- Summary: Search for anything using Google, DuckDuckGo, phind.com, Contains AI models, can transcribe yt videos, temporary email and phone number generation, has TTS support, webai (terminal gpt and open interpreter) and offline LLMs
+ Version: 3.6
+ Summary: Search for anything using Google, DuckDuckGo, brave, qwant, phind.com, Contains AI models, can transcribe yt videos, temporary email and phone number generation, has TTS support, webai (terminal gpt and open interpreter) and offline LLMs
  Author: OEvortex
  Author-email: helpingai5@gmail.com
  License: HelpingAI
@@ -54,6 +54,7 @@ Requires-Dist: Helpingai-T2
  Requires-Dist: playsound
  Requires-Dist: poe-api-wrapper
  Requires-Dist: pyreqwest-impersonate
+ Requires-Dist: ballyregan
  Provides-Extra: dev
  Requires-Dist: ruff >=0.1.6 ; extra == 'dev'
  Requires-Dist: pytest >=7.4.2 ; extra == 'dev'
@@ -104,14 +105,14 @@ Search for anything using Google, DuckDuckGo, phind.com, Contains AI models, can
  - [Temp number](#temp-number)
  - [Tempmail](#tempmail)
  - [Transcriber](#transcriber)
- - [DeepWEBS: Advanced Web Searches](#deepwebs-advanced-web-searches)
- - [Activating DeepWEBS](#activating-deepwebs)
- - [Point to remember before using `DeepWEBS`](#point-to-remember-before-using-deepwebs)
+ - [DWEBS: Advanced Web Searches](#dwebs-advanced-web-searches)
+ - [Activating DWEBS](#activating-dwebs)
+ - [Point to remember before using `DWEBS`](#point-to-remember-before-using-dwebs)
  - [Usage Example](#usage-example)
  - [Text-to-Speech:](#text-to-speech)
  - [Available TTS Voices:](#available-tts-voices)
  - [Exceptions](#exceptions)
- - [usage of webscout](#usage-of-webscout)
+ - [usage of WEBS](#usage-of-webs)
  - [1. `text()` - text search by DuckDuckGo.com](#1-text---text-search-by-duckduckgocom)
  - [2. `answers()` - instant answers by DuckDuckGo.com](#2-answers---instant-answers-by-duckduckgocom)
  - [3. `images()` - image search by DuckDuckGo.com](#3-images---image-search-by-duckduckgocom)
@@ -120,6 +121,7 @@ Search for anything using Google, DuckDuckGo, phind.com, Contains AI models, can
  - [6. `maps()` - map search by DuckDuckGo.com](#6-maps---map-search-by-duckduckgocom)
  - [7. `translate()` - translation by DuckDuckGo.com](#7-translate---translation-by-duckduckgocom)
  - [8. `suggestions()` - suggestions by DuckDuckGo.com](#8-suggestions---suggestions-by-duckduckgocom)
+ - [usage of WEBSX -- Another Websearch thing](#usage-of-websx----another-websearch-thing)
  - [ALL acts](#all-acts)
  - [Webscout Supported Acts:](#webscout-supported-acts)
  - [usage of webscout AI](#usage-of-webscout-ai)
@@ -141,12 +143,12 @@ Search for anything using Google, DuckDuckGo, phind.com, Contains AI models, can
  - [15. `poe`- chat with poe](#15-poe--chat-with-poe)
  - [16. `BasedGPT` - chat with GPT](#16-basedgpt---chat-with-gpt)
  - [17. `DeepSeek` -chat with deepseek](#17-deepseek--chat-with-deepseek)
+ - [18. Deepinfra](#18-deepinfra)
+ - [19. Deepinfra - VLM](#19-deepinfra---vlm)
  - [`LLM`](#llm)
  - [`Local-LLM` webscout can now run GGUF models](#local-llm-webscout-can-now-run-gguf-models)
- - [`Function-calling-local-llm`](#function-calling-local-llm)
  - [`Local-rawdog`](#local-rawdog)
  - [`LLM` with internet](#llm-with-internet)
- - [LLM with deepwebs](#llm-with-deepwebs)
  - [`Webai` - terminal gpt and a open interpeter](#webai---terminal-gpt-and-a-open-interpeter)
 
  ## Install
@@ -392,68 +394,76 @@ def main():
  if __name__ == "__main__":
  main()
  ```
- ## DeepWEBS: Advanced Web Searches
 
- `DeepWEBS` is a standalone feature designed to perform advanced web searches with enhanced capabilities. It is particularly powerful in extracting relevant information directly from webpages and Search engine, focusing exclusively on text (web) searches. Unlike the `WEBS` , which provides a broader range of search functionalities, `DeepWEBS` is specifically tailored for in-depth web searches.
+ ## DWEBS: Advanced Web Searches
 
- ### Activating DeepWEBS
+ `DWEBS` is a standalone feature designed to perform advanced web searches with enhanced capabilities. It is particularly powerful in extracting relevant information directly from webpages and Search engine, focusing exclusively on text (web) searches. Unlike the `WEBS` , which provides a broader range of search functionalities, `DWEBS` is specifically tailored for in-depth web searches.
 
- To utilize the `DeepWEBS` feature, you must first create an instance of the `DeepWEBS` . This is designed to be used independently of the `WEBS` , offering a focused approach to web searches.
+ ### Activating DWEBS
 
- ### Point to remember before using `DeepWEBS`
- As `DeepWEBS` is designed to extract relevant information directly from webpages and Search engine, It extracts html from webpages and saves them to folder named files in `DeepWEBS` that can be found at `C:\Users\Username\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\DeepWEBS`
+ To utilize the `DWEBS` feature, you must first create an instance of the `DWEBS` . This is designed to be used independently of the `WEBS` , offering a focused approach to web searches.
+
+ ### Point to remember before using `DWEBS`
+ As `DWEBS` is designed to extract relevant information directly from webpages and Search engine, It extracts html from webpages and saves them to folder named files
 
  ### Usage Example
 
- Here's a basic example of how to use the `DeepWEBS` :
+ Here's a basic example of how to use the `DWEBS` :
  ```python
- from webscout import DeepWEBS
-
- def perform_web_search(query):
- # Initialize the DeepWEBS class
- D = DeepWEBS()
-
- # Set up the search parameters
- search_params = D.DeepSearch(
- queries=[query], # Query to search
- result_num=5, # Number of search results
- safe=True, # Enable SafeSearch
- types=["web"], # Search type: web
- extract_webpage=True, # True for extracting webpages
- overwrite_query_html=False,
- overwrite_webpage_html=False,
+ from webscout import DWEBS
+
+ def finalextractor(extract_webpage=True):
+ print('---------------Here Running for GoogleSearch--------------------')
+ # 1. Google Search
+ google_searcher = DWEBS.GoogleSearcher()
+ query_html_path = google_searcher.search(
+ query='HelpingAI-9B',
+ result_num=10,
+ safe=False,
+ overwrite=False,
  )
-
- # Execute the search and retrieve results
- results = D.queries_to_search_results(search_params)
-
- return results
 
- def print_search_results(results):
- """
- Print the search results.
-
- Args:
- - search_results (list): List of search results to print.
- """
- if results:
- for index, result in enumerate(results, start=1):
- print(f"Result {index}: {result}")
+ # 2. Search Result Extraction
+ query_results_extractor = DWEBS.QueryResultsExtractor()
+ query_search_results = query_results_extractor.extract(query_html_path)
+
+ if extract_webpage:
+ print('---------------Batch Webpage Fetcher--------------------')
+ # 3. Batch Webpage Fetching
+ batch_webpage_fetcher = DWEBS.BatchWebpageFetcher()
+ urls = [query_extracts['url'] for query_extracts in query_search_results['query_results']]
+ url_and_html_path_list = batch_webpage_fetcher.fetch(
+ urls,
+ overwrite=False,
+ output_parent=query_search_results["query"],
+ )
+
+ print('---------------Batch Webpage Extractor--------------------')
+ # 4. Batch Webpage Content Extraction
+ batch_webpage_content_extractor = DWEBS.BatchWebpageContentExtractor()
+ webpageurls = [url_and_html['html_path'] for url_and_html in url_and_html_path_list]
+ html_path_and_extracted_content_list = batch_webpage_content_extractor.extract(webpageurls)
+
+ # 5. Printing Extracted Content
+ for html_path_and_extracted_content in html_path_and_extracted_content_list:
+ print(html_path_and_extracted_content['extracted_content'])
  else:
- print("No search results found.")
+ # Print only search results if extract_webpage is False
+ for result in query_search_results['query_results']:
+ DWEBS.logger.mesg(
+ f"{result['title']}\n"
+ f" - {result['site']}\n"
+ f" - {result['url']}\n"
+ f" - {result['abstract']}\n"
+ f"\n"
+ )
 
- def main():
- # Prompt the user for a search query
- query = input("Enter your search query: ")
-
- # Perform the web search
- results = perform_web_search(query)
-
- # Print the search results
- print_search_results(results)
+ DWEBS.logger.success(f"- {len(query_search_results['query_results'])} query results")
+ DWEBS.logger.success(f"- {len(query_search_results['related_questions'])} related questions")
 
- if __name__ == "__main__":
- main()
+ # Example usage:
+ finalextractor(extract_webpage=True) # Extract webpage content
+ finalextractor(extract_webpage=False) # Skip webpage extraction and print search results only
 
  ```
  ## Text-to-Speech:
@@ -532,7 +542,7 @@ This ensures proper resource management and cleanup, as the context manager will
  Exceptions:
  - `WebscoutE`: Raised when there is a generic exception during the API request.
 
- ## usage of webscout
+ ## usage of WEBS
 
  ### 1. `text()` - text search by DuckDuckGo.com
 
@@ -682,6 +692,36 @@ with WEBS() as WEBS:
  for r in WEBS.suggestions("fly"):
  print(r)
  ```
+
+
+ ## usage of WEBSX -- Another Websearch thing
+ ```python
+ from webscout import WEBSX
+
+ def main():
+ # Initialize the WEBSX client
+ search = WEBSX(
+ k=10,
+ )
+
+ # Example using `run` method - Get a summary
+ query = "What is the capital of France?"
+ answer = search.run(query)
+ print(f"Answer: {answer}\n")
+
+ # Example using `results` method - Get detailed results with metadata
+ query = "What is the capital of France?"
+ results = search.results(query, num_results=3)
+ print("Search Results:")
+ for result in results:
+ print(f"Title: {result['title']}")
+ print(f"Snippet: {result['snippet']}")
+ print(f"Link: {result['link']}\n")
+ print(f'Engines: {result["engines"]}')
+
+ if __name__ == "__main__":
+ main()
+ ```
  ## ALL acts
  <details>
  <summary>expand</summary>
@@ -960,6 +1000,23 @@ prompt = "write a essay on phind"
  # Use the 'ask' method to send the prompt and receive a response
  response = ph.ask(prompt)
 
+ # Extract and print the message from the response
+ message = ph.get_message(response)
+ print(message)
+ ```
+ Using phindv2
+ ```python
+ from webscout import Phindv2
+
+ # Create an instance of the PHIND class
+ ph = Phindv2()
+
+ # Define a prompt to send to the AI
+ prompt = ""
+
+ # Use the 'ask' method to send the prompt and receive a response
+ response = ph.ask(prompt)
+
  # Extract and print the message from the response
  message = ph.get_message(response)
  print(message)
@@ -980,7 +1037,7 @@ print(r)
 
  ```
 
- ### 3. `You.com` - search/chat with you.com
+ ### 3. `You.com` - search/chat with you.com - Not working
  ```python
 
  from webscout import YouChat
@@ -1104,6 +1161,45 @@ while True:
  response_str = opengpt.chat(prompt)
  print(response_str)
  ```
+ ```python
+ from webscout import OPENGPTv2
+
+ # Initialize the bot with all specified settings
+ bot = OPENGPTv2(
+ generate_new_agents=True, # Set to True to generate new IDs, False to load from file
+ assistant_name="My Custom Assistant",
+ retrieval_description="Helpful information from my files.",
+ agent_system_message="",
+ enable_action_server=False, # Assuming you want to disable Action Server by Robocorp
+ enable_ddg_search=False, # Enable DuckDuckGo search tool
+ enable_arxiv=False, # Assuming you want to disable Arxiv
+ enable_press_releases=False, # Assuming you want to disable Press Releases (Kay.ai)
+ enable_pubmed=False, # Assuming you want to disable PubMed
+ enable_sec_filings=False, # Assuming you want to disable SEC Filings (Kay.ai)
+ enable_retrieval=False, # Assuming you want to disable Retrieval
+ enable_search_tavily=False, # Assuming you want to disable Search (Tavily)
+ enable_search_short_answer_tavily=False, # Assuming you want to disable Search (short answer, Tavily)
+ enable_you_com_search=True, # Assuming you want to disable You.com Search
+ enable_wikipedia=False, # Enable Wikipedia tool
+ is_public=True,
+ is_conversation=True,
+ max_tokens=800,
+ timeout=40,
+ filepath="opengpt_conversation_history.txt",
+ update_file=True,
+ history_offset=10250,
+ act=None,
+ )
+
+ # Example interaction loop
+ while True:
+ prompt = input("You: ")
+ if prompt.strip().lower() == 'exit':
+ break
+ response = bot.chat(prompt)
+ print(response)
+
+ ```
  ### 9. `KOBOLDAI` -
  ```python
  from webscout import KOBOLDAI
@@ -1219,8 +1315,26 @@ print(response)
  Usage code similar to other proviers
 
  ### 16. `BasedGPT` - chat with GPT
- Usage code similar to other providers
+ ```
+ from webscout import BasedGPT
+
+ # Initialize the BasedGPT provider
+ basedgpt = BasedGPT(
+ is_conversation=True, # Chat conversationally
+ max_tokens=600, # Maximum tokens to generate
+ timeout=30, # HTTP request timeout
+ intro="You are a helpful and friendly AI.", # Introductory prompt
+ filepath="chat_history.txt", # File to store conversation history
+ update_file=True, # Update the chat history file
+ )
 
+ # Send a prompt to the AI
+ prompt = "What is the meaning of life?"
+ response = basedgpt.chat(prompt)
+
+ # Print the AI's response
+ print(response)
+ ```
  ### 17. `DeepSeek` -chat with deepseek
  ```python
  from webscout import DeepSeek
@@ -1253,6 +1367,60 @@ while True:
  r = ai.chat(prompt)
  print(r)
  ```
+ ### 18. `Deepinfra`
+ ```python
+ from webscout import DeepInfra
+
+ ai = DeepInfra(
+ is_conversation=True,
+ model= "Qwen/Qwen2-72B-Instruct",
+ max_tokens=800,
+ timeout=30,
+ intro=None,
+ filepath=None,
+ update_file=True,
+ proxies={},
+ history_offset=10250,
+ act=None,
+ )
+
+ prompt = "what is meaning of life"
+
+ response = ai.ask(prompt)
+
+ # Extract and print the message from the response
+ message = ai.get_message(response)
+ print(message)
+ ```
+
+ ### 19. `Deepinfra` - VLM
+ ```python
+ from webscout.Provider import VLM
+
+ # Load your image
+ image_path = r"C:\Users\koula\OneDrive\Desktop\Webscout\photo_2024-03-25_19-23-40.jpg"
+
+ vlm_instance = VLM(model="llava-hf/llava-1.5-7b-hf", is_conversation=True, max_tokens=600, timeout=30, system_prompt="You are a Helpful AI.")
+ image_base64 = vlm_instance.encode_image_to_base64(image_path)
+
+ prompt = {
+ "content": "What is in this image?",
+ "image": image_base64
+ }
+
+ # Generate a response
+ response = vlm_instance.chat(prompt)
+ print(response)
+
+ ```
+ ### 20. `VTLchat` - Free gpt3.5
+ ```python
+ from webscout import VTLchat
+
+ provider = VTLchat()
+ response = provider.chat("Hello, how are you?")
+ print(response)
+ ```
  ### `LLM`
  ```python
  from webscout.LLM import LLM
@@ -1300,78 +1468,7 @@ thread = Thread(model, formats.phi3)
  # 4. Start interacting with the model
  thread.interact()
  ```
- ### `Function-calling-local-llm`
- ```python
- from webscout.Local import Model, Thread, formats
- from webscout import DeepWEBS
- from webscout.Local.utils import download_model
- from webscout.Local.model import Model
- from webscout.Local.thread import Thread
- from webscout.Local import formats
- from webscout.Local.samplers import SamplerSettings
- def deepwebs_search(query, max_results=5):
- """Performs a web search using DeepWEBS and returns results as JSON."""
- deepwebs = DeepWEBS()
- search_config = DeepWEBS.DeepSearch(
- queries=[query],
- max_results=max_results,
- extract_webpage=False,
- safe=False,
- types=["web"],
- overwrite_query_html=True,
- overwrite_webpage_html=True,
- )
- search_results = deepwebs.queries_to_search_results(search_config)
- formatted_results = []
- for result in search_results[0]: # Assuming only one query
- formatted_results.append(f"Title: {result['title']}\nURL: {result['url']}\n")
- return "\n".join(formatted_results)
-
- # Load your model
- repo_id = "OEvortex/HelpingAI-9B"
- filename = "helpingai-9b.Q4_0.gguf"
- model_path = download_model(repo_id, filename, token='')
 
- # 2. Load the model
- model = Model(model_path, n_gpu_layers=10)
-
- # Create a Thread
- system_prompt = "You are a helpful AI assistant. Respond to user queries concisely. If a user asks for information that requires a web search, use the `deepwebs_search` tool. Do not call the tool if it is not necessary."
- sampler = SamplerSettings(temp=0.7, top_p=0.9) # Adjust these values as needed
- # 4. Create a custom chatml format with your system prompt
- custom_chatml = formats.chatml.copy()
- custom_chatml['system_content'] = system_prompt
- thread = Thread(model, custom_chatml, sampler=sampler)
- # Add the deepwebs_search tool
- thread.add_tool({
- "type": "function",
- "function": {
- "name": "deepwebs_search",
- "description": "Performs a web search using DeepWEBS and returns the title and URLs of the results.",
- "execute": deepwebs_search,
- "parameters": {
- "type": "object",
- "properties": {
- "query": {
- "type": "string",
- "description": "The query to search on the web",
- },
- "max_results": {
- "type": "integer",
- "description": "Maximum number of search results (default: 5)",
- },
- },
- "required": ["query"],
- },
- },
- })
-
- # Start interacting with the model
- while True:
- user_input = input("You: ")
- response = thread.send(user_input)
- print("Bot: ", response)
- ```
  ### `Local-rawdog`
  ```python
  import webscout.Local as ws
@@ -1532,94 +1629,7 @@ if __name__ == "__main__":
  else:
  print("No response")
  ```
- ### LLM with deepwebs
- ```python
- from __future__ import annotations
- from typing import List, Optional
- from webscout.LLM import LLM
- from webscout import DeepWEBS
- import warnings
-
- system_message: str = (
- "As an AI assistant, I have been designed with advanced capabilities, including real-time access to online resources. This enables me to enrich our conversations and provide you with informed and accurate responses, drawing from a vast array of information. With each interaction, my goal is to create a seamless and meaningful connection, offering insights and sharing relevant content."
- "My directives emphasize the importance of respect, impartiality, and intellectual integrity. I am here to provide unbiased responses, ensuring an ethical and respectful exchange. I will respect your privacy and refrain from sharing any personal information that may be obtained during our conversations or through web searches, only utilizing web search functionality when necessary to provide the most accurate and up-to-date information."
- "Together, let's explore a diverse range of topics, creating an enjoyable and informative experience, all while maintaining the highest standards of privacy and respect"
- )
-
- # Ignore the specific UserWarning
- warnings.filterwarnings("ignore", category=UserWarning, module="curl_cffio", lineno=205)
-
- LLM = LLM(model="mistralai/Mixtral-8x22B-Instruct-v0.1", system_message=system_message)
-
- def perform_web_search(query):
- # Initialize the DeepWEBS class
- D = DeepWEBS()
-
- # Set up the search parameters
- search_params = D.DeepSearch(
- queries=[query], # Query to search
- result_num=10, # Number of search results
- safe=True, # Enable SafeSearch
- types=["web"], # Search type: web
- extract_webpage=True, # True for extracting webpages
- overwrite_query_html=True,
- overwrite_webpage_html=True,
- )
-
- # Execute the search and retrieve results
- results = D.queries_to_search_results(search_params)
- return results
-
- def chat(user_input: str, result_num: int = 10) -> Optional[str]:
- """
- Chat function to perform a web search based on the user input and generate a response using the LLM model.
 
- Parameters
- ----------
- user_input : str
- The user input to be used for the web search
- max_results : int, optional
- The maximum number of search results to include in the response, by default 10
-
- Returns
- -------
- Optional[str]
- The response generated by the LLM model, or None if there is no response
- """
- # Perform a web search based on the user input
- search_results = perform_web_search(user_input)
-
- # Extract URLs from search results
- url_results = []
- for result in search_results[0]['query_results']:
- url_results.append(f"{result['title']} ({result['site']}): {result['url']}")
-
- # Format search results
- formatted_results = "\n".join(url_results)
-
- # Define the messages to be sent, including the user input, search results, and system message
- messages = [
- {"role": "user", "content": f"User question is:\n{user_input}\nwebsearch results are:\n{formatted_results}"},
- ]
-
- # Use the chat method to get the response
- response = LLM.chat(messages)
- return response
-
- if __name__ == "__main__":
- while True:
- # Get the user input
- user_input = input("User: ")
-
- # Perform a web search based on the user input
- response = chat(user_input)
-
- # Print the response
- if response:
- print("AI:", response)
- else:
- print("No response")
- ```
  ## `Webai` - terminal gpt and a open interpeter
 
  ```python
@@ -1661,7 +1671,7 @@ class TaskExecutor:
  self._proxy_path: str = None # Path to proxy configuration
 
  # History Management
- self._history_filepath: str = None
+ self._history_filepath: str = "history.txt"
  self._update_history_file: bool = True
  self._history_offset: int = 10250
 
@@ -1672,7 +1682,7 @@ class TaskExecutor:
  # Optional Features
  self._web_search_enabled: bool = False # Enable web search
  self._rawdog_enabled: bool = True
- self._internal_script_execution_enabled: bool = False
+ self._internal_script_execution_enabled: bool = True
  self._script_confirmation_required: bool = False
  self._selected_interpreter: str = "python"
  self._selected_optimizer: str = "code"
@@ -1700,6 +1710,9 @@ class TaskExecutor:
  "chatgptuk": webscout.ChatGPTUK,
  "poe": webscout.POE,
  "basedgpt": webscout.BasedGPT,
+ "deepseek": webscout.DeepSeek,
+ "deepinfra": webscout.DeepInfra,
+ "opengenptv2": webscout.OPENGPTv2
  }
 
  # Initialize Rawdog if enabled
@@ -1829,13 +1842,26 @@ class TaskExecutor:
  """
  try:
  is_feedback = self._rawdog_instance.main(response)
+ if is_feedback and "PREVIOUS SCRIPT EXCEPTION" in is_feedback:
+ self._console.print(Markdown(f"LLM: {is_feedback}"))
+ error_message = is_feedback.split("PREVIOUS SCRIPT EXCEPTION:\n")[1].strip()
+ # Generate a solution for the error and execute it
+ error_solution_query = (
+ f"The following code was executed and resulted in an error:\n\n"
+ f"{response}\n\n"
+ f"Error: {error_message}\n\n"
+ f"Please provide a solution to fix this error in the code and execute it."
+ )
+ try:
+ new_response = self._ai_model.chat(error_solution_query)
+ self._handle_rawdog_response(new_response)
+ except webscout.exceptions.FailedToGenerateResponseError as e:
+ self._console.print(Markdown(f"LLM: [red]Error while generating solution: {e}[/red]"))
+ else:
+ self._console.print(Markdown("LLM: (Script executed successfully)"))
  except Exception as e:
  self._console.print(Markdown(f"LLM: [red]Error: {e}[/red]"))
- return
- if is_feedback:
- self._console.print(Markdown(f"LLM: {is_feedback}"))
- else:
- self._console.print(Markdown("LLM: (Script executed successfully)"))
+
 
  async def process_async_query(self, query: str) -> None:
  """