webscout 1.3.0__py3-none-any.whl → 1.3.2__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of webscout was flagged by the registry's automated checks.
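A wheel is just a zip archive, so a comparison like the diff below can be reproduced locally from the two `.whl` files (e.g. fetched with `pip download webscout==1.3.0 --no-deps`). A minimal sketch; the helper names and the toy archives are illustrative, not part of webscout:

```python
import io
import zipfile


def wheel_members(data: bytes) -> dict:
    """Map each archive member name to its content."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        return {name: zf.read(name) for name in zf.namelist()}


def diff_wheels(old: bytes, new: bytes) -> dict:
    """Classify members as added, removed, or changed between two archives."""
    a, b = wheel_members(old), wheel_members(new)
    return {
        "added": sorted(set(b) - set(a)),
        "removed": sorted(set(a) - set(b)),
        "changed": sorted(n for n in set(a) & set(b) if a[n] != b[n]),
    }


def make_zip(files: dict) -> bytes:
    """Build an in-memory zip so the example needs no downloaded wheels."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name, content in files.items():
            zf.writestr(name, content)
    return buf.getvalue()


old = make_zip({"webscout/version.py": b"1.3.0", "webscout/LLM.py": b"old"})
new = make_zip({"webscout/version.py": b"1.3.2", "webscout/g4f.py": b"new"})
print(diff_wheels(old, new))
# {'added': ['webscout/g4f.py'], 'removed': ['webscout/LLM.py'], 'changed': ['webscout/version.py']}
```

Running the same classification over the real 1.3.0 and 1.3.2 wheels yields the file-level changes summarized in the RECORD hunk at the end of this diff.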

@@ -1,7 +1,7 @@
 Metadata-Version: 2.1
 Name: webscout
-Version: 1.3.0
-Summary: Search for words, documents, images, videos, news, maps and text translation using the Google, DuckDuckGo.com, yep.com, phind.com, you.com, etc Also containes AI models and now can transcribe yt videos
+Version: 1.3.2
+Summary: Search for words, documents, images, videos, news, maps and text translation using the Google, DuckDuckGo.com, yep.com, phind.com, you.com, etc Also containes AI models, can transcribe yt videos and have TTS support
 Author: OEvortex
 Author-email: helpingai5@gmail.com
 License: HelpingAI Simplified Universal License
@@ -64,7 +64,6 @@ Search for words, documents, images, videos, news, maps and text translation usi
 - [Table of Contents](#table-of-contents)
 - [Install](#install)
 - [CLI version](#cli-version)
-- [CLI version of webscout.AI](#cli-version-of-webscoutai)
 - [CLI to use LLM](#cli-to-use-llm)
 - [Regions](#regions)
 - [Transcriber](#transcriber)
@@ -97,8 +96,10 @@ Search for words, documents, images, videos, news, maps and text translation usi
 - [7. `PERPLEXITY` - Search With PERPLEXITY](#7-perplexity---search-with-perplexity)
 - [8. `OpenGPT` - chat With OPENGPT](#8-opengpt---chat-with-opengpt)
 - [9. `KOBOLDIA` -](#9-koboldia--)
+- [10. `Sean` - chat With Sean](#10-sean---chat-with-sean)
 - [usage of special .LLM file from webscout (webscout.LLM)](#usage-of-special-llm-file-from-webscout-webscoutllm)
 - [`LLM`](#llm)
+- [`LLM` with internet](#llm-with-internet)
 
 ## Install
 ```python
@@ -124,19 +125,6 @@ python -m webscout --help
 
 
 
-## CLI version of webscout.AI
-| Command | Description |
-|-----------------------------------------------|--------------------------------------------------------------------------------------------------------|
-| `python -m webscout.AI phindsearch --prompt "your search query"` | CLI function to perform a search query using Webscout.AI's Phindsearch feature. |
-| `python -m webscout.AI yepchat --message "your_message_here"` | CLI function to send a message using Webscout.AI's Yepchat feature. |
-| `python -m webscout.AI youchat --prompt "your_prompt_here"` | CLI function to generate a response based on a prompt using Webscout.AI's Youchat feature. |
-| `python -m webscout.AI gemini --message "tell me about gemma 7b"` | CLI function to get information about a specific topic using Webscout.AI's Gemini feature. |
-| `python -m webscout.AI prodia --prompt "car"` | CLI function to generate content related to a prompt using Webscout.AI's Prodia feature. |
-| `python -m webscout.AI blackboxai --prompt "Your prompt here"` | CLI function to perform a search using Webscout.AI's Blackbox search feature. |
-| `python -m webscout.AI perplexity --prompt "Your prompt here"` | CLI function to perform a search using Webscout.AI's PERPLEXITY feature. |
-| `python -m webscout.AI opengpt --prompt "Your prompt here"` | CLI function to perform a search using Webscout.AI's OPENGPT feature. |
-
-
 ## CLI to use LLM
 ```python
 python -m webscout.LLM model_name
@@ -696,6 +684,16 @@ response = koboldai.ask(prompt)
 message = koboldai.get_message(response)
 print(message)
 
+```
+### 10. `Sean` - chat With Sean
+```python
+from webscout.AI import Sean
+
+a = Sean(is_conversation=True, max_tokens=8000, timeout=30)
+# This example sends a simple greeting and prints the response
+prompt = "tell me about india"
+response_str = a.chat(prompt)
+print(response_str)
 ```
 
 ## usage of special .LLM file from webscout (webscout.LLM)
@@ -704,11 +702,102 @@ print(message)
 ```python
 from webscout.LLM import LLM
 
-def chat(model_name, system_message="You are Jarvis"):# system prompt
-    AI = LLM(model_name, system_message)
-    AI.chat()
+# Read the system message from the file
+with open('system.txt', 'r') as file:
+    system_message = file.read()
+
+# Initialize the LLM class with the model name and system message
+llm = LLM(model="microsoft/WizardLM-2-8x22B", system_message=system_message)
+
+while True:
+    # Get the user input
+    user_input = input("User: ")
+
+    # Define the messages to be sent
+    messages = [
+        {"role": "user", "content": user_input}
+    ]
+
+    # Use the mistral_chat method to get the response
+    response = llm.chat(messages)
+
+    # Print the response
+    print("AI: ", response)
+```
+### `LLM` with internet
+```python
+from __future__ import annotations
+from typing import List, Optional
+
+from webscout import LLM
+from webscout import WEBS
+import warnings
+
+system_message: str = (
+    "As AI, you possess internet access and are capable of executing real-time web searches based on user inputs. "
+    "You shall utilize this capability to enrich conversations, offer informed insights, and augment your ability to "
+    "respond accurately and thoroughly. However, refrain from stating 'You have provided a list of strings,' ensuring "
+    "seamless interactions with users. Embrace a responsive demeanor, harnessing available online resources to address "
+    "queries, share pertinent content, and facilitate meaningful exchanges. By doing so, you create value through "
+    "connection and engagement, ultimately enhancing overall user satisfaction and experience. Additionally, "
+    "continue upholding the principles of respect, impartiality, and intellectual integrity throughout all interactions."
+)
+
+# Ignore the specific UserWarning
+warnings.filterwarnings("ignore", category=UserWarning, module="curl_cffi.aio", lineno=205)
+LLM = LLM(model="meta-llama/Meta-Llama-3-70B-Instruct", system_message=system_message)
+
+
+def chat(
+    user_input: str, webs: WEBS, max_results: int = 10
+) -> Optional[str]:
+    """
+    Chat function to perform a web search based on the user input and generate a response using the LLM model.
+
+    Parameters
+    ----------
+    user_input : str
+        The user input to be used for the web search
+    webs : WEBS
+        The web search instance to be used to perform the search
+    max_results : int, optional
+        The maximum number of search results to include in the response, by default 10
+
+    Returns
+    -------
+    Optional[str]
+        The response generated by the LLM model, or None if there is no response
+    """
+    # Perform a web search based on the user input
+    search_results: List[str] = []
+    for r in webs.text(
+        user_input, region="wt-wt", safesearch="off", timelimit="y", max_results=max_results
+    ):
+        search_results.append(str(r))  # Convert each result to a string
+
+    # Define the messages to be sent, including the user input, search results, and system message
+    messages = [
+        {"role": "user", "content": user_input + "\n" + "websearch results are:" + "\n".join(search_results)},
+    ]
+
+    # Use the chat method to get the response
+    response = LLM.chat(messages)
+
+    return response
+
 
 if __name__ == "__main__":
-    model_name = "mistralai/Mistral-7B-Instruct-v0.2" # name of the model you wish to use It supports ALL text generation models on deepinfra.com.
-    chat(model_name)
+    while True:
+        # Get the user input
+        user_input = input("User: ")
+
+        # Perform a web search based on the user input
+        with WEBS() as webs:
+            response = chat(user_input, webs)
+
+        # Print the response
+        if response:
+            print("AI:", response)
+        else:
+            print("No response")
 ```
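The "LLM with internet" example added in this release boils down to a simple pattern: run a search, stringify each result, and append the results to the user's message before sending it to the model. A minimal, self-contained sketch of that prompt assembly; `fake_search` and `fake_llm` are illustrative stubs, not webscout APIs, and a newline is added after the results header (which the release's own concatenation omits):

```python
from typing import Callable, List


def build_prompt(user_input: str, search_results: List[str]) -> str:
    """Combine the raw question with stringified search results."""
    return user_input + "\nwebsearch results are:\n" + "\n".join(search_results)


def answer(user_input: str,
           search: Callable[[str], List[str]],
           llm: Callable[[str], str]) -> str:
    """Search, stringify each result, and feed the combined prompt to the model."""
    results = [str(r) for r in search(user_input)]
    return llm(build_prompt(user_input, results))


# Stand-ins: a search returning dict results and a model echoing the question.
fake_search = lambda q: [{"title": "India", "body": "A country in South Asia."}]
fake_llm = lambda prompt: "OK: " + prompt.splitlines()[0]

print(answer("tell me about india", fake_search, fake_llm))
# OK: tell me about india
```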
@@ -10,26 +10,28 @@ DeepWEBS/networks/webpage_fetcher.py,sha256=d5paDTB3wa_w6YWmLV7RkpAj8Lh8ztuUuyfe
 DeepWEBS/utilsdw/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
 DeepWEBS/utilsdw/enver.py,sha256=vstxg_5P3Rwo1en6oPcuc2SBiATJqxi4C7meGmw5w0M,1754
 DeepWEBS/utilsdw/logger.py,sha256=Z0nFUcEGyU8r28yKiIyvEtO26xxpmJgbvNToTfwZecc,8174
-webscout/AI.py,sha256=WYi-JbbiXtQM14juJ-WgsahgQEEb7U82YElDX9TfHAI,57985
+webscout/AI.py,sha256=9ZEctvGx558mmCva6c9lB5c0pbU-of5azle2F4Mpqhg,93054
 webscout/AIbase.py,sha256=vQi2ougu5bG-QdmoYmxCQsOg7KTEgG7EF6nZh5qqUGw,2343
-webscout/AIutel.py,sha256=fNN4mmjXcxjJGq2CVJP1MU2oQ78p8OyExQBjVif6e-k,24123
+webscout/AIutel.py,sha256=j8NY4AgJbq3YLosX2R3QShtmjIM0UivXS_iCoMSIZiY,24126
 webscout/DWEBS.py,sha256=QT-7-dUgWhQ_H7EVZD53AVyXxyskoPMKCkFIpzkN56Q,7332
 webscout/HelpingAI.py,sha256=YeZw0zYVHMcBFFPNdd3_Ghpm9ebt_EScQjHO_IIs4lg,8103
-webscout/LLM.py,sha256=XByJPiATLA_57FBWKw18Xx_PGRCPOj-GJE96aQH1k2Y,3309
-webscout/__init__.py,sha256=1DBQX84kGCCzGNFV2MtNqASb_2WlM7BuM0enCqnOx8I,550
+webscout/LLM.py,sha256=CiDz0okZNEoXuxMwadZnwRGSLpqk2zg0vzvXSxQZjcE,1910
+webscout/__init__.py,sha256=OOnFV-M4YkzpzMor_42RHbBFidy9NWrTgbJ2ODQ7eWw,977
 webscout/__main__.py,sha256=ZtTRgsRjUi2JOvYFLF1ZCh55Sdoz94I-BS-TlJC7WDU,126
 webscout/cli.py,sha256=F888fdrFUQgczMBN4yMOSf6Nh-IbvkqpPhDsbnA2FtQ,17059
 webscout/exceptions.py,sha256=4AOO5wexeL96nvUS-badcckcwrPS7UpZyAgB9vknHZE,276
+webscout/g4f.py,sha256=9ovf-gLMDC3Bt6zGrwmZ3_PJh5fSVR4ipOlsaYxbgU0,16358
 webscout/models.py,sha256=5iQIdtedT18YuTZ3npoG7kLMwcrKwhQ7928dl_7qZW0,692
 webscout/transcriber.py,sha256=EddvTSq7dPJ42V3pQVnGuEiYQ7WjJ9uyeR9kMSxN7uY,20622
 webscout/utils.py,sha256=c_98M4oqpb54pUun3fpGGlCerFD6ZHUbghyp5b7Mwgo,2605
-webscout/version.py,sha256=R-_i2z0hXcrgq3LLNeJVLbwdS54wc3y_4KzSPxzw8j0,25
+webscout/version.py,sha256=M1Td5s3GIeuSLllLIgV1S7hL3ooGv_R6ahu1mybkXAI,25
 webscout/voice.py,sha256=1Ids_2ToPBMX0cH_UyPMkY_6eSE9H4Gazrl0ujPmFag,941
+webscout/webai.py,sha256=Kkbdg1jH5VviLAFqdpqb3hv1Ptgi3UyiAFJFGvqFk7k,74262
 webscout/webscout_search.py,sha256=3_lli-hDb8_kCGwscK29xuUcOS833ROgpNhDzrxh0dk,3085
 webscout/webscout_search_async.py,sha256=Y5frH0k3hLqBCR-8dn7a_b7EvxdYxn6wHiKl3jWosE0,40670
-webscout-1.3.0.dist-info/LICENSE.md,sha256=mRVwJuT4SXC5O93BFdsfWBjlXjGn2Np90Zm5SocUzM0,3150
-webscout-1.3.0.dist-info/METADATA,sha256=E9A3XzQIkmBA6C6E0PXYyFxdqqPCH2PHYSVSKUJczcc,28258
-webscout-1.3.0.dist-info/WHEEL,sha256=oiQVh_5PnQM0E3gPdiz09WCNmwiHDMaGer_elqB3coM,92
-webscout-1.3.0.dist-info/entry_points.txt,sha256=8-93eRslYrzTHs5E-6yFRJrve00C9q-SkXJD113jzRY,197
-webscout-1.3.0.dist-info/top_level.txt,sha256=OD5YKy6Y3hldL7SmuxsiEDxAG4LgdSSWwzYk22MF9fk,18
-webscout-1.3.0.dist-info/RECORD,,
+webscout-1.3.2.dist-info/LICENSE.md,sha256=mRVwJuT4SXC5O93BFdsfWBjlXjGn2Np90Zm5SocUzM0,3150
+webscout-1.3.2.dist-info/METADATA,sha256=3lv1MpstUsN8doyAlMKv19dV2KkIRtUtE9JhQut95sk,30440
+webscout-1.3.2.dist-info/WHEEL,sha256=oiQVh_5PnQM0E3gPdiz09WCNmwiHDMaGer_elqB3coM,92
+webscout-1.3.2.dist-info/entry_points.txt,sha256=8-93eRslYrzTHs5E-6yFRJrve00C9q-SkXJD113jzRY,197
+webscout-1.3.2.dist-info/top_level.txt,sha256=OD5YKy6Y3hldL7SmuxsiEDxAG4LgdSSWwzYk22MF9fk,18
+webscout-1.3.2.dist-info/RECORD,,
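The `sha256=` values in the RECORD hunk above follow the wheel RECORD digest format (PEP 427): a SHA-256 digest, urlsafe-base64 encoded with trailing `=` padding stripped, followed by the file size. A small sketch for building or verifying such an entry; hashing zero bytes reproduces the `DeepWEBS/utilsdw/__init__.py` line, since that file is empty:

```python
import base64
import hashlib


def record_entry(path: str, data: bytes) -> str:
    """Build a wheel RECORD line: path,sha256=<urlsafe-b64 digest>,<size>."""
    digest = hashlib.sha256(data).digest()
    b64 = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return f"{path},sha256={b64},{len(data)}"


# An empty file reproduces the unchanged __init__.py entry in the diff.
print(record_entry("DeepWEBS/utilsdw/__init__.py", b""))
# DeepWEBS/utilsdw/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
```

Comparing entries this way (changed hash and size for `webscout/AI.py`, brand-new lines for `webscout/g4f.py` and `webscout/webai.py`) is what makes the RECORD hunk a compact summary of the release's file-level changes.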