pygpt-net 2.4.36__py3-none-any.whl → 2.4.37__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
{pygpt_net-2.4.36.dist-info → pygpt_net-2.4.37.dist-info}/METADATA

@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: pygpt-net
- Version: 2.4.36
+ Version: 2.4.37
  Summary: Desktop AI Assistant powered by models: OpenAI o1, GPT-4o, GPT-4, GPT-4 Vision, GPT-3.5, DALL-E 3, Llama 3, Mistral, Gemini, Claude, Bielik, and other models supported by Langchain, Llama Index, and Ollama. Features include chatbot, text completion, image generation, vision analysis, speech-to-text, internet access, file handling, command execution and more.
  Home-page: https://pygpt.net
  License: MIT
@@ -92,7 +92,7 @@ Description-Content-Type: text/markdown

  [![pygpt](https://snapcraft.io/pygpt/badge.svg)](https://snapcraft.io/pygpt)

- Release: **2.4.36** | build: **2024.11.28** | Python: **>=3.10, <3.12**
+ Release: **2.4.37** | build: **2024.11.30** | Python: **>=3.10, <3.12**

  > Official website: https://pygpt.net | Documentation: https://pygpt.readthedocs.io
  >
@@ -187,13 +187,13 @@ Linux version requires `GLIBC` >= `2.35`.
  You can install **PyGPT** directly from Snap Store:

  ```commandline
- $ sudo snap install pygpt
+ sudo snap install pygpt
  ```

  To manage future updates just use:

  ```commandline
- $ sudo snap refresh pygpt
+ sudo snap refresh pygpt
  ```

  [![Get it from the Snap Store](https://snapcraft.io/static/images/badges/en/snap-store-black.svg)](https://snapcraft.io/pygpt)
@@ -201,13 +201,13 @@ $ sudo snap refresh pygpt
  **Using camera:** to use the camera in the Snap version you must connect the camera with:

  ```commandline
- $ sudo snap connect pygpt:camera
+ sudo snap connect pygpt:camera
  ```

  **Using microphone:** to use the microphone in the Snap version you must connect the microphone with:

  ```commandline
- $ sudo snap connect pygpt:audio-record :audio-record
+ sudo snap connect pygpt:audio-record :audio-record
  ```

  **Connecting IPython in Docker in Snap version**:
@@ -215,11 +215,11 @@ $ sudo snap connect pygpt:audio-record :audio-record
  To use IPython in the Snap version, you must connect PyGPT to the Docker daemon:

  ```commandline
- $ sudo snap connect pygpt:docker-executables docker:docker-executables
+ sudo snap connect pygpt:docker-executables docker:docker-executables
  ```

  ````commandline
- $ sudo snap connect pygpt:docker docker:docker-daemon
+ sudo snap connect pygpt:docker docker:docker-daemon
  ````

  ## PyPi (pip)
@@ -229,20 +229,20 @@ The application can also be installed from `PyPi` using `pip install`:
  1. Create a virtual environment:

  ```commandline
- $ python3 -m venv venv
- $ source venv/bin/activate
+ python3 -m venv venv
+ source venv/bin/activate
  ```

  2. Install from PyPi:

  ``` commandline
- $ pip install pygpt-net
+ pip install pygpt-net
  ```

  3. Once installed, run the command to start the application:

  ``` commandline
- $ pygpt
+ pygpt
  ```

  ## Running from GitHub source code
@@ -254,27 +254,27 @@ An alternative method is to download the source code from `GitHub` and execute t
  1. Clone git repository or download .zip file:

  ```commandline
- $ git clone https://github.com/szczyglis-dev/py-gpt.git
- $ cd py-gpt
+ git clone https://github.com/szczyglis-dev/py-gpt.git
+ cd py-gpt
  ```

  2. Create a new virtual environment:

  ```commandline
- $ python3 -m venv venv
- $ source venv/bin/activate
+ python3 -m venv venv
+ source venv/bin/activate
  ```

  3. Install requirements:

  ```commandline
- $ pip install -r requirements.txt
+ pip install -r requirements.txt
  ```

  4. Run the application:

  ```commandline
- $ python3 run.py
+ python3 run.py
  ```

  ### Install with Poetry
@@ -282,33 +282,33 @@ $ python3 run.py
  1. Clone git repository or download .zip file:

  ```commandline
- $ git clone https://github.com/szczyglis-dev/py-gpt.git
- $ cd py-gpt
+ git clone https://github.com/szczyglis-dev/py-gpt.git
+ cd py-gpt
  ```

  2. Install Poetry (if not installed):

  ```commandline
- $ pip install poetry
+ pip install poetry
  ```

  3. Create a new virtual environment that uses Python 3.10:

  ```commandline
- $ poetry env use python3.10
- $ poetry shell
+ poetry env use python3.10
+ poetry shell
  ```

  4. Install requirements:

  ```commandline
- $ poetry install
+ poetry install
  ```

  5. Run the application:

  ```commandline
- $ poetry run python3 run.py
+ poetry run python3 run.py
  ```

  **Tip**: you can use `PyInstaller` to create a compiled version of
@@ -327,19 +327,19 @@ Reinstalling the application may fix this problem.
  ...then install `libxcb`:

  ```commandline
- $ sudo apt install libxcb-cursor0
+ sudo apt install libxcb-cursor0
  ```

  If you have problems with audio on Linux, try installing `portaudio19-dev` and/or `libasound2`:

  ```commandline
- $ sudo apt install portaudio19-dev
+ sudo apt install portaudio19-dev
  ```

  ```commandline
- $ sudo apt install libasound2
- $ sudo apt install libasound2-data
- $ sudo apt install libasound2-plugins
+ sudo apt install libasound2
+ sudo apt install libasound2-data
+ sudo apt install libasound2-plugins
  ```

  **Problems with GLIBC on Linux**
@@ -355,7 +355,7 @@ when trying to run the compiled version for Linux, try updating GLIBC to version


  ```commandline
- $ sudo snap connect pygpt:camera
+ sudo snap connect pygpt:camera
  ```

  **Access to microphone in Snap version:**
@@ -363,7 +363,7 @@ $ sudo snap connect pygpt:camera
  To use the microphone in the Snap version you must connect the microphone with:

  ```commandline
- $ sudo snap connect pygpt:audio-record :audio-record
+ sudo snap connect pygpt:audio-record :audio-record
  ```

  **Windows and VC++ Redistributable**
@@ -382,13 +382,13 @@ It may also be necessary to add the path `C:\path\to\venv\Lib\python3.x\site-pac
  If you have problems with the `WebEngine / Chromium` renderer, you can force legacy mode by launching the app with command-line arguments:

  ```commandline
- $ python3 run.py --legacy=1
+ python3 run.py --legacy=1
  ```

  and to force-disable OpenGL hardware acceleration:

  ```commandline
- $ python3 run.py --disable-gpu=1
+ python3 run.py --disable-gpu=1
  ```

  You can also manually enable legacy mode by editing the config file - open `%WORKDIR%/config.json` in an editor and set the following options:
@@ -693,11 +693,11 @@ Built-in file loaders:
  - Webpages (crawling any webpage content)
  - YouTube (transcriptions)

- You can configure data loaders in `Settings / LlamaIndex / Data Loaders` by providing list of keyword arguments for specified loaders.
+ You can configure data loaders in `Settings / Indexes (LlamaIndex) / Data Loaders` by providing a list of keyword arguments for the specified loaders.
  You can also develop and provide your own custom loader and register it within the application.

  LlamaIndex is also integrated with the context database - you can use data from the database (your context history) as additional context in a discussion.
- Options for indexing existing context history or enabling real-time indexing new ones (from database) are available in `Settings / LlamaIndex` section.
+ Options for indexing existing context history or enabling real-time indexing of new entries (from the database) are available in the `Settings / Indexes (LlamaIndex)` section.

  **WARNING:** remember that when indexing content, API calls to the embedding model are used. Each indexing consumes additional tokens. Always control the number of tokens used on the OpenAI page.
@@ -759,7 +759,7 @@ You can set the limit of steps in such a loop by going to `Settings -> Agents an

  You can change the prompt used for evaluating the response in `Settings -> Prompts -> Agent: evaluation prompt in loop`. Here, you can adjust it to suit your needs, for example, by defining more or less critical feedback for the responses received.

- ## Agent (Legacy, Autonomous)
+ ## Agent (Autonomous)

  This is an older version of the Agent mode, still available as legacy. However, it is recommended to use the newer mode: `Agent (LlamaIndex)`.

@@ -907,11 +907,13 @@ The content from the uploaded attachments will be used in the current conversati

  - `Full context`: Provides best results. This mode attaches the entire content of the read file to the user's prompt. This process happens in the background and may require a large number of tokens if you uploaded extensive content.

- - `Query only`: The indexed attachment will only be queried in real-time using LlamaIndex. This operation does not require any additional tokens, but it may not provide access to the full content of the file 1:1.
+ - `RAG`: The indexed attachment will only be queried in real-time using LlamaIndex. This operation does not require any additional tokens, but it may not provide access to the full content of the file 1:1.

  - `Summary`: When queried, an additional query will be generated in the background and executed by a separate model to summarize the content of the attachment and return the required information to the main model. You can change the model used for summarization in the settings under the `Files and attachments` section.

- **Important**: When using `Full context` mode, the entire content of the file is included in the prompt, which can result in high token usage each time. If you want to reduce the number of tokens used, instead use the `Query only` option, which will only query the indexed attachment in the vector database to provide additional context.
+ In the `RAG` and `Summary` modes, you can enable an additional setting by going to `Settings -> Files and attachments -> Use history in RAG query`. This allows for better preparation of queries for RAG: when this option is turned on, the entire conversation context is considered rather than just the user's last query, so the index can be searched for additional context more effectively. In the `RAG limit` option, you can set a limit on how many recent entries in a discussion should be considered (`0 = no limit, default: 3`).
+
+ **Important**: When using `Full context` mode, the entire content of the file is included in the prompt, which can result in high token usage each time. If you want to reduce the number of tokens used, use the `RAG` option instead, which will only query the indexed attachment in the vector database to provide additional context.

  **Images as Additional Context**

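A rough sketch of what `Use history in RAG query` and `RAG limit` mean in practice - the helper below is a hypothetical illustration of the behavior described above, not PyGPT's actual internal code:

```python
# Hypothetical illustration of "Use history in RAG query" + "RAG limit";
# PyGPT's real logic lives in its attachment/context code, not here.
def build_rag_query(history: list[tuple[str, str]], last_query: str, limit: int = 3) -> str:
    if limit > 0:
        history = history[-limit:]  # keep only the N most recent entries
    turns = [f"User: {q}\nAssistant: {a}" for q, a in history]
    # with history included, the vector index is queried with more
    # conversational context than the last user message alone
    return "\n".join(turns + [f"User: {last_query}"])
```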
@@ -919,7 +921,7 @@ Files such as jpg, png, and similar images are a special case. By default, image

  **Uploading larger files and auto-index**

- To use the `Query only` mode, the file must be indexed in the vector database. This occurs automatically at the time of upload if the `Auto-index on upload` option in the `Attachments` tab is enabled. When uploading large files, such indexing might take a while - therefore, if you are using the `Full context` option, which does not use the index, you can disable the `Auto-index` option to speed up the upload of the attachment. In this case, it will only be indexed when the `Query only` option is called for the first time, and until then, attachment will be available in the form of `Full context` and `Summary`.
+ To use the `RAG` mode, the file must be indexed in the vector database. This occurs automatically at the time of upload if the `Auto-index on upload` option in the `Attachments` tab is enabled. When uploading large files, such indexing might take a while - therefore, if you are using the `Full context` option, which does not use the index, you can disable the `Auto-index` option to speed up the upload of the attachment. In this case, it will only be indexed when the `RAG` option is called for the first time, and until then, the attachment will be available in the form of `Full context` and `Summary`.

  ## Downloading files

@@ -1036,11 +1038,11 @@ How to use locally installed Llama 3 or Mistral models:

  For example, on Linux:

- ```$ curl -fsSL https://ollama.com/install.sh | sh```
+ ```curl -fsSL https://ollama.com/install.sh | sh```

  4) Run the model (e.g. Llama 3) locally on your machine. For example, on Linux:

- ```$ ollama run llama3.1```
+ ```ollama run llama3.1```

  5) Return to PyGPT and select the correct model from the models list to chat with the selected model using Ollama running locally.

@@ -1557,11 +1559,11 @@ You can find the installation instructions here: https://docs.docker.com/engine/
  To use IPython in the Snap version, you must connect PyGPT to the Docker daemon:

  ```commandline
- $ sudo snap connect pygpt:docker-executables docker:docker-executables
+ sudo snap connect pygpt:docker-executables docker:docker-executables
  ```

  ````commandline
- $ sudo snap connect pygpt:docker docker:docker-daemon
+ sudo snap connect pygpt:docker docker:docker-daemon
  ````

@@ -2800,6 +2802,16 @@ Config -> Settings...

  - `Directory for file downloads`: Subdirectory for downloaded files, e.g. in Assistants mode, inside "data". Default: "download"

+ - `Verbose mode`: Enables verbose mode when using an attachment as additional context.
+
+ - `Model for querying index`: Model to use for preparing the query and querying the index when the RAG option is selected.
+
+ - `Model for attachment content summary`: Model to use when generating a summary of a file's content when the Summary option is selected.
+
+ - `Use history in RAG query`: When enabled, the content of the entire conversation will be used when preparing a query if the mode is RAG or Summary.
+
+ - `RAG limit`: Applies only if `Use history in RAG query` is enabled. Specifies how many recent entries in the conversation will be used when generating a query for RAG. 0 = no limit.
+
  **Context**

  - `Context Threshold`: Sets the number of tokens reserved for the model to respond to the next prompt.
@@ -3051,22 +3063,6 @@ You can manually edit the configuration files in this directory (this is your wo

  ## Setting the Working Directory Using Command Line Arguments

- If you want to force set current workdir using command-line argument, use:
-
- ```
- python3 ./run.py --workdir="/path/to/workdir"
- ```
- or:
-
- ```
- pygpt.exe --workdir="/path/to/workdir"
- ```
- in binary version.
-
- Certainly! Here's the improved version:
-
- ## Setting the Working Directory Using Command Line Arguments
-
  To set the current working directory using a command-line argument, use:

  ```
@@ -3353,7 +3349,7 @@ If you want to only query index (without chat) you can enable `Query index only

  You can create a custom vector store provider or data loader for your data and develop a custom launcher for the application.

- See the section `Extending PyGPT / Adding custom Vector Store provider` for more details.
+ See the section `Extending PyGPT / Adding a custom Vector Store provider` for more details.

  # Updates

@@ -3651,6 +3647,8 @@ Syntax: `event name` - triggered on, `event data` *(data type)*:

  - `AI_NAME` - when preparing an AI name, `data['value']` *(string, name of the AI assistant)*

+ - `AGENT_PROMPT` - on agent prompt in eval mode, `data['value']` *(string, prompt)*
+
  - `AUDIO_INPUT_RECORD_START` - start audio input recording

  - `AUDIO_INPUT_RECORD_STOP` - stop audio input recording
@@ -3709,10 +3707,16 @@ Syntax: `event name` - triggered on, `event data` *(data type)*:

  - `POST_PROMPT` - after preparing a system prompt, `data['value']` *(string, system prompt)*

+ - `POST_PROMPT_ASYNC` - after preparing a system prompt, just before the request in the async thread, `data['value']` *(string, system prompt)*
+
+ - `POST_PROMPT_END` - after preparing a system prompt, just before the request in the async thread, at the very end, `data['value']` *(string, system prompt)*
+
  - `PRE_PROMPT` - before preparing a system prompt, `data['value']` *(string, system prompt)*

  - `SYSTEM_PROMPT` - when preparing a system prompt, `data['value']` *(string, system prompt)*

+ - `TOOL_OUTPUT_RENDER` - when rendering extra content from plugin tools, `data['content']` *(string, content)*
+
  - `UI_ATTACHMENTS` - when the attachment upload elements are rendered, `data['value']` *(bool, show True/False)*

  - `UI_VISION` - when the vision elements are rendered, `data['value']` *(bool, show True/False)*
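For plugin developers, a minimal sketch of reacting to the newly added events - assuming the `handle(event)` plugin interface that the README's Extending PyGPT section describes; the import paths and plugin class here are illustrative, not a verbatim API reference:

```python
# Illustrative only - assumes the event dispatch described above; import
# paths and the base class may differ in your PyGPT version.
from pygpt_net.core.events import Event
from pygpt_net.plugin.base import BasePlugin


class PromptAuditPlugin(BasePlugin):  # hypothetical example plugin
    def handle(self, event: Event, *args, **kwargs):
        data = event.data
        if event.name == Event.POST_PROMPT_END:
            # last chance to adjust the system prompt before the request
            data['value'] += "\nAnswer concisely."
        elif event.name == Event.AGENT_PROMPT:
            # inspect the agent prompt in eval mode
            print("agent prompt:", data['value'])
        elif event.name == Event.TOOL_OUTPUT_RENDER:
            # wrap extra tool output from plugins before it is rendered
            data['content'] = "[tool] " + data['content']
```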
@@ -3951,6 +3955,14 @@ may consume additional tokens that are not displayed in the main window.

  ## Recent changes:

+ **2.4.37 (2024-11-30)**
+
+ - The `Query only` mode in the `Uploaded` tab has been renamed to `RAG`.
+ - New options have been added under `Settings -> Files and Attachments`:
+   - `Use history in RAG query`: When enabled, the content of the entire conversation will be used when preparing a query if the mode is set to RAG or Summary.
+   - `RAG limit`: Applicable only if `Use history in RAG query` is enabled. It specifies the limit on how many recent entries in the conversation will be used when generating a query for RAG. A value of 0 indicates no limit.
+ - Cache: dynamic parts of the system prompt (from plugins) have been moved to the very end of the prompt stack to enable the use of prompt cache mechanisms in OpenAI.
+
  **2.4.36 (2024-11-28)**

  - Added a new command-line argument: --workdir="/path/to/workdir" to explicitly set the current working directory.
@@ -3984,7 +3996,7 @@ may consume additional tokens that are not displayed in the main window.

  - Added an option checkbox `Auto-index on upload` in the `Attachments` tab:

- **Tip:** To use the `Query only` mode, the file must be indexed in the vector database. This occurs automatically at the time of upload if the `Auto-index on upload` option in the `Attachments` tab is enabled. When uploading large files, such indexing might take a while - therefore, if you are using the `Full context` option, which does not use the index, you can disable the `Auto-index` option to speed up the upload of the attachment. In this case, it will only be indexed when the `Query only` option is called for the first time, and until then, attachment will be available in the form of `Full context` and `Summary`.
+ **Tip:** To use the `RAG` mode, the file must be indexed in the vector database. This occurs automatically at the time of upload if the `Auto-index on upload` option in the `Attachments` tab is enabled. When uploading large files, such indexing might take a while - therefore, if you are using the `Full context` option, which does not use the index, you can disable the `Auto-index` option to speed up the upload of the attachment. In this case, it will only be indexed when the `RAG` option is called for the first time, and until then, the attachment will be available in the form of `Full context` and `Summary`.

  - Added context menu options in the `Uploaded attachments` tab: `Open`, `Open Source directory` and `Open Storage directory`.

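One note on the cache entry in the 2.4.37 changelog above: OpenAI's prompt caching only matches requests that share an identical prompt prefix, so per-request content from plugins must come after the static part. A schematic sketch of the idea (not the application's actual assembly code):

```python
# Schematic only - not PyGPT's actual prompt assembly. The prompt cache
# matches identical prefixes, so the static prompt goes first and dynamic,
# per-request plugin content is appended at the very end.
def assemble_system_prompt(static_prompt: str, dynamic_parts: list[str]) -> str:
    return "\n\n".join([static_prompt] + dynamic_parts)

# the static prefix stays byte-identical across requests -> cacheable;
# only the tail (e.g. a real-time clock string) changes per request
prompt = assemble_system_prompt("You are a helpful assistant.", ["Current time: 12:00"])
```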
{pygpt_net-2.4.36.dist-info → pygpt_net-2.4.37.dist-info}/RECORD

@@ -1,9 +1,9 @@
- CHANGELOG.md,sha256=ze6FSM2VH2viP0X8-REeDLDNixH5XEWIpTe9a_TRieQ,76996
- README.md,sha256=yZC0MUnbYtMzRy8yYi06BQJ0JgKS1tIQ7irzFoP0GK4,162165
+ CHANGELOG.md,sha256=8wrPaTiIoD-6JD75ThU_LF2hY2ylHmRbm1yDRCwxack,77715
+ README.md,sha256=HsxtUFLetSFyO5HMmGzabAraLw7wHmhaT43ip0HAcBM,164212
  icon.png,sha256=CzcINJaU23a9hNjsDlDNbyuiEvKZ4Wg6DQVYF6SpuRg,13970
- pygpt_net/CHANGELOG.txt,sha256=OANVuWFOL2EVR5SgtCTfaNf8cEuTT8uib6oPWnwjTvQ,75552
+ pygpt_net/CHANGELOG.txt,sha256=ec7KOMpxOkqFAAI5ANHbw1Ks9g6C9Idg7bUAuLjj_HY,76268
  pygpt_net/LICENSE,sha256=6Ku72-zJ8wO5VIR87UoJ5P_coCVjPghaFL9ZF2jLp7E,1146
- pygpt_net/__init__.py,sha256=Lx1tpxBy21pZsDUu6TwVNXa0dK5sEmG4yr1Uw6Q6Ruo,1067
+ pygpt_net/__init__.py,sha256=DdP6xTl4Q9tdsmKWKGT6wILl4WgJkNVFgplTEcyBrKo,1067
  pygpt_net/app.py,sha256=Q7g-2UlF7FlEOBytbGb_nrjT4zEio2HzfzQd687QuUo,15930
  pygpt_net/config.py,sha256=Qc1FOBtTf3O6A6-6KoqUGtoJ0u8hXQeowvCVbZFwtik,16405
  pygpt_net/container.py,sha256=BemiVZPpPNIzfB-ZvnZeeBPFu-AcX2c30OqYFylEjJc,4023
@@ -28,7 +28,7 @@ pygpt_net/controller/calendar/__init__.py,sha256=aAYEAex5UNoB7LHdNSKssls2Rdc877E
  pygpt_net/controller/calendar/note.py,sha256=B19cNKyD9UODZo7LdyST0U3I3519jsqVgWJp5UDTgVU,10841
  pygpt_net/controller/camera.py,sha256=t_ZgevP3zrsBe_A4Yx_WO4PfMMfYbsezd9NdQzkMpOQ,16522
  pygpt_net/controller/chat/__init__.py,sha256=4ZbmjVXOBESTmbajiykz_TiJ5cYptUbUJU7WWp2XSlE,3062
- pygpt_net/controller/chat/attachment.py,sha256=ip0wmNV06DrdAwbjH_56kOSNP3uVVPtgeSjk64cm8lE,21523
+ pygpt_net/controller/chat/attachment.py,sha256=baR7EnW81DxVgIiHYcSGCSPqLPqiGGBSA8dFccsJub8,20530
  pygpt_net/controller/chat/audio.py,sha256=1eX_kIiRLFBDrNAPVthj-1ftknhdOkn3jWBuC7kT79c,3181
  pygpt_net/controller/chat/command.py,sha256=_lXHki5pbTi8Pvz_BzP3VxGUM_0Ztr1mE5rsatPmSoE,2833
  pygpt_net/controller/chat/common.py,sha256=uLRRT1ZNGLJiiyJ42lJ7rjDwhqPEfX1RD-EnoFhBmmU,13875
@@ -122,13 +122,13 @@ pygpt_net/core/assistants/__init__.py,sha256=nnKKqcP5Xtx9daGUHAX8FT2JwG3r5syc6mS
  pygpt_net/core/assistants/files.py,sha256=cg32PsmdM3rB6eNkKo8CP9aSKNqoml1YKniGl9rXSlM,9796
  pygpt_net/core/assistants/store.py,sha256=vzMN2dOKIcH7NCGA6UzZkXGal1jAioOzNTpAHzObgOo,7990
  pygpt_net/core/attachments/__init__.py,sha256=lKH31SMkJdyQlqdYW07VhDnggpV6jssc6Lk4SIUlzEc,12223
- pygpt_net/core/attachments/context.py,sha256=96fZ3hohTo3VrWCMO27w4rLCi0OGQtNJixfitWKTNto,19836
+ pygpt_net/core/attachments/context.py,sha256=Su52wla5GZPBzM3e5_uhly38pKqSvr7sv1o5e9tRs_c,23280
  pygpt_net/core/attachments/worker.py,sha256=_aUCyi5-Mbz0IGfgY6QKBZ6MFz8aKRDfKasbBVXg7kU,1341
  pygpt_net/core/audio/__init__.py,sha256=gysJ0SEFFwhq3Jz268JxJTdpZ7-8qqgID60BWQjtIig,2708
  pygpt_net/core/audio/context.py,sha256=GtUK2DIUBSJwtUVPx3Vv9oZCx_wHsilHYYdvUfHP4G4,1044
  pygpt_net/core/bridge/__init__.py,sha256=4XRmueSSshAqstjgDp3jFTSCwSTaWlzaU4d-gBK46x8,9292
  pygpt_net/core/bridge/context.py,sha256=u_1_sSB4YO6OP1zR3c15OT_7kVVlxtuAO1OgCn7BFY4,4301
- pygpt_net/core/bridge/worker.py,sha256=YPcBXH_gdl9QkAaCPzSabplO45B-EqaWF2Mha2YT6eY,5291
+ pygpt_net/core/bridge/worker.py,sha256=knquYoNZSZY7RK-qmbNfj3LJDns3uDnkZhGus23qsJg,5819
  pygpt_net/core/calendar/__init__.py,sha256=7mCGxUcGUbTiyXF6yqZsLONfhRAzQEYgQqtiGPhQ6qU,6634
  pygpt_net/core/camera.py,sha256=WAo1YAmdjRuAtpVx73695aEHBmT2C16-T348MVOY7Rg,4034
  pygpt_net/core/chain/__init__.py,sha256=8hMi_gWmB-I2k1AaADEtTfOAzeVxmBD8YhUJFIpjON4,3439
@@ -165,7 +165,7 @@ pygpt_net/core/events/__init__.py,sha256=C6n8MRL_GXcHlr3txzyb2DtRMGLQbD_LEZygFVy
  pygpt_net/core/events/app.py,sha256=BKRhScIN-rnfP4U-VzZNPwIYPziW7iyPk6h7ou4ol-k,1841
  pygpt_net/core/events/base.py,sha256=oige2BIjdyQo2UeCSyY-T3mu_8VNoOmtRsqlia5BzEM,1764
  pygpt_net/core/events/control.py,sha256=TADFX1IJbQeC572_uWlOCw5VXMkJFqoPwE-9hptkh6k,2858
- pygpt_net/core/events/event.py,sha256=ZdZTzFybSST2z4aCjHCOBeaVVUo9DnV5q1FI1teqvuU,3512
+ pygpt_net/core/events/event.py,sha256=avGLJ3bkGyQWvQrcRyCc_iRa-e10CkJH5-SJSQ7LqKo,3552
  pygpt_net/core/events/kernel.py,sha256=1s8gRvGT3GNyXkBQ6clxuWUrPNmIWY2gRCfbcOMbpqY,1870
  pygpt_net/core/events/render.py,sha256=xtNTyud6ywtpUGdIrbaJeHhbqZuDjd4It3j0KlPmHks,2118
  pygpt_net/core/experts/__init__.py,sha256=ub_Z-5xEvW9o-ufnURtYemo60NTvHpjd56X5H9Ca0RY,17482
@@ -177,7 +177,7 @@ pygpt_net/core/filesystem/types.py,sha256=gnV1CMRE0SwQDapNYEoayMs8NyUync121RCdnB
  pygpt_net/core/filesystem/url.py,sha256=9IHrt_lgULsz56Xxy6me4v2lun26q5D4yoPESLgOO_A,3006
  pygpt_net/core/history.py,sha256=LnSyB3nuXZxXeaiNdjg4Q4yWhJM5hA2QN5dy_AY9Pxo,3092
  pygpt_net/core/idx/__init__.py,sha256=7LlN_CmhOJVLuZFp-XihNtoQJXFxlA2ygYAcaIAfU8c,17238
- pygpt_net/core/idx/chat.py,sha256=lGwP-Ok7GHRkCk0Ys_oLt1ssMkaLZKmn7zWVeYC0BLU,21047
+ pygpt_net/core/idx/chat.py,sha256=KPCYIB1CcvxUItNSNBBhsBqRETs-he_7kN_np8izX5M,20761
  pygpt_net/core/idx/context.py,sha256=oC9Do2_YdOZ2yJSh8C1_FbxUdkB6bT20JyhjShDJUkk,3049
  pygpt_net/core/idx/indexing.py,sha256=korUe-G_cV9Vf2PrMh7LJfHEntHCmcUnw9GQ5TjBhMw,41011
  pygpt_net/core/idx/llm.py,sha256=gys1i0qRpdbsUfgkTq4bHw0w89Yz2-s5CQueO5vDUCo,5027
@@ -242,9 +242,9 @@ pygpt_net/css_rc.py,sha256=i13kX7irhbYCWZ5yJbcMmnkFp_UfS4PYnvRFSPF7XXo,11349
  pygpt_net/data/audio/click_off.mp3,sha256=aNiRDP1pt-Jy7ija4YKCNFBwvGWbzU460F4pZWZDS90,65201
  pygpt_net/data/audio/click_on.mp3,sha256=qfdsSnthAEHVXzeyN4LlC0OvXuyW8p7stb7VXtlvZ1k,65201
  pygpt_net/data/audio/ok.mp3,sha256=LTiV32pEBkpUGBkKkcOdOFB7Eyt_QoP2Nv6c5AaXftk,32256
- pygpt_net/data/config/config.json,sha256=d2IPMfZiPkY-w85RnK6HtMmYLVH3YhFnok_fRMqe4sY,19224
- pygpt_net/data/config/models.json,sha256=ii9y2Gwwe6Vo4OBPkO9yHIQXxnWQKUMOkPk_Hk8cpow,48872
- pygpt_net/data/config/modes.json,sha256=mLcLNI0DmIiBQdVVLBiqvl-vNcv1_gjiJKb98ipCOIU,1923
+ pygpt_net/data/config/config.json,sha256=Lf5n4SHVnsK-24YzkXXIfg4GGQIVuSco3hAGZv10SBc,19307
+ pygpt_net/data/config/models.json,sha256=NStKUNxPZuf9cwTTX7k5J5yIpqSjgLe8QbuEIOE7aMU,48872
+ pygpt_net/data/config/modes.json,sha256=KkkftMky4siMRSJ13T8RC0zpbBcI8I6ZOZyAOlASouI,1923
  pygpt_net/data/config/presets/agent_openai.json,sha256=vMTR-soRBiEZrpJJHuFLWyx8a3Ez_BqtqjyXgxCAM_Q,733
  pygpt_net/data/config/presets/agent_openai_assistant.json,sha256=awJw9lNTGpKML6SJUShVn7lv8AXh0oic7wBeyoN7AYs,798
  pygpt_net/data/config/presets/agent_planner.json,sha256=a6Rv58Bnm2STNWB0Rw_dGhnsz6Lb3J8_GwsUVZaTIXc,742
@@ -264,7 +264,7 @@ pygpt_net/data/config/presets/current.vision.json,sha256=x1ll5B3ROSKYQA6l27PRGXU
  pygpt_net/data/config/presets/dalle_white_cat.json,sha256=esqUb43cqY8dAo7B5u99tRC0MBV5lmlrVLnJhTSkL8w,552
  pygpt_net/data/config/presets/joke_agent.json,sha256=R6n9P7KRb0s-vZWZE7kHdlOfXAx1yYrPmUw8uLyw8OE,474
  pygpt_net/data/config/presets/joke_expert.json,sha256=aFBFCY97Uba71rRq0MSeakXaOj8yuaUqekQ842YHv64,683
- pygpt_net/data/config/settings.json,sha256=AhmZtT6c4WiiW54yklGWuuwv6CtntUfiJX4sz1ATdlo,44282
+ pygpt_net/data/config/settings.json,sha256=n6Dd155V09jP7ITs6Vh1l5r946uZ6eOfDx39wFh8Doo,45065
  pygpt_net/data/config/settings_section.json,sha256=8YmYWVdTWC_HorqlRa-lZO55B0lhq0v9qZfBHt77pqk,870
  pygpt_net/data/css/fix_windows.css,sha256=Mks14Vg25ncbMqZJfAMStrhvZmgHF6kU75ohTWRZeI8,664
  pygpt_net/data/css/markdown.css,sha256=yaoJPogZZ_ghbqP8vTXTycwVyD61Ik5_033NpzuUzC0,1122
@@ -1473,14 +1473,14 @@ pygpt_net/data/js/katex/fonts/KaTeX_Typewriter-Regular.woff,sha256=4U_tArGrp86fW
  pygpt_net/data/js/katex/fonts/KaTeX_Typewriter-Regular.woff2,sha256=cdUX1ngneHz6vfGGkUzDNY7aU543kxlB8rL9SiH2jAs,13568
  pygpt_net/data/js/katex/katex.min.css,sha256=lVaKnUaQNG4pI71WHffQZVALLQF4LMZEk4nOia8U9ow,23532
  pygpt_net/data/js/katex/katex.min.js,sha256=KLASOtKS2x8pUxWVzCDmlWJ4jhuLb0vtrgakbD6gDDo,276757
- pygpt_net/data/locale/locale.de.ini,sha256=O_fwosMbXd35zP7OPwBP0AyvWdEeFLrpJhEKzmmiUPs,61366
- pygpt_net/data/locale/locale.en.ini,sha256=SGoN-jtxPkvIohQtypkwooH31ShH4fwPnC0CHUQLQiM,73649
- pygpt_net/data/locale/locale.es.ini,sha256=FY1KwWP5XGjZQjB0_Mz_39xntd3ET0rMD9T_vXl2pKc,61493
- pygpt_net/data/locale/locale.fr.ini,sha256=d6mUYDiiKfso6n4ZOTivIQzQyzab7_urrFJzGi9Ye2g,63477
- pygpt_net/data/locale/locale.it.ini,sha256=hZf6bOdN3AfKnW0_ejXKB5spgLinvXwlzA_OzgFaL_4,60374
- pygpt_net/data/locale/locale.pl.ini,sha256=oBqFzsEQK3ZzAXTdpOk63xLvdVvohbQUEVKJ0k0g9mM,60453
- pygpt_net/data/locale/locale.uk.ini,sha256=pdp9exp4LbFUTgwV5Y8QNFzWfAMTvReh4u6mC3wjrFA,84330
- pygpt_net/data/locale/locale.zh.ini,sha256=DfRdmIxvRNT2mfn6qudkofa-TxByavJEfPUGsqYuckM,62409
+ pygpt_net/data/locale/locale.de.ini,sha256=eeWP_7Qvc_lS8X8Vdgh3H5ddu7BNoh8MyitTOIklw48,62056
+ pygpt_net/data/locale/locale.en.ini,sha256=W1jQHzCmnMZkrf-HuU2JPmcPELY-HfHGtPM9eC6Z4aY,74305
+ pygpt_net/data/locale/locale.es.ini,sha256=ot_DzGtTYzh1izQlib5dWWui5vZZOKtzIC_vfvmq40w,62222
+ pygpt_net/data/locale/locale.fr.ini,sha256=mQ4kXdmMp_QoJTuw7Uuyciqb2ZexU4p_RTi5YsD0HSI,64244
+ pygpt_net/data/locale/locale.it.ini,sha256=CpWRU4tTcukmd1y4Y0nNaHg3bj7DojsZKreloVcPEyA,61086
+ pygpt_net/data/locale/locale.pl.ini,sha256=Ya-xv62IuXbRG90W5MDbQCmd6qqFA9C2T_fORXtr16Y,61152
+ pygpt_net/data/locale/locale.uk.ini,sha256=0g6tqRpmGypCZp7yFocN0FKGocA83k8kfIoVt4vec3Y,85376
+ pygpt_net/data/locale/locale.zh.ini,sha256=d74BHnTdRYgzTi_iiLmQlxLKY1tugMx1MoDfHpBC0yI,63003
  pygpt_net/data/locale/plugin.agent.de.ini,sha256=BY28KpfFvgfVYJzcw2o5ScWnR4uuErIYGyc3NVHlmTw,1714
  pygpt_net/data/locale/plugin.agent.en.ini,sha256=88LkZUpilbV9l4QDbMyIdq_K9sbWt-CQPpavEttPjJU,1489
  pygpt_net/data/locale/plugin.agent.es.ini,sha256=bqaJQne8HPKFVtZ8Ukzo1TSqVW41yhYbGUqW3j2x1p8,1680
@@ -1742,7 +1742,7 @@ pygpt_net/plugin/experts/__init__.py,sha256=QUs2IwI6CB94oqkLwg1MLA9gbZRp8seS1lAQ
  pygpt_net/plugin/experts/config.py,sha256=4FFfDDYzstUeqsxEQegGp5XHun2gNQC3k-OyY5PYVyo,895
  pygpt_net/plugin/extra_prompt/__init__.py,sha256=20cuqF1HTxhVPPMFZIeC8TuTr5VpGcG-WqUcfBEM4ZM,2053
  pygpt_net/plugin/extra_prompt/config.py,sha256=a-dol6rXXCuSyMhfq0VTs5cc2rU9bmoqQ0AafQh-ZFw,1471
- pygpt_net/plugin/idx_llama_index/__init__.py,sha256=5LPUyBpRSSKv2b4vziCg5G4tZhXcq-TC9oS0TK4oexA,10677
+ pygpt_net/plugin/idx_llama_index/__init__.py,sha256=B0u6ZGDljJrQnFETz1nLLYT_RBMmIv45ciaQyQXI7fM,10675
  pygpt_net/plugin/idx_llama_index/config.py,sha256=4S08w3hj_B3AXHl4qefOn20pHw4Z-pTSBhRMDhp0wzc,5604
  pygpt_net/plugin/idx_llama_index/worker.py,sha256=gyGioF7q-YarBKNgeWMqQSKpEC3N4rs9HOdQK09KK1k,3017
  pygpt_net/plugin/openai_dalle/__init__.py,sha256=fR_aqeOGbrB1w4JNO-JiG2lKGvLu8m2SP-4sdYGjQEU,5158
@@ -1750,7 +1750,7 @@ pygpt_net/plugin/openai_dalle/config.py,sha256=yBsd_EvPlWJY0HhK2cgz0InsaYf4IK6ee
  pygpt_net/plugin/openai_vision/__init__.py,sha256=Hi_n9iMFq2hS4YaVmkYL32r1KiAgib7DYLxMCYn7Ne8,10151
  pygpt_net/plugin/openai_vision/config.py,sha256=8yP4znFYGA_14pexNT3T6mddNsf-wsRqXc5QT52N4Uc,4627
  pygpt_net/plugin/openai_vision/worker.py,sha256=uz1JGUxy0VfdziiCnq5fjQZ7TrdjUmtyU-1X7RrvNyY,5108
- pygpt_net/plugin/real_time/__init__.py,sha256=2OsgLo7l32Ut5uA-xhWCrhZpgWMnsZtf8oPgnjHFnZ8,5591
+ pygpt_net/plugin/real_time/__init__.py,sha256=8iu7i5pC_SXRUyyhwFWbeOk4ehtEZG7V_fShne9bnjM,5593
  pygpt_net/plugin/real_time/config.py,sha256=v_wKjuXRMkcWOTb-ZL28w3LwDUo7rcOM55Pizwjn3-0,2127
  pygpt_net/plugin/voice_control/__init__.py,sha256=gdQ1P3ud9lwcFcxyX23_ODSwFFGeO3g1dkxPqhG0Hug,4837
  pygpt_net/plugin/voice_control/config.py,sha256=iSjLR8bsYwMJJOEz3AShIQXqlCPNWNJ8-7nnRfrAOuE,1314
@@ -1802,7 +1802,7 @@ pygpt_net/provider/core/calendar/db_sqlite/storage.py,sha256=aedWMmJr-uw7waUGzoj
  pygpt_net/provider/core/config/__init__.py,sha256=jQQgG9u_ZLsZWXustoc1uvC-abUvj4RBKPAM30-f2Kc,488
  pygpt_net/provider/core/config/base.py,sha256=nLMsTIe4sSaq-NGl7YLruBAR-DjRvq0aTR3OOvA9mIc,1235
  pygpt_net/provider/core/config/json_file.py,sha256=a4bt7RofzFrq01ppcajWzJEg4SaracFbLyDVl5EI0b8,5017
- pygpt_net/provider/core/config/patch.py,sha256=e9PdhsChEY4zCGNQRSasZPTVu1xSeH0X3IlaJHA6UoI,90902
+ pygpt_net/provider/core/config/patch.py,sha256=YPwhhIwuC1uCCsqJj7ASjL21gMmKkJoS9jykCkBgti4,91504
  pygpt_net/provider/core/ctx/__init__.py,sha256=jQQgG9u_ZLsZWXustoc1uvC-abUvj4RBKPAM30-f2Kc,488
  pygpt_net/provider/core/ctx/base.py,sha256=uIOqarMQUpvkPDrwGdnpD5jJQospePhEwZaS_l0BJbo,2631
  pygpt_net/provider/core/ctx/db_sqlite/__init__.py,sha256=klO4ocC6sGRYLBtTklernzZTNBBJryo3c4b0SRmgAi4,10911
@@ -2155,8 +2155,8 @@ pygpt_net/ui/widget/textarea/web.py,sha256=KIW8MnwDWjEAMdiLA2v1yZiFbf-PT4KkF55uh
  pygpt_net/ui/widget/vision/__init__.py,sha256=8HT4tQFqQogEEpGYTv2RplKBthlsFKcl5egnv4lzzEw,488
  pygpt_net/ui/widget/vision/camera.py,sha256=T8b5cmK6uhf_WSSxzPt_Qod8JgMnst6q8sQqRvgQiSA,2584
  pygpt_net/utils.py,sha256=YhMvgy0wNt3roHIbbAnS-5SXOxOOIIvRRGd6FPTa6d0,6153
- pygpt_net-2.4.36.dist-info/LICENSE,sha256=GLKQTnJOPK4dDIWfkAIM4GwOxKJXi5zcMGt7FjLR1xk,1126
- pygpt_net-2.4.36.dist-info/METADATA,sha256=rRfnfCeXnDXMX7rcw7-2ej5de2YpomfJf0js0IUk9po,166967
- pygpt_net-2.4.36.dist-info/WHEEL,sha256=FMvqSimYX_P7y0a7UY-_Mc83r5zkBZsCYPm7Lr0Bsq4,88
- pygpt_net-2.4.36.dist-info/entry_points.txt,sha256=qvpII6UHIt8XfokmQWnCYQrTgty8FeJ9hJvOuUFCN-8,43
- pygpt_net-2.4.36.dist-info/RECORD,,
+ pygpt_net-2.4.37.dist-info/LICENSE,sha256=GLKQTnJOPK4dDIWfkAIM4GwOxKJXi5zcMGt7FjLR1xk,1126
+ pygpt_net-2.4.37.dist-info/METADATA,sha256=mX5X4-EExXL5ulkomx_uP6O855TlqCXzAJaNijH6L2A,169014
+ pygpt_net-2.4.37.dist-info/WHEEL,sha256=FMvqSimYX_P7y0a7UY-_Mc83r5zkBZsCYPm7Lr0Bsq4,88
+ pygpt_net-2.4.37.dist-info/entry_points.txt,sha256=qvpII6UHIt8XfokmQWnCYQrTgty8FeJ9hJvOuUFCN-8,43
+ pygpt_net-2.4.37.dist-info/RECORD,,