pygpt-net 2.4.36__py3-none-any.whl → 2.4.37__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
CHANGELOG.md CHANGED
@@ -1,5 +1,13 @@
1
1
  # CHANGELOG
2
2
 
3
+ ## 2.4.37 (2024-11-30)
4
+
5
+ - The `Query only` mode in the `Uploaded` tab has been renamed to `RAG`.
6
+ - New options have been added under `Settings -> Files and Attachments`:
7
+ - `Use history in RAG query`: When enabled, the content of the entire conversation will be used when preparing a query if the mode is set to RAG or Summary.
8
+ - `RAG limit`: This option is applicable only if 'Use history in RAG query' is enabled. It specifies the limit on how many recent entries in the conversation will be used when generating a query for RAG. A value of 0 indicates no limit.
9
+ - Cache: dynamic parts of the system prompt (from plugins) have been moved to the very end of the prompt stack to enable the use of prompt cache mechanisms in OpenAI.
10
+
3
11
  ## 2.4.36 (2024-11-28)
4
12
 
5
13
  - Added a new command-line argument: --workdir="/path/to/workdir" to explicitly set the current working directory.
@@ -33,7 +41,7 @@
33
41
 
34
42
  - Added an option checkbox `Auto-index on upload` in the `Attachments` tab:
35
43
 
36
- **Tip:** To use the `Query only` mode, the file must be indexed in the vector database. This occurs automatically at the time of upload if the `Auto-index on upload` option in the `Attachments` tab is enabled. When uploading large files, such indexing might take a while - therefore, if you are using the `Full context` option, which does not use the index, you can disable the `Auto-index` option to speed up the upload of the attachment. In this case, it will only be indexed when the `Query only` option is called for the first time, and until then, attachment will be available in the form of `Full context` and `Summary`.
44
+ **Tip:** To use the `RAG` mode, the file must be indexed in the vector database. This occurs automatically at the time of upload if the `Auto-index on upload` option in the `Attachments` tab is enabled. When uploading large files, such indexing might take a while - therefore, if you are using the `Full context` option, which does not use the index, you can disable the `Auto-index` option to speed up the upload of the attachment. In this case, it will only be indexed when the `RAG` option is called for the first time, and until then, attachment will be available in the form of `Full context` and `Summary`.
37
45
 
38
46
  - Added context menu options in `Uploaded attachments` tab: `Open`, `Open Source directory` and `Open Storage directory`.
39
47
 
README.md CHANGED
@@ -2,7 +2,7 @@
2
2
 
3
3
  [![pygpt](https://snapcraft.io/pygpt/badge.svg)](https://snapcraft.io/pygpt)
4
4
 
5
- Release: **2.4.36** | build: **2024.11.28** | Python: **>=3.10, <3.12**
5
+ Release: **2.4.37** | build: **2024.11.30** | Python: **>=3.10, <3.12**
6
6
 
7
7
  > Official website: https://pygpt.net | Documentation: https://pygpt.readthedocs.io
8
8
  >
@@ -97,13 +97,13 @@ Linux version requires `GLIBC` >= `2.35`.
97
97
  You can install **PyGPT** directly from Snap Store:
98
98
 
99
99
  ```commandline
100
- $ sudo snap install pygpt
100
+ sudo snap install pygpt
101
101
  ```
102
102
 
103
103
  To manage future updates just use:
104
104
 
105
105
  ```commandline
106
- $ sudo snap refresh pygpt
106
+ sudo snap refresh pygpt
107
107
  ```
108
108
 
109
109
  [![Get it from the Snap Store](https://snapcraft.io/static/images/badges/en/snap-store-black.svg)](https://snapcraft.io/pygpt)
@@ -111,13 +111,13 @@ $ sudo snap refresh pygpt
111
111
  **Using camera:** to use camera in Snap version you must connect the camera with:
112
112
 
113
113
  ```commandline
114
- $ sudo snap connect pygpt:camera
114
+ sudo snap connect pygpt:camera
115
115
  ```
116
116
 
117
117
  **Using microphone:** to use microphone in Snap version you must connect the microphone with:
118
118
 
119
119
  ```commandline
120
- $ sudo snap connect pygpt:audio-record :audio-record
120
+ sudo snap connect pygpt:audio-record :audio-record
121
121
  ```
122
122
 
123
123
  **Connecting IPython in Docker in Snap version**:
@@ -125,11 +125,11 @@ $ sudo snap connect pygpt:audio-record :audio-record
125
125
  To use IPython in the Snap version, you must connect PyGPT to the Docker daemon:
126
126
 
127
127
  ```commandline
128
- $ sudo snap connect pygpt:docker-executables docker:docker-executables
128
+ sudo snap connect pygpt:docker-executables docker:docker-executables
129
129
  ```
130
130
 
131
131
  ````commandline
132
- $ sudo snap connect pygpt:docker docker:docker-daemon
132
+ sudo snap connect pygpt:docker docker:docker-daemon
133
133
  ````
134
134
 
135
135
  ## PyPi (pip)
@@ -139,20 +139,20 @@ The application can also be installed from `PyPi` using `pip install`:
139
139
  1. Create virtual environment:
140
140
 
141
141
  ```commandline
142
- $ python3 -m venv venv
143
- $ source venv/bin/activate
142
+ python3 -m venv venv
143
+ source venv/bin/activate
144
144
  ```
145
145
 
146
146
  2. Install from PyPi:
147
147
 
148
148
  ``` commandline
149
- $ pip install pygpt-net
149
+ pip install pygpt-net
150
150
  ```
151
151
 
152
152
  3. Once installed run the command to start the application:
153
153
 
154
154
  ``` commandline
155
- $ pygpt
155
+ pygpt
156
156
  ```
157
157
 
158
158
  ## Running from GitHub source code
@@ -164,27 +164,27 @@ An alternative method is to download the source code from `GitHub` and execute t
164
164
  1. Clone git repository or download .zip file:
165
165
 
166
166
  ```commandline
167
- $ git clone https://github.com/szczyglis-dev/py-gpt.git
168
- $ cd py-gpt
167
+ git clone https://github.com/szczyglis-dev/py-gpt.git
168
+ cd py-gpt
169
169
  ```
170
170
 
171
171
  2. Create a new virtual environment:
172
172
 
173
173
  ```commandline
174
- $ python3 -m venv venv
175
- $ source venv/bin/activate
174
+ python3 -m venv venv
175
+ source venv/bin/activate
176
176
  ```
177
177
 
178
178
  3. Install requirements:
179
179
 
180
180
  ```commandline
181
- $ pip install -r requirements.txt
181
+ pip install -r requirements.txt
182
182
  ```
183
183
 
184
184
  4. Run the application:
185
185
 
186
186
  ```commandline
187
- $ python3 run.py
187
+ python3 run.py
188
188
  ```
189
189
 
190
190
  ### Install with Poetry
@@ -192,33 +192,33 @@ $ python3 run.py
192
192
  1. Clone git repository or download .zip file:
193
193
 
194
194
  ```commandline
195
- $ git clone https://github.com/szczyglis-dev/py-gpt.git
196
- $ cd py-gpt
195
+ git clone https://github.com/szczyglis-dev/py-gpt.git
196
+ cd py-gpt
197
197
  ```
198
198
 
199
199
  2. Install Poetry (if not installed):
200
200
 
201
201
  ```commandline
202
- $ pip install poetry
202
+ pip install poetry
203
203
  ```
204
204
 
205
205
  3. Create a new virtual environment that uses Python 3.10:
206
206
 
207
207
  ```commandline
208
- $ poetry env use python3.10
209
- $ poetry shell
208
+ poetry env use python3.10
209
+ poetry shell
210
210
  ```
211
211
 
212
212
  4. Install requirements:
213
213
 
214
214
  ```commandline
215
- $ poetry install
215
+ poetry install
216
216
  ```
217
217
 
218
218
  5. Run the application:
219
219
 
220
220
  ```commandline
221
- $ poetry run python3 run.py
221
+ poetry run python3 run.py
222
222
  ```
223
223
 
224
224
  **Tip**: you can use `PyInstaller` to create a compiled version of
@@ -237,19 +237,19 @@ Reinstalling the application may fix this problem.
237
237
  ...then install `libxcb`:
238
238
 
239
239
  ```commandline
240
- $ sudo apt install libxcb-cursor0
240
+ sudo apt install libxcb-cursor0
241
241
  ```
242
242
 
243
243
 If you have problems with audio on Linux, try installing `portaudio19-dev` and/or `libasound2`:
244
244
 
245
245
  ```commandline
246
- $ sudo apt install portaudio19-dev
246
+ sudo apt install portaudio19-dev
247
247
  ```
248
248
 
249
249
  ```commandline
250
- $ sudo apt install libasound2
251
- $ sudo apt install libasound2-data
252
- $ sudo apt install libasound2-plugins
250
+ sudo apt install libasound2
251
+ sudo apt install libasound2-data
252
+ sudo apt install libasound2-plugins
253
253
  ```
254
254
 
255
255
  **Problems with GLIBC on Linux**
@@ -265,7 +265,7 @@ when trying to run the compiled version for Linux, try updating GLIBC to version
265
265
 
266
266
 
267
267
  ```commandline
268
- $ sudo snap connect pygpt:camera
268
+ sudo snap connect pygpt:camera
269
269
  ```
270
270
 
271
271
  **Access to microphone in Snap version:**
@@ -273,7 +273,7 @@ $ sudo snap connect pygpt:camera
273
273
  To use microphone in Snap version you must connect the microphone with:
274
274
 
275
275
  ```commandline
276
- $ sudo snap connect pygpt:audio-record :audio-record
276
+ sudo snap connect pygpt:audio-record :audio-record
277
277
  ```
278
278
 
279
279
  **Windows and VC++ Redistributable**
@@ -292,13 +292,13 @@ It may also be necessary to add the path `C:\path\to\venv\Lib\python3.x\site-pac
292
292
 If you have problems with the `WebEngine / Chromium` renderer, you can force the legacy mode by launching the app with command-line arguments:
293
293
 
294
294
 ``` commandline
295
- $ python3 run.py --legacy=1
295
+ python3 run.py --legacy=1
296
296
  ```
297
297
 
298
298
  and to force disable OpenGL hardware acceleration:
299
299
 
300
300
 ``` commandline
301
- $ python3 run.py --disable-gpu=1
301
+ python3 run.py --disable-gpu=1
302
302
  ```
303
303
 
304
304
 You can also manually enable legacy mode by editing the config file - open the `%WORKDIR%/config.json` config file in an editor and set the following options:
@@ -603,11 +603,11 @@ Built-in file loaders:
603
603
  - Webpages (crawling any webpage content)
604
604
  - YouTube (transcriptions)
605
605
 
606
- You can configure data loaders in `Settings / LlamaIndex / Data Loaders` by providing list of keyword arguments for specified loaders.
606
+ You can configure data loaders in `Settings / Indexes (LlamaIndex) / Data Loaders` by providing a list of keyword arguments for the specified loaders.
607
607
  You can also develop and provide your own custom loader and register it within the application.
608
608
 
609
609
  LlamaIndex is also integrated with context database - you can use data from database (your context history) as additional context in discussion.
610
- Options for indexing existing context history or enabling real-time indexing new ones (from database) are available in `Settings / LlamaIndex` section.
610
+ Options for indexing existing context history or enabling real-time indexing of new entries (from the database) are available in the `Settings / Indexes (LlamaIndex)` section.
611
611
 
612
612
  **WARNING:** remember that when indexing content, API calls to the embedding model are used. Each indexing consumes additional tokens. Always control the number of tokens used on the OpenAI page.
613
613
 
@@ -669,7 +669,7 @@ You can set the limit of steps in such a loop by going to `Settings -> Agents an
669
669
 
670
670
  You can change the prompt used for evaluating the response in `Settings -> Prompts -> Agent: evaluation prompt in loop`. Here, you can adjust it to suit your needs, for example, by defining more or less critical feedback for the responses received.
671
671
 
672
- ## Agent (Legacy, Autonomous)
672
+ ## Agent (Autonomous)
673
673
 
674
674
  This is an older version of the Agent mode, still available as legacy. However, it is recommended to use the newer mode: `Agent (LlamaIndex)`.
675
675
 
@@ -817,11 +817,13 @@ The content from the uploaded attachments will be used in the current conversati
817
817
 
818
818
  - `Full context`: Provides best results. This mode attaches the entire content of the read file to the user's prompt. This process happens in the background and may require a large number of tokens if you uploaded extensive content.
819
819
 
820
- - `Query only`: The indexed attachment will only be queried in real-time using LlamaIndex. This operation does not require any additional tokens, but it may not provide access to the full content of the file 1:1.
820
+ - `RAG`: The indexed attachment will only be queried in real-time using LlamaIndex. This operation does not require any additional tokens, but it may not provide access to the full content of the file 1:1.
821
821
 
822
822
  - `Summary`: When queried, an additional query will be generated in the background and executed by a separate model to summarize the content of the attachment and return the required information to the main model. You can change the model used for summarization in the settings under the `Files and attachments` section.
823
823
 
824
- **Important**: When using `Full context` mode, the entire content of the file is included in the prompt, which can result in high token usage each time. If you want to reduce the number of tokens used, instead use the `Query only` option, which will only query the indexed attachment in the vector database to provide additional context.
824
+ In the `RAG` and `Summary` modes, you can enable an additional setting under `Settings -> Files and attachments -> Use history in RAG query`. When this option is turned on, the entire conversation context is considered when preparing the RAG query, rather than just the user's last message, which makes searching the index for additional context more accurate. With the `RAG limit` option, you can set how many recent entries in the conversation are considered (`0 = no limit, default: 3`).
825
+
826
+ **Important**: When using `Full context` mode, the entire content of the file is included in the prompt, which can result in high token usage each time. If you want to reduce the number of tokens used, instead use the `RAG` option, which will only query the indexed attachment in the vector database to provide additional context.
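The query-building code itself is internal to PyGPT and not shown in this diff; the sketch below only illustrates the documented behavior of `Use history in RAG query` and `RAG limit`, and every name in it is hypothetical:

```python
# Illustrative sketch only - not the actual PyGPT implementation.
# Mirrors the documented semantics: when history is enabled, the last
# `rag_limit` conversation entries are added to the RAG query (0 = no limit).

def build_rag_query(history: list, user_input: str,
                    use_history: bool = True, rag_limit: int = 3) -> str:
    # history: previous conversation entries as plain strings (hypothetical shape)
    if not use_history:
        return user_input                 # only the last user query is used
    if rag_limit > 0:
        history = history[-rag_limit:]    # keep only the most recent entries
    return "\n".join(list(history) + [user_input])
```

Indexing and querying still go through LlamaIndex as described above; enabling history only changes the text of the query sent to the index.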
825
827
 
826
828
  **Images as Additional Context**
827
829
 
@@ -829,7 +831,7 @@ Files such as jpg, png, and similar images are a special case. By default, image
829
831
 
830
832
  **Uploading larger files and auto-index**
831
833
 
832
- To use the `Query only` mode, the file must be indexed in the vector database. This occurs automatically at the time of upload if the `Auto-index on upload` option in the `Attachments` tab is enabled. When uploading large files, such indexing might take a while - therefore, if you are using the `Full context` option, which does not use the index, you can disable the `Auto-index` option to speed up the upload of the attachment. In this case, it will only be indexed when the `Query only` option is called for the first time, and until then, attachment will be available in the form of `Full context` and `Summary`.
834
+ To use the `RAG` mode, the file must be indexed in the vector database. This occurs automatically at the time of upload if the `Auto-index on upload` option in the `Attachments` tab is enabled. When uploading large files, such indexing might take a while - therefore, if you are using the `Full context` option, which does not use the index, you can disable the `Auto-index` option to speed up the upload of the attachment. In this case, it will only be indexed when the `RAG` option is called for the first time, and until then, the attachment will be available in the form of `Full context` and `Summary`.
833
835
 
834
836
  ## Downloading files
835
837
 
@@ -946,11 +948,11 @@ How to use locally installed Llama 3 or Mistral models:
946
948
 
947
949
  For example, on Linux:
948
950
 
949
- ```$ curl -fsSL https://ollama.com/install.sh | sh```
951
+ ```curl -fsSL https://ollama.com/install.sh | sh```
950
952
 
951
953
  4) Run the model (e.g. Llama 3) locally on your machine. For example, on Linux:
952
954
 
953
- ```$ ollama run llama3.1```
955
+ ```ollama run llama3.1```
954
956
 
955
957
 5) Return to PyGPT and select the model from the models list to chat with it using Ollama running locally.
956
958
 
@@ -1467,11 +1469,11 @@ You can find the installation instructions here: https://docs.docker.com/engine/
1467
1469
  To use IPython in the Snap version, you must connect PyGPT to the Docker daemon:
1468
1470
 
1469
1471
  ```commandline
1470
- $ sudo snap connect pygpt:docker-executables docker:docker-executables
1472
+ sudo snap connect pygpt:docker-executables docker:docker-executables
1471
1473
  ```
1472
1474
 
1473
1475
  ````commandline
1474
- $ sudo snap connect pygpt:docker docker:docker-daemon
1476
+ sudo snap connect pygpt:docker docker:docker-daemon
1475
1477
  ````
1476
1478
 
1477
1479
 
@@ -2710,6 +2712,16 @@ Config -> Settings...
2710
2712
 
2711
2713
  - `Directory for file downloads`: Subdirectory for downloaded files, e.g. in Assistants mode, inside "data". Default: "download"
2712
2714
 
2715
+ - `Verbose mode`: Enables verbose mode when using attachments as additional context.
2716
+
2717
+ - `Model for querying index`: Model to use for preparing the query and querying the index when the RAG option is selected.
2718
+
2719
+ - `Model for attachment content summary`: Model to use when generating a summary for the content of a file when the Summary option is selected.
2720
+
2721
+ - `Use history in RAG query`: When enabled, the content of the entire conversation will be used when preparing a query if the mode is set to RAG or Summary.
2722
+
2723
+ - `RAG limit`: Applies only if `Use history in RAG query` is enabled. Specifies how many recent entries in the conversation will be used when generating a query for RAG. 0 = no limit.
2724
+
2713
2725
  **Context**
2714
2726
 
2715
2727
  - `Context Threshold`: Sets the number of tokens reserved for the model to respond to the next prompt.
@@ -2961,22 +2973,6 @@ You can manually edit the configuration files in this directory (this is your wo
2961
2973
 
2962
2974
  ## Setting the Working Directory Using Command Line Arguments
2963
2975
 
2964
- If you want to force set current workdir using command-line argument, use:
2965
-
2966
- ```
2967
- python3 ./run.py --workdir="/path/to/workdir"
2968
- ```
2969
- or:
2970
-
2971
- ```
2972
- pygpt.exe --workdir="/path/to/workdir"
2973
- ```
2974
- in binary version.
2975
-
2976
- Certainly! Here's the improved version:
2977
-
2978
- ## Setting the Working Directory Using Command Line Arguments
2979
-
2980
2976
  To set the current working directory using a command-line argument, use:
2981
2977
 
2982
2978
  ```
@@ -3263,7 +3259,7 @@ If you want to only query index (without chat) you can enable `Query index only
3263
3259
 
3264
3260
  You can create a custom vector store provider or data loader for your data and develop a custom launcher for the application.
3265
3261
 
3266
- See the section `Extending PyGPT / Adding custom Vector Store provider` for more details.
3262
+ See the section `Extending PyGPT / Adding a custom Vector Store provider` for more details.
3267
3263
 
3268
3264
  # Updates
3269
3265
 
@@ -3561,6 +3557,8 @@ Syntax: `event name` - triggered on, `event data` *(data type)*:
3561
3557
 
3562
3558
  - `AI_NAME` - when preparing an AI name, `data['value']` *(string, name of the AI assistant)*
3563
3559
 
3560
+ - `AGENT_PROMPT` - when preparing the agent prompt in evaluation mode, `data['value']` *(string, prompt)*
3561
+
3564
3562
  - `AUDIO_INPUT_RECORD_START` - start audio input recording
3565
3563
 
3566
3564
  - `AUDIO_INPUT_RECORD_STOP` - stop audio input recording
@@ -3619,10 +3617,16 @@ Syntax: `event name` - triggered on, `event data` *(data type)*:
3619
3617
 
3620
3618
  - `POST_PROMPT` - after preparing a system prompt, `data['value']` *(string, system prompt)*
3621
3619
 
3620
+ - `POST_PROMPT_ASYNC` - after preparing a system prompt, just before the request is sent, in an async thread, `data['value']` *(string, system prompt)*
3621
+
3622
+ - `POST_PROMPT_END` - after preparing a system prompt, just before the request is sent, in an async thread, at the very end, `data['value']` *(string, system prompt)*
3623
+
3622
3624
  - `PRE_PROMPT` - before preparing a system prompt, `data['value']` *(string, system prompt)*
3623
3625
 
3624
3626
  - `SYSTEM_PROMPT` - when preparing a system prompt, `data['value']` *(string, system prompt)*
3625
3627
 
3628
+ - `TOOL_OUTPUT_RENDER` - when rendering extra content from plugin tools, `data['content']` *(string, content)*
3629
+
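The events added in this release (`AGENT_PROMPT`, `POST_PROMPT_ASYNC`, `POST_PROMPT_END`, `TOOL_OUTPUT_RENDER`) can be observed from a custom plugin. The sketch below assumes the plugin API described in the `Extending PyGPT` section (a `handle()` method receiving an `Event` with `name` and `data`); the import paths are assumptions and differ between PyGPT releases, so adjust them to your installed version:

```python
# Minimal plugin sketch for the new events - import paths are assumptions, adjust as needed.
from pygpt_net.core.events import Event        # older releases: pygpt_net.core.dispatcher
from pygpt_net.plugin.base import BasePlugin    # base class per the plugin docs


class ExamplePlugin(BasePlugin):
    def handle(self, event: Event, *args, **kwargs):
        name = event.name
        data = event.data

        if name == Event.POST_PROMPT_END:
            # append dynamic instructions at the very end of prompt preparation,
            # after the static system prompt (keeps OpenAI prompt caching effective)
            data['value'] += "\nAlways answer in English."

        elif name == Event.TOOL_OUTPUT_RENDER:
            # decorate extra tool output from plugins before it is rendered
            data['content'] = "[tool] " + data['content']

        elif name == Event.AGENT_PROMPT:
            # observe (or adjust) the agent prompt used in evaluation mode
            print("Agent prompt:", data['value'])
```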
3626
3630
  - `UI_ATTACHMENTS` - when the attachment upload elements are rendered, `data['value']` *(bool, show True/False)*
3627
3631
 
3628
3632
  - `UI_VISION` - when the vision elements are rendered, `data['value']` *(bool, show True/False)*
@@ -3861,6 +3865,14 @@ may consume additional tokens that are not displayed in the main window.
3861
3865
 
3862
3866
  ## Recent changes:
3863
3867
 
3868
+ **2.4.37 (2024-11-30)**
3869
+
3870
+ - The `Query only` mode in the `Uploaded` tab has been renamed to `RAG`.
3871
+ - New options have been added under `Settings -> Files and Attachments`:
3872
+ - `Use history in RAG query`: When enabled, the content of the entire conversation will be used when preparing a query if the mode is set to RAG or Summary.
3873
+ - `RAG limit`: This option is applicable only if 'Use history in RAG query' is enabled. It specifies the limit on how many recent entries in the conversation will be used when generating a query for RAG. A value of 0 indicates no limit.
3874
+ - Cache: dynamic parts of the system prompt (from plugins) have been moved to the very end of the prompt stack to enable the use of prompt cache mechanisms in OpenAI.
3875
+
3864
3876
  **2.4.36 (2024-11-28)**
3865
3877
 
3866
3878
  - Added a new command-line argument: --workdir="/path/to/workdir" to explicitly set the current working directory.
@@ -3894,7 +3906,7 @@ may consume additional tokens that are not displayed in the main window.
3894
3906
 
3895
3907
  - Added an option checkbox `Auto-index on upload` in the `Attachments` tab:
3896
3908
 
3897
- **Tip:** To use the `Query only` mode, the file must be indexed in the vector database. This occurs automatically at the time of upload if the `Auto-index on upload` option in the `Attachments` tab is enabled. When uploading large files, such indexing might take a while - therefore, if you are using the `Full context` option, which does not use the index, you can disable the `Auto-index` option to speed up the upload of the attachment. In this case, it will only be indexed when the `Query only` option is called for the first time, and until then, attachment will be available in the form of `Full context` and `Summary`.
3909
+ **Tip:** To use the `RAG` mode, the file must be indexed in the vector database. This occurs automatically at the time of upload if the `Auto-index on upload` option in the `Attachments` tab is enabled. When uploading large files, such indexing might take a while - therefore, if you are using the `Full context` option, which does not use the index, you can disable the `Auto-index` option to speed up the upload of the attachment. In this case, it will only be indexed when the `RAG` option is called for the first time, and until then, attachment will be available in the form of `Full context` and `Summary`.
3898
3910
 
3899
3911
  - Added context menu options in `Uploaded attachments` tab: `Open`, `Open Source directory` and `Open Storage directory`.
3900
3912
 
pygpt_net/CHANGELOG.txt CHANGED
@@ -1,3 +1,11 @@
1
+ 2.4.37 (2024-11-30)
2
+
3
+ - The `Query only` mode in the `Uploaded` tab has been renamed to `RAG`.
4
+ - New options have been added under `Settings -> Files and Attachments`:
5
+ - `Use history in RAG query`: When enabled, the content of the entire conversation will be used when preparing a query if the mode is set to RAG or Summary.
6
+ - `RAG limit`: This option is applicable only if 'Use history in RAG query' is enabled. It specifies the limit on how many recent entries in the conversation will be used when generating a query for RAG. A value of 0 indicates no limit.
7
+ - Cache: dynamic parts of the system prompt (from plugins) have been moved to the very end of the prompt stack to enable the use of prompt cache mechanisms in OpenAI.
8
+
1
9
  2.4.36 (2024-11-28)
2
10
 
3
11
  - Added a new command-line argument: --workdir="/path/to/workdir" to explicitly set the current working directory.
@@ -31,7 +39,7 @@
31
39
 
32
40
  - Added an option checkbox `Auto-index on upload` in the `Attachments` tab:
33
41
 
34
- Tip: To use the `Query only` mode, the file must be indexed in the vector database. This occurs automatically at the time of upload if the `Auto-index on upload` option in the `Attachments` tab is enabled. When uploading large files, such indexing might take a while - therefore, if you are using the `Full context` option, which does not use the index, you can disable the `Auto-index` option to speed up the upload of the attachment. In this case, it will only be indexed when the `Query only` option is called for the first time, and until then, attachment will be available in the form of `Full context` and `Summary`.
42
+ Tip: To use the `RAG` mode, the file must be indexed in the vector database. This occurs automatically at the time of upload if the `Auto-index on upload` option in the `Attachments` tab is enabled. When uploading large files, such indexing might take a while - therefore, if you are using the `Full context` option, which does not use the index, you can disable the `Auto-index` option to speed up the upload of the attachment. In this case, it will only be indexed when the `RAG` option is called for the first time, and until then, attachment will be available in the form of `Full context` and `Summary`.
35
43
 
36
44
  - Added context menu options in `Uploaded attachments` tab: `Open`, `Open Source directory` and `Open Storage directory`.
37
45
 
pygpt_net/__init__.py CHANGED
@@ -6,15 +6,15 @@
6
6
  # GitHub: https://github.com/szczyglis-dev/py-gpt #
7
7
  # MIT License #
8
8
  # Created By : Marcin Szczygliński #
9
- # Updated Date: 2024.11.28 01:00:00 #
9
+ # Updated Date: 2024.11.30 01:00:00 #
10
10
  # ================================================== #
11
11
 
12
12
  __author__ = "Marcin Szczygliński"
13
13
  __copyright__ = "Copyright 2024, Marcin Szczygliński"
14
14
  __credits__ = ["Marcin Szczygliński"]
15
15
  __license__ = "MIT"
16
- __version__ = "2.4.36"
17
- __build__ = "2024.11.28"
16
+ __version__ = "2.4.37"
17
+ __build__ = "2024.11.30"
18
18
  __maintainer__ = "Marcin Szczygliński"
19
19
  __github__ = "https://github.com/szczyglis-dev/py-gpt"
20
20
  __website__ = "https://pygpt.net"
@@ -6,7 +6,7 @@
6
6
  # GitHub: https://github.com/szczyglis-dev/py-gpt #
7
7
  # MIT License #
8
8
  # Created By : Marcin Szczygliński #
9
- # Updated Date: 2024.11.26 04:00:00 #
9
+ # Updated Date: 2024.11.29 23:00:00 #
10
10
  # ================================================== #
11
11
 
12
12
  import os
@@ -261,26 +261,22 @@ class Attachment(QObject):
261
261
  """
262
262
  return self.mode
263
263
 
264
- def get_context(self, ctx: CtxItem) -> str:
264
+ def get_context(self, ctx: CtxItem, history: list) -> str:
265
265
  """
266
266
  Get additional context for attachment
267
267
 
268
268
  :param ctx: CtxItem instance
269
+ :param history: Context items (history)
269
270
  :return: Additional context
270
271
  """
271
- content = ""
272
- meta = ctx.meta
273
272
  if self.mode != self.MODE_DISABLED:
274
273
  if self.is_verbose():
275
274
  print("\nPreparing additional context...\nContext Mode: {}".format(self.mode))
276
275
 
277
- self.window.core.attachments.context.reset()
278
- if self.mode == self.MODE_FULL_CONTEXT:
279
- content = self.get_full_context(ctx)
280
- elif self.mode == self.MODE_QUERY_CONTEXT:
281
- content = self.get_query_context(meta, str(ctx.input))
282
- elif self.mode == self.MODE_QUERY_CONTEXT_SUMMARY:
283
- content = self.get_context_summary(ctx)
276
+ self.window.core.attachments.context.reset() # reset used files and urls
277
+
278
+ # get additional context from attachments
279
+ content = self.window.core.attachments.context.get_context(self.mode, ctx, history)
284
280
 
285
281
  # append used files and urls to context
286
282
  files = self.window.core.attachments.context.get_used_files()
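For orientation, the per-mode branches removed from this controller now sit behind the single `window.core.attachments.context.get_context(mode, ctx, history)` call. A rough reconstruction of that dispatch is sketched below; it is based only on the helpers removed above, the real core method may differ, and threading `history` into the query and summary steps is an assumption tied to the new `Use history in RAG query` option:

```python
# Indicative sketch of the dispatch performed by the core attachments context.
# Mode values are placeholders; method names mirror the helpers removed above.
MODE_FULL_CONTEXT = "full"
MODE_QUERY_CONTEXT = "query"            # RAG
MODE_QUERY_CONTEXT_SUMMARY = "summary"

def get_context(core_ctx, mode, ctx, history):
    if mode == MODE_FULL_CONTEXT:
        return core_ctx.get_context_text(ctx, filename=True)               # full file content
    if mode == MODE_QUERY_CONTEXT:
        return core_ctx.query_context(ctx.meta, str(ctx.input), history)   # RAG query (history arg assumed)
    if mode == MODE_QUERY_CONTEXT_SUMMARY:
        return core_ctx.summary_context(ctx, ctx.input, history)           # summarized content (history arg assumed)
    return ""
```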
@@ -296,34 +292,6 @@ class Attachment(QObject):
296
292
  return "====================================\nADDITIONAL CONTEXT FROM ATTACHMENT(s): {}".format(content)
297
293
  return ""
298
294
 
299
- def get_full_context(self, ctx: CtxItem) -> str:
300
- """
301
- Get full context for attachment
302
-
303
- :param ctx: CtxItem instance
304
- :return: Full context
305
- """
306
- return self.window.core.attachments.context.get_context_text(ctx, filename=True)
307
-
308
- def get_query_context(self, meta: CtxMeta, query: str) -> str:
309
- """
310
- Get query context for attachment
311
-
312
- :param meta: CtxMeta instance
313
- :param query: Query string
314
- :return: Query context
315
- """
316
- return self.window.core.attachments.context.query_context(meta, query)
317
-
318
- def get_context_summary(self, ctx: CtxItem) -> str:
319
- """
320
- Get context summary
321
-
322
- :param ctx: CtxItem instance
323
- :return: Context summary
324
- """
325
- return self.window.core.attachments.context.summary_context(ctx, ctx.input)
326
-
327
295
  def get_uploaded_attachments(self, meta: CtxMeta) -> list:
328
296
  """
329
297
  Get uploaded attachments for meta