pygpt-net 2.4.35__py3-none-any.whl → 2.4.36__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
CHANGELOG.md CHANGED
@@ -1,5 +1,10 @@
 # CHANGELOG
 
+## 2.4.36 (2024-11-28)
+
+- Added a new command-line argument: --workdir="/path/to/workdir" to explicitly set the current working directory.
+- Fix: start image generation in Image mode.
+
 ## 2.4.35 (2024-11-28)
 
 - Docker removed from dependencies in Snap version #82
README.md CHANGED
@@ -2,7 +2,7 @@
 
 [![pygpt](https://snapcraft.io/pygpt/badge.svg)](https://snapcraft.io/pygpt)
 
-Release: **2.4.35** | build: **2024.11.28** | Python: **>=3.10, <3.12**
+Release: **2.4.36** | build: **2024.11.28** | Python: **>=3.10, <3.12**
 
 > Official website: https://pygpt.net | Documentation: https://pygpt.readthedocs.io
 >
@@ -24,13 +24,13 @@ For audio interactions, **PyGPT** includes speech synthesis using the `Microsoft
 
 Multiple operation modes are included, such as chat, text completion, assistant, vision, LangChain, Chat with Files (via `LlamaIndex`), commands execution, external API calls and image generation, making **PyGPT** a multi-tool for many AI-driven tasks.
 
-**Video** (mp4, version `2.2.35`, build `2024-11-28`):
+**Video** (mp4, version `2.4.35`, build `2024-11-28`):
 
 https://github.com/user-attachments/assets/5751a003-950f-40e7-a655-d098bbf27b0c
 
-**Screenshot** (version `2.2.35`, build `2024-11-28`):
+**Screenshot** (version `2.4.35`, build `2024-11-28`):
 
-![v2_main](https://github.com/user-attachments/assets/29f1a595-006c-4e2c-b665-f626368ca330)
+![v2_main](https://github.com/user-attachments/assets/5d1b0da4-f8b3-437f-af07-764798315253)
 
 You can download compiled 64-bit versions for Windows and Linux here: https://pygpt.net/#download
 
@@ -129,7 +129,7 @@ $ sudo snap connect pygpt:docker-executables docker:docker-executables
 ```
 
 ````commandline
-sudo snap connect pygpt:docker docker:docker-daemon
+$ sudo snap connect pygpt:docker docker:docker-daemon
 ````
 
 ## PyPi (pip)
@@ -358,11 +358,11 @@ The main part of the interface is a chat window where you see your conversations
 
 Above where you type your messages, the interface shows you the number of tokens your message will use up as you type it – this helps to keep track of usage. There is also a feature to attach and upload files in this area. Go to the `Files and Attachments` section for more information on how to use attachments.
 
-![v2_mode_chat](https://github.com/user-attachments/assets/719854af-c28c-4329-a4c4-8622038ba53b)
+![v2_mode_chat](https://github.com/user-attachments/assets/ed9a0290-1dcc-42e7-9585-078ad06f2e28)
 
 **Vision:** If you want to send photos from your disk or images from your camera for analysis, and the selected model does not support Vision, you must enable the `GPT-4 Vision (inline)` plugin in the Plugins menu. This plugin allows you to send photos or images from your camera for analysis in any Chat mode.
 
-![v3_vision_plugins](https://github.com/szczyglis-dev/py-gpt/assets/61396542/104b0a80-7cf8-4a02-aa74-27e89ad2e409)
+![v3_vision_plugins](https://github.com/user-attachments/assets/7d16f5f3-71b0-4c87-8f52-77b42ac9ded8)
 
 With this plugin, you can capture an image with your camera or attach an image and send it for analysis to discuss the photograph:
 
@@ -371,7 +371,7 @@ With this plugin, you can capture an image with your camera or attach an image a
 **Image generation:** If you want to generate images (using DALL-E) directly in chat you must enable plugin `DALL-E 3 (inline)` in the Plugins menu.
 Plugin allows you to generate images in Chat mode:
 
-![v3_img_chat](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c288a4b3-c932-4201-b5a3-8452aea49817)
+![v3_img_chat](https://github.com/user-attachments/assets/1af65452-1ed1-43ec-8d78-21b0e61f0ec3)
 
 
 ## Chat with Audio
@@ -404,12 +404,14 @@ From version `2.0.107` the `davinci` models are deprecated and has been replaced
 The older model version, `DALL-E 2`, is also accessible. Generating images is akin to a chat conversation - a user's prompt triggers the generation, followed by downloading, saving to the computer,
 and displaying the image onscreen. You can send raw prompt to `DALL-E` in `Image generation` mode or ask the model for the best prompt.
 
+![v3_img](https://github.com/user-attachments/assets/d7521cd1-3162-4425-89df-f7a43881574f)
+
 Image generation using DALL-E is available in every mode via plugin `DALL-E 3 Image Generation (inline)`. Just ask any model, in any mode, like e.g. GPT-4 to generate an image and it will do it inline, without need to mode change.
 
 If you want to generate images (using DALL-E) directly in chat you must enable plugin **DALL-E 3 Inline** in the Plugins menu.
 Plugin allows you to generate images in Chat mode:
 
-![v3_img_chat](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c288a4b3-c932-4201-b5a3-8452aea49817)
+![v3_img_chat](https://github.com/user-attachments/assets/1af65452-1ed1-43ec-8d78-21b0e61f0ec3)
 
 ### Multiple variants
 
@@ -448,15 +450,13 @@ images and those found online.
 
 Vision is also integrated into any chat mode via plugin `GPT-4 Vision (inline)`. Just enable the plugin and use Vision in other work modes, such as Chat or Chat with Files.
 
-Vision mode also includes real-time video capture from camera. To enable capture check the option `Camera` on the right-bottom corner. It will enable real-time capturing from your camera. To capture image from camera and append it to chat just click on video at left side. You can also enable `Auto capture` - image will be captured and appended to chat message every time you send message.
-
-![v2_capture_enable](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c40ce0b4-57c8-4643-9982-25d15e68377e)
+Vision mode also includes real-time video capture from camera. To capture image from camera and append it to chat just click on video at left side. You can also enable `Auto capture` - image will be captured and appended to chat message every time you send message.
 
 **1) Video camera real-time image capture**
 
 ![v2_capture1](https://github.com/szczyglis-dev/py-gpt/assets/61396542/477bb7fa-4639-42bb-8466-937e88e4a835)
 
-![v3_vision_chat](https://github.com/szczyglis-dev/py-gpt/assets/61396542/3fbd99e2-5bbf-4bd4-81d8-fd4d7db9d8eb)
+![v3_vision_chat](https://github.com/user-attachments/assets/928d1aed-2689-44e1-b32a-68f02f83fb55)
 
 **2) you can also provide an image URL**
 
@@ -476,7 +476,7 @@ This mode expands on the basic chat functionality by including additional extern
 
 Setting up new assistants is simple - a single click is all it takes, and they instantly sync with the `OpenAI API`. Importing assistants you've previously created with OpenAI into **PyGPT** is also a seamless process.
 
-![v2_mode_assistant](https://github.com/szczyglis-dev/py-gpt/assets/61396542/5c3b5604-928d-4f29-940a-21cc83c8dc34)
+![v2_mode_assistant](https://github.com/user-attachments/assets/726d31f8-9120-47af-9811-269c1178f506)
 
 In Assistant mode you are allowed to storage your files in remote vector store (per Assistant) and manage them easily from app:
 
@@ -679,8 +679,6 @@ The mode activates autonomous mode, where AI begins a conversation with itself.
 You can set this loop to run for any number of iterations. Throughout this sequence, the model will engage
 in self-dialogue, answering his own questions and comments, in order to find the best possible solution, subjecting previously generated steps to criticism.
 
-![v2_agent_toolbox](https://github.com/szczyglis-dev/py-gpt/assets/61396542/a0ae5d13-942e-4a18-9c53-33e7ad1886ff)
-
 **WARNING:** Setting the number of run steps (iterations) to `0` activates an infinite loop which can generate a large number of requests and cause very high token consumption, so use this option with caution! Confirmation will be displayed every time you run the infinite loop.
 
 This mode is similar to `Auto-GPT` - it can be used to create more advanced inferences and to solve problems by breaking them down into subtasks that the model will autonomously perform one after another until the goal is achieved.
@@ -706,9 +704,9 @@ Default is: `chat`.
 
 If you want to use the LlamaIndex mode when running the agent, you can also specify which index `LlamaIndex` should use with the option:
 
-```Settings / Agent (autonomous) / Index to use```
+```Settings / Agents and experts / Index to use```
 
-![v2_agent_settings](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c577d219-eb25-4f0e-9ea5-adf20a6b6b81)
+![v2_agent_settings](https://github.com/user-attachments/assets/5e4658dc-4488-415b-8be4-51bb5b45d0bd)
 
 
 ## Experts (co-op, co-operation mode)
@@ -752,7 +750,7 @@ Give me a list of active experts.
 
 On the left side of the application interface, there is a panel that displays a list of saved conversations. You can save numerous contexts and switch between them with ease. This feature allows you to revisit and continue from any point in a previous conversation. **PyGPT** automatically generates a summary for each context, akin to the way `ChatGPT` operates and gives you the option to modify these titles itself.
 
-![v2_context_list](https://github.com/szczyglis-dev/py-gpt/assets/61396542/9228ea4c-f30c-4b02-ba85-da10b4e2eb7b)
+![v2_context_list](https://github.com/user-attachments/assets/75d1eb9d-da85-422b-8b54-d5f3e95ba059)
 
 You can disable context support in the settings by using the following option:
 
@@ -845,7 +843,7 @@ The `Files I/O` plugin takes care of file operations in the `data` directory, wh
 
 To allow the model to manage files or python code execution, the `+ Tools` option must be active, along with the above-mentioned plugins:
 
-![v2_code_execute](https://github.com/szczyglis-dev/py-gpt/assets/61396542/d5181eeb-6ab4-426f-93f0-037d256cb078)
+![v2_code_execute](https://github.com/user-attachments/assets/0b96b362-52ca-4928-9675-a39038d787a1)
 
 # Presets
 
@@ -855,7 +853,7 @@ Presets in **PyGPT** are essentially templates used to store and quickly apply d
 
 The application lets you create as many presets as needed and easily switch among them. Additionally, you can clone an existing preset, which is useful for creating variations based on previously set configurations and experimentation.
 
-![v2_preset](https://github.com/user-attachments/assets/b6a4a2c0-3944-4201-8674-05e86cc8a326)
+![v2_preset](https://github.com/user-attachments/assets/15d142c7-6669-4e0a-9a21-e57555a4cb83)
 
 ## Example usage
 
@@ -1221,7 +1219,7 @@ Options reference: https://pypi.org/project/SpeechRecognition/1.3.1/
 The plugin lets you turn text into speech using the TTS model from OpenAI or other services like ``Microsoft Azure``, ``Google``, and ``Eleven Labs``. You can add more text-to-speech providers to it too. `OpenAI TTS` does not require any additional API keys or extra configuration; it utilizes the main OpenAI key.
 Microsoft Azure requires to have an Azure API Key. Before using speech synthesis via `Microsoft Azure`, `Google` or `Eleven Labs`, you must configure the audio plugin with your API keys, regions and voices if required.
 
-![v2_azure](https://github.com/szczyglis-dev/py-gpt/assets/61396542/8035e9a5-5a01-44a1-85da-6e44c52459e4)
+![v2_azure](https://github.com/user-attachments/assets/475108c1-5ea8-4f43-8cd5-effcd5ef352c)
 
 Through the available options, you can select the voice that you want the model to use. More voice synthesis providers coming soon.
 
@@ -1621,7 +1619,7 @@ Docker image to use for sandbox *Default:* `python:3.8-alpine`
 
 With the `Custom Commands` plugin, you can integrate **PyGPT** with your operating system and scripts or applications. You can define an unlimited number of custom commands and instruct GPT on when and how to execute them. Configuration is straightforward, and **PyGPT** includes a simple tutorial command for testing and learning how it works:
 
-![v2_custom_cmd](https://github.com/szczyglis-dev/py-gpt/assets/61396542/b30b8724-9ca1-44b1-abc7-78241588e1f6)
+![v2_custom_cmd](https://github.com/user-attachments/assets/e1d803e8-9452-4507-a9a6-7a43b83b897d)
 
 To add a new custom command, click the **ADD** button and then:
 
@@ -1946,7 +1944,7 @@ Then, copy the following two items into **PyGPT**:
 
 These data must be configured in the appropriate fields in the `Plugins / Settings...` menu:
 
-![v2_plugin_google](https://github.com/szczyglis-dev/py-gpt/assets/61396542/f2e0df62-caaa-40ef-9b1e-239b2f912ec8)
+![v2_plugin_google](https://github.com/user-attachments/assets/6a9ed44d-7a1e-45f7-a9c9-afd2b66baccc)
 
 - `Google Custom Search API KEY` *google_api_key*
 
@@ -2343,7 +2341,7 @@ It is a JSON object wrapped between `~###~`. The application extracts the JSON o
 
 **Tip:** The `+ Tools` option checkbox must be enabled to allow the execution of commands from plugins. Disable the option if you do not want to use commands, to prevent additional token usage (as the command execution system prompt consumes additional tokens).
 
-![v2_code_execute](https://github.com/szczyglis-dev/py-gpt/assets/61396542/d5181eeb-6ab4-426f-93f0-037d256cb078)
+![v2_code_execute](https://github.com/user-attachments/assets/0b96b362-52ca-4928-9675-a39038d787a1)
 
 When native API function calls are disabled, a special system prompt responsible for invoking commands is added to the main system prompt if the `+ Tools` option is active.
 
@@ -2434,7 +2432,7 @@ PyGPT features several useful tools, including:
 - Python Code Interpreter
 - HTML/JS Canvas (built-in HTML renderer)
 
-![v2_tool_menu](https://github.com/szczyglis-dev/py-gpt/assets/61396542/fb3f44af-f0de-4e18-bcac-e20389a651c9)
+![v2_tool_menu](https://github.com/user-attachments/assets/c8041cdc-64fd-41a5-b1af-8c987b06e5f0)
 
 
 ## Notepad
@@ -2512,13 +2510,13 @@ the system prompt, any additional data, and those used within the context (the m
 
 **Remember that these are only approximate calculations and do not include, for example, the number of tokens consumed by some plugins. You can find the exact number of tokens used on the OpenAI website.**
 
-![v2_tokens1](https://github.com/szczyglis-dev/py-gpt/assets/61396542/29b610be-9e96-41cc-84f0-1b946886f801)
+![v2_tokens1](https://github.com/user-attachments/assets/e131a880-986d-4014-b5fd-9820516c6a10)
 
 ## Total tokens
 
 After receiving a response from the model, the application displays the actual total number of tokens used for the query (received from the API).
 
-![v2_tokens2](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c81e95b5-7c33-41a6-8910-21d674db37e5)
+![v2_tokens2](https://github.com/user-attachments/assets/52cd355f-c7b0-432b-9ff9-31c8a6a60b89)
 
 
 # Accessibility
@@ -2656,7 +2654,7 @@ The following basic options can be modified directly within the application:
 Config -> Settings...
 ```
 
-![v2_settings](https://github.com/szczyglis-dev/py-gpt/assets/61396542/43622c58-6cdb-4ed8-b47d-47729763db04)
+![v2_settings](https://github.com/user-attachments/assets/003b0f86-8225-4478-8525-fb9324ac5c88)
 
 **General**
 
@@ -2961,6 +2959,19 @@ You can manually edit the configuration files in this directory (this is your wo
 
 ---
 
+## Setting the Working Directory Using Command Line Arguments
+
+To set the current working directory using a command-line argument, use:
+
+```
+python3 ./run.py --workdir="/path/to/workdir"
+```
+or, for the binary version:
+
+```
+pygpt.exe --workdir="/path/to/workdir"
+```
+
 ## Translations / Locale
 
 Locale `.ini` files are located in the app directory:
@@ -3833,6 +3861,11 @@ may consume additional tokens that are not displayed in the main window.
 
 ## Recent changes:
 
+**2.4.36 (2024-11-28)**
+
+- Added a new command-line argument: --workdir="/path/to/workdir" to explicitly set the current working directory.
+- Fix: start image generation in Image mode.
+
 **2.4.35 (2024-11-28)**
 
 - Docker removed from dependencies in Snap version #82
pygpt_net/CHANGELOG.txt CHANGED
@@ -1,3 +1,8 @@
+2.4.36 (2024-11-28)
+
+- Added a new command-line argument: --workdir="/path/to/workdir" to explicitly set the current working directory.
+- Fix: start image generation in Image mode.
+
 2.4.35 (2024-11-28)
 
 - Docker removed from dependencies in Snap version #82
pygpt_net/__init__.py CHANGED
@@ -13,7 +13,7 @@ __author__ = "Marcin Szczygliński"
 __copyright__ = "Copyright 2024, Marcin Szczygliński"
 __credits__ = ["Marcin Szczygliński"]
 __license__ = "MIT"
-__version__ = "2.4.35"
+__version__ = "2.4.36"
 __build__ = "2024.11.28"
 __maintainer__ = "Marcin Szczygliński"
 __github__ = "https://github.com/szczyglis-dev/py-gpt"
pygpt_net/config.py CHANGED
@@ -104,7 +104,19 @@ class Config:
 
         :return: base workdir path
         """
-        return os.path.join(Path.home(), '.config', Config.CONFIG_DIR)
+        path = os.path.join(Path.home(), '.config', Config.CONFIG_DIR)
+        if "PYGPT_WORKDIR" in os.environ and os.environ["PYGPT_WORKDIR"] != "":
+            print("FORCE using workdir: {}".format(os.environ["PYGPT_WORKDIR"]))
+            # convert relative path to absolute path if needed
+            if not os.path.isabs(os.environ["PYGPT_WORKDIR"]):
+                path = os.path.join(os.getcwd(), os.environ["PYGPT_WORKDIR"])
+            else:
+                path = os.environ["PYGPT_WORKDIR"]
+            if not os.path.exists(path):
+                print("Workdir path not exists: {}".format(path))
+                print("Creating workdir path: {}".format(path))
+                os.makedirs(path, exist_ok=True)
+        return path
 
     @staticmethod
     def prepare_workdir() -> str:
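The resolution logic added in this hunk can be sketched as a standalone function (a simplified reconstruction for illustration; `resolve_workdir` and the default path below are hypothetical names, not the package's actual API):

```python
import os
from pathlib import Path

def resolve_workdir(default: str) -> str:
    """Return the workdir, honoring a PYGPT_WORKDIR override.

    Mirrors the diff above: a relative override is resolved against
    the current directory, and the forced directory is created if missing.
    """
    path = default
    override = os.environ.get("PYGPT_WORKDIR", "")
    if override:
        # relative overrides are resolved against the current working directory
        path = override if os.path.isabs(override) else os.path.join(os.getcwd(), override)
        # create the forced workdir if it does not exist yet
        os.makedirs(path, exist_ok=True)
    return path

default = os.path.join(Path.home(), ".config", "pygpt-net")
```

Without the environment variable set, the default `~/.config/pygpt-net` path is returned unchanged, so the override is fully opt-in.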
@@ -12,6 +12,7 @@
 from PySide6.QtCore import Slot
 
 from pygpt_net.core.bridge.context import BridgeContext
+from pygpt_net.core.types import MODE_IMAGE
 from pygpt_net.item.ctx import CtxItem
 from pygpt_net.core.events import Event, KernelEvent, RenderEvent
 from pygpt_net.utils import trans
@@ -94,7 +95,7 @@ class Image:
         # generate image
         bridge_context = BridgeContext(
             ctx=ctx,
-            mode="image",
+            mode=MODE_IMAGE,
             model=model_data,  # model instance
             prompt=text,
         )
@@ -85,7 +85,7 @@ class Capture:
 
         # clear attachments before capture if needed
         if self.window.controller.attachment.is_capture_clear():
-            self.window.controller.attachment.clear(True, auto=True, force=True)
+            self.window.controller.attachment.clear(True, auto=True)
 
         try:
             # prepare filename
@@ -127,7 +127,7 @@
 
         # clear attachments before capture if needed
         if self.window.controller.attachment.is_capture_clear():
-            self.window.controller.attachment.clear(True, auto=True, force=True)
+            self.window.controller.attachment.clear(True, auto=True)
 
         try:
             # prepare filename
@@ -1,7 +1,7 @@
 {
     "__meta__": {
-        "version": "2.4.35",
-        "app.version": "2.4.35",
+        "version": "2.4.36",
+        "app.version": "2.4.36",
         "updated_at": "2024-11-28T00:00:00"
     },
     "access.audio.event.speech": false,
@@ -1,7 +1,7 @@
 {
     "__meta__": {
-        "version": "2.4.35",
-        "app.version": "2.4.35",
+        "version": "2.4.36",
+        "app.version": "2.4.36",
         "updated_at": "2024-11-28T00:00:00"
     },
     "items": {
@@ -1,7 +1,7 @@
 {
     "__meta__": {
-        "version": "2.4.35",
-        "app.version": "2.4.35",
+        "version": "2.4.36",
+        "app.version": "2.4.36",
         "updated_at": "2024-11-28T00:00:00"
     },
     "items": {
pygpt_net/launcher.py CHANGED
@@ -8,7 +8,7 @@
 # Created By  : Marcin Szczygliński
 # Updated Date: 2024.11.20 21:00:00
 # ================================================== #
-
+import os
 import sys
 import argparse
 from logging import ERROR, WARNING, INFO, DEBUG
@@ -43,6 +43,7 @@ class Launcher:
         self.force_legacy = False
         self.force_disable_gpu = False
         self.shortcut_filter = None
+        self.workdir = None
 
     def setup(self) -> dict:
         """
@@ -69,6 +70,12 @@ class Launcher:
             required=False,
             help="force disable OpenGL (1=disabled, 0=enabled)",
         )
+        parser.add_argument(
+            "-w",
+            "--workdir",
+            required=False,
+            help="force set workdir",
+        )
        args = vars(parser.parse_args())
 
         # set log level [ERROR|WARNING|INFO|DEBUG]
@@ -93,6 +100,11 @@ class Launcher:
             print("** Force disable GPU enabled")
             self.force_disable_gpu = True
 
+        # force set workdir
+        if "workdir" in args and args["workdir"] is not None:
+            # set as environment variable
+            os.environ["PYGPT_WORKDIR"] = args["workdir"]
+
         return args
 
     def init(self):
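The launcher change above wires the new flag to the config layer through an environment variable. A condensed sketch of that mechanism (illustrative only; `parse_args` here is a hypothetical standalone helper, not the actual `Launcher.setup` method):

```python
import argparse
import os

def parse_args(argv: list) -> dict:
    """Parse launcher arguments; --workdir is exported as PYGPT_WORKDIR.

    The config layer (see the pygpt_net/config.py hunk) reads this
    environment variable when resolving the base workdir.
    """
    parser = argparse.ArgumentParser()
    parser.add_argument("-w", "--workdir", required=False, help="force set workdir")
    args = vars(parser.parse_args(argv))
    if args.get("workdir") is not None:
        # hand the override to the config layer via the environment
        os.environ["PYGPT_WORKDIR"] = args["workdir"]
    return args
```

Passing the override through the environment rather than a function argument keeps the launcher decoupled from the static `Config` path resolution.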
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: pygpt-net
-Version: 2.4.35
+Version: 2.4.36
 Summary: Desktop AI Assistant powered by models: OpenAI o1, GPT-4o, GPT-4, GPT-4 Vision, GPT-3.5, DALL-E 3, Llama 3, Mistral, Gemini, Claude, Bielik, and other models supported by Langchain, Llama Index, and Ollama. Features include chatbot, text completion, image generation, vision analysis, speech-to-text, internet access, file handling, command execution and more.
 Home-page: https://pygpt.net
 License: MIT
@@ -92,7 +92,7 @@ Description-Content-Type: text/markdown
 
 [![pygpt](https://snapcraft.io/pygpt/badge.svg)](https://snapcraft.io/pygpt)
 
-Release: **2.4.35** | build: **2024.11.28** | Python: **>=3.10, <3.12**
+Release: **2.4.36** | build: **2024.11.28** | Python: **>=3.10, <3.12**
 
 > Official website: https://pygpt.net | Documentation: https://pygpt.readthedocs.io
 >
@@ -114,13 +114,13 @@ For audio interactions, **PyGPT** includes speech synthesis using the `Microsoft
 
 Multiple operation modes are included, such as chat, text completion, assistant, vision, LangChain, Chat with Files (via `LlamaIndex`), commands execution, external API calls and image generation, making **PyGPT** a multi-tool for many AI-driven tasks.
 
-**Video** (mp4, version `2.2.35`, build `2024-11-28`):
+**Video** (mp4, version `2.4.35`, build `2024-11-28`):
 
 https://github.com/user-attachments/assets/5751a003-950f-40e7-a655-d098bbf27b0c
 
-**Screenshot** (version `2.2.35`, build `2024-11-28`):
+**Screenshot** (version `2.4.35`, build `2024-11-28`):
 
-![v2_main](https://github.com/user-attachments/assets/29f1a595-006c-4e2c-b665-f626368ca330)
+![v2_main](https://github.com/user-attachments/assets/5d1b0da4-f8b3-437f-af07-764798315253)
 
 You can download compiled 64-bit versions for Windows and Linux here: https://pygpt.net/#download
 
@@ -219,7 +219,7 @@ $ sudo snap connect pygpt:docker-executables docker:docker-executables
 ```
 
 ````commandline
-sudo snap connect pygpt:docker docker:docker-daemon
+$ sudo snap connect pygpt:docker docker:docker-daemon
 ````
 
 ## PyPi (pip)
@@ -448,11 +448,11 @@ The main part of the interface is a chat window where you see your conversations
 
 Above where you type your messages, the interface shows you the number of tokens your message will use up as you type it – this helps to keep track of usage. There is also a feature to attach and upload files in this area. Go to the `Files and Attachments` section for more information on how to use attachments.
 
-![v2_mode_chat](https://github.com/user-attachments/assets/719854af-c28c-4329-a4c4-8622038ba53b)
+![v2_mode_chat](https://github.com/user-attachments/assets/ed9a0290-1dcc-42e7-9585-078ad06f2e28)
 
 **Vision:** If you want to send photos from your disk or images from your camera for analysis, and the selected model does not support Vision, you must enable the `GPT-4 Vision (inline)` plugin in the Plugins menu. This plugin allows you to send photos or images from your camera for analysis in any Chat mode.
 
-![v3_vision_plugins](https://github.com/szczyglis-dev/py-gpt/assets/61396542/104b0a80-7cf8-4a02-aa74-27e89ad2e409)
+![v3_vision_plugins](https://github.com/user-attachments/assets/7d16f5f3-71b0-4c87-8f52-77b42ac9ded8)
 
 With this plugin, you can capture an image with your camera or attach an image and send it for analysis to discuss the photograph:
 
@@ -461,7 +461,7 @@ With this plugin, you can capture an image with your camera or attach an image a
 **Image generation:** If you want to generate images (using DALL-E) directly in chat you must enable plugin `DALL-E 3 (inline)` in the Plugins menu.
 Plugin allows you to generate images in Chat mode:
 
-![v3_img_chat](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c288a4b3-c932-4201-b5a3-8452aea49817)
+![v3_img_chat](https://github.com/user-attachments/assets/1af65452-1ed1-43ec-8d78-21b0e61f0ec3)
 
 
 ## Chat with Audio
@@ -494,12 +494,14 @@ From version `2.0.107` the `davinci` models are deprecated and has been replaced
 The older model version, `DALL-E 2`, is also accessible. Generating images is akin to a chat conversation - a user's prompt triggers the generation, followed by downloading, saving to the computer,
 and displaying the image onscreen. You can send raw prompt to `DALL-E` in `Image generation` mode or ask the model for the best prompt.
 
+![v3_img](https://github.com/user-attachments/assets/d7521cd1-3162-4425-89df-f7a43881574f)
+
 Image generation using DALL-E is available in every mode via plugin `DALL-E 3 Image Generation (inline)`. Just ask any model, in any mode, like e.g. GPT-4 to generate an image and it will do it inline, without need to mode change.
 
 If you want to generate images (using DALL-E) directly in chat you must enable plugin **DALL-E 3 Inline** in the Plugins menu.
 Plugin allows you to generate images in Chat mode:
 
-![v3_img_chat](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c288a4b3-c932-4201-b5a3-8452aea49817)
+![v3_img_chat](https://github.com/user-attachments/assets/1af65452-1ed1-43ec-8d78-21b0e61f0ec3)
 
 ### Multiple variants
 
@@ -538,15 +540,13 @@ images and those found online.
 
 Vision is also integrated into any chat mode via plugin `GPT-4 Vision (inline)`. Just enable the plugin and use Vision in other work modes, such as Chat or Chat with Files.
 
-Vision mode also includes real-time video capture from camera. To enable capture check the option `Camera` on the right-bottom corner. It will enable real-time capturing from your camera. To capture image from camera and append it to chat just click on video at left side. You can also enable `Auto capture` - image will be captured and appended to chat message every time you send message.
-
-![v2_capture_enable](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c40ce0b4-57c8-4643-9982-25d15e68377e)
+Vision mode also includes real-time video capture from camera. To capture image from camera and append it to chat just click on video at left side. You can also enable `Auto capture` - image will be captured and appended to chat message every time you send message.
 
 **1) Video camera real-time image capture**
 
 ![v2_capture1](https://github.com/szczyglis-dev/py-gpt/assets/61396542/477bb7fa-4639-42bb-8466-937e88e4a835)
 
-![v3_vision_chat](https://github.com/szczyglis-dev/py-gpt/assets/61396542/3fbd99e2-5bbf-4bd4-81d8-fd4d7db9d8eb)
+![v3_vision_chat](https://github.com/user-attachments/assets/928d1aed-2689-44e1-b32a-68f02f83fb55)
 
 **2) you can also provide an image URL**
 
@@ -566,7 +566,7 @@ This mode expands on the basic chat functionality by including additional extern
566
566
 
567
567
  Setting up new assistants is simple - a single click is all it takes, and they instantly sync with the `OpenAI API`. Importing assistants you've previously created with OpenAI into **PyGPT** is also a seamless process.
568
568
 
569
- ![v2_mode_assistant](https://github.com/szczyglis-dev/py-gpt/assets/61396542/5c3b5604-928d-4f29-940a-21cc83c8dc34)
569
+ ![v2_mode_assistant](https://github.com/user-attachments/assets/726d31f8-9120-47af-9811-269c1178f506)
570
570
 
571
571
  In Assistant mode you are allowed to storage your files in remote vector store (per Assistant) and manage them easily from app:
572
572
 
@@ -769,8 +769,6 @@ The mode activates autonomous mode, where AI begins a conversation with itself.
769
769
  You can set this loop to run for any number of iterations. Throughout this sequence, the model will engage
770
770
  in self-dialogue, answering his own questions and comments, in order to find the best possible solution, subjecting previously generated steps to criticism.
771
771
 
772
- ![v2_agent_toolbox](https://github.com/szczyglis-dev/py-gpt/assets/61396542/a0ae5d13-942e-4a18-9c53-33e7ad1886ff)
773
-
774
772
  **WARNING:** Setting the number of run steps (iterations) to `0` activates an infinite loop which can generate a large number of requests and cause very high token consumption, so use this option with caution! Confirmation will be displayed every time you run the infinite loop.
775
773
 
776
774
  This mode is similar to `Auto-GPT` - it can be used to create more advanced inferences and to solve problems by breaking them down into subtasks that the model will autonomously perform one after another until the goal is achieved.
@@ -796,9 +794,9 @@ Default is: `chat`.

 If you want to use the LlamaIndex mode when running the agent, you can also specify which index `LlamaIndex` should use with the option:

- ```Settings / Agent (autonomous) / Index to use```
+ ```Settings / Agents and experts / Index to use```

- ![v2_agent_settings](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c577d219-eb25-4f0e-9ea5-adf20a6b6b81)
+ ![v2_agent_settings](https://github.com/user-attachments/assets/5e4658dc-4488-415b-8be4-51bb5b45d0bd)
 
  ## Experts (co-op, co-operation mode)
@@ -842,7 +840,7 @@ Give me a list of active experts.

 On the left side of the application interface, there is a panel that displays a list of saved conversations. You can save numerous contexts and switch between them with ease. This feature allows you to revisit and continue from any point in a previous conversation. **PyGPT** automatically generates a summary for each context, akin to the way `ChatGPT` operates, and gives you the option to modify these titles yourself.

- ![v2_context_list](https://github.com/szczyglis-dev/py-gpt/assets/61396542/9228ea4c-f30c-4b02-ba85-da10b4e2eb7b)
+ ![v2_context_list](https://github.com/user-attachments/assets/75d1eb9d-da85-422b-8b54-d5f3e95ba059)
 
 You can disable context support in the settings by using the following option:
 
@@ -935,7 +933,7 @@ The `Files I/O` plugin takes care of file operations in the `data` directory, wh

 To allow the model to manage files or Python code execution, the `+ Tools` option must be active, along with the above-mentioned plugins:

- ![v2_code_execute](https://github.com/szczyglis-dev/py-gpt/assets/61396542/d5181eeb-6ab4-426f-93f0-037d256cb078)
+ ![v2_code_execute](https://github.com/user-attachments/assets/0b96b362-52ca-4928-9675-a39038d787a1)
 
 # Presets
 
@@ -945,7 +943,7 @@ Presets in **PyGPT** are essentially templates used to store and quickly apply d

 The application lets you create as many presets as needed and easily switch among them. Additionally, you can clone an existing preset, which is useful for creating variations based on previously set configurations and experimentation.

- ![v2_preset](https://github.com/user-attachments/assets/b6a4a2c0-3944-4201-8674-05e86cc8a326)
+ ![v2_preset](https://github.com/user-attachments/assets/15d142c7-6669-4e0a-9a21-e57555a4cb83)
 
 ## Example usage
 
@@ -1311,7 +1309,7 @@ Options reference: https://pypi.org/project/SpeechRecognition/1.3.1/
 The plugin lets you turn text into speech using the TTS model from OpenAI or other services like ``Microsoft Azure``, ``Google``, and ``Eleven Labs``. You can add more text-to-speech providers to it too. `OpenAI TTS` does not require any additional API keys or extra configuration; it utilizes the main OpenAI key.
 Microsoft Azure requires an Azure API Key. Before using speech synthesis via `Microsoft Azure`, `Google` or `Eleven Labs`, you must configure the audio plugin with your API keys, regions and voices if required.
 
- ![v2_azure](https://github.com/szczyglis-dev/py-gpt/assets/61396542/8035e9a5-5a01-44a1-85da-6e44c52459e4)
+ ![v2_azure](https://github.com/user-attachments/assets/475108c1-5ea8-4f43-8cd5-effcd5ef352c)
 
 Through the available options, you can select the voice that you want the model to use. More voice synthesis providers coming soon.
 
@@ -1711,7 +1709,7 @@ Docker image to use for sandbox *Default:* `python:3.8-alpine`

 With the `Custom Commands` plugin, you can integrate **PyGPT** with your operating system and scripts or applications. You can define an unlimited number of custom commands and instruct GPT on when and how to execute them. Configuration is straightforward, and **PyGPT** includes a simple tutorial command for testing and learning how it works:

- ![v2_custom_cmd](https://github.com/szczyglis-dev/py-gpt/assets/61396542/b30b8724-9ca1-44b1-abc7-78241588e1f6)
+ ![v2_custom_cmd](https://github.com/user-attachments/assets/e1d803e8-9452-4507-a9a6-7a43b83b897d)
 
 To add a new custom command, click the **ADD** button and then:
 
@@ -2036,7 +2034,7 @@ Then, copy the following two items into **PyGPT**:

 This data must be configured in the appropriate fields in the `Plugins / Settings...` menu:

- ![v2_plugin_google](https://github.com/szczyglis-dev/py-gpt/assets/61396542/f2e0df62-caaa-40ef-9b1e-239b2f912ec8)
+ ![v2_plugin_google](https://github.com/user-attachments/assets/6a9ed44d-7a1e-45f7-a9c9-afd2b66baccc)
 
 - `Google Custom Search API KEY` *google_api_key*
 
@@ -2433,7 +2431,7 @@ It is a JSON object wrapped between `~###~`. The application extracts the JSON o

 **Tip:** The `+ Tools` option checkbox must be enabled to allow the execution of commands from plugins. Disable the option if you do not want to use commands, to prevent additional token usage (as the command execution system prompt consumes additional tokens).

- ![v2_code_execute](https://github.com/szczyglis-dev/py-gpt/assets/61396542/d5181eeb-6ab4-426f-93f0-037d256cb078)
+ ![v2_code_execute](https://github.com/user-attachments/assets/0b96b362-52ca-4928-9675-a39038d787a1)
 
  When native API function calls are disabled, a special system prompt responsible for invoking commands is added to the main system prompt if the `+ Tools` option is active.
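The hunk context above describes a command returned as a JSON object wrapped between `~###~` markers, which the application extracts from the model's response. A minimal sketch of such extraction, assuming one command object per response (illustrative only, not PyGPT's actual parser):

```python
import json

def extract_command(response: str):
    """Pull a JSON object wrapped between ~###~ markers out of a model
    response; returns None if the markers or valid JSON are missing."""
    marker = "~###~"
    start = response.find(marker)
    if start == -1:
        return None
    end = response.find(marker, start + len(marker))
    if end == -1:
        return None
    payload = response[start + len(marker):end].strip()
    try:
        return json.loads(payload)
    except json.JSONDecodeError:
        return None
```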
 
@@ -2524,7 +2522,7 @@ PyGPT features several useful tools, including:
 - Python Code Interpreter
 - HTML/JS Canvas (built-in HTML renderer)
 
- ![v2_tool_menu](https://github.com/szczyglis-dev/py-gpt/assets/61396542/fb3f44af-f0de-4e18-bcac-e20389a651c9)
+ ![v2_tool_menu](https://github.com/user-attachments/assets/c8041cdc-64fd-41a5-b1af-8c987b06e5f0)
 
 
  ## Notepad
@@ -2602,13 +2600,13 @@ the system prompt, any additional data, and those used within the context (the m

 **Remember that these are only approximate calculations and do not include, for example, the number of tokens consumed by some plugins. You can find the exact number of tokens used on the OpenAI website.**

- ![v2_tokens1](https://github.com/szczyglis-dev/py-gpt/assets/61396542/29b610be-9e96-41cc-84f0-1b946886f801)
+ ![v2_tokens1](https://github.com/user-attachments/assets/e131a880-986d-4014-b5fd-9820516c6a10)
 
 ## Total tokens

 After receiving a response from the model, the application displays the actual total number of tokens used for the query (received from the API).
 
- ![v2_tokens2](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c81e95b5-7c33-41a6-8910-21d674db37e5)
+ ![v2_tokens2](https://github.com/user-attachments/assets/52cd355f-c7b0-432b-9ff9-31c8a6a60b89)
 
 
  # Accessibility
@@ -2746,7 +2744,7 @@ The following basic options can be modified directly within the application:
 Config -> Settings...
 ```

- ![v2_settings](https://github.com/szczyglis-dev/py-gpt/assets/61396542/43622c58-6cdb-4ed8-b47d-47729763db04)
+ ![v2_settings](https://github.com/user-attachments/assets/003b0f86-8225-4478-8525-fb9324ac5c88)
 
 **General**
 
@@ -3051,6 +3049,36 @@ You can manually edit the configuration files in this directory (this is your wo

 ---

+ ## Setting the Working Directory Using Command Line Arguments
+
+ To set the current working directory using a command-line argument, use:
+
+ ```
+ python3 ./run.py --workdir="/path/to/workdir"
+ ```
+
+ or, for the binary version:
+
+ ```
+ pygpt.exe --workdir="/path/to/workdir"
+ ```
+
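The handling of a `--workdir` flag like the one documented above can be sketched with Python's `argparse`. This is a minimal illustration of parsing such a flag, not the actual PyGPT launcher code:

```python
import argparse

def parse_workdir(argv):
    """Parse an optional --workdir flag from a list of CLI arguments;
    returns the given path, or None when the flag is absent."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--workdir", type=str, default=None,
                        help="explicitly set the current working directory")
    # parse_known_args tolerates any other flags the app may accept
    args, _ = parser.parse_known_args(argv)
    return args.workdir

workdir = parse_workdir(['--workdir=/path/to/workdir'])  # "/path/to/workdir"
```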
  ## Translations / Locale
  Locale `.ini` files are located in the app directory:
@@ -3923,6 +3951,11 @@ may consume additional tokens that are not displayed in the main window.

 ## Recent changes:

+ **2.4.36 (2024-11-28)**
+
+ - Added a new command-line argument: --workdir="/path/to/workdir" to explicitly set the current working directory.
+ - Fix: start image generation in Image mode.
+
  **2.4.35 (2024-11-28)**
  - Docker removed from dependencies in Snap version #82
@@ -1,11 +1,11 @@
- CHANGELOG.md,sha256=nOoW3BfkRRDM59KKyd5MciSTlUC_Q77hvKp6lctIfWU,76811
- README.md,sha256=fUFkVHHbpzlfHSxSCklh5xbE_CQQeEAFPwL8zRlFT1U,161874
+ CHANGELOG.md,sha256=ze6FSM2VH2viP0X8-REeDLDNixH5XEWIpTe9a_TRieQ,76996
+ README.md,sha256=yZC0MUnbYtMzRy8yYi06BQJ0JgKS1tIQ7irzFoP0GK4,162165
  icon.png,sha256=CzcINJaU23a9hNjsDlDNbyuiEvKZ4Wg6DQVYF6SpuRg,13970
- pygpt_net/CHANGELOG.txt,sha256=Ca191tG6M9UBviV3N9xfyGZZnO-mhoB81YJ5IYlF29g,75370
+ pygpt_net/CHANGELOG.txt,sha256=OANVuWFOL2EVR5SgtCTfaNf8cEuTT8uib6oPWnwjTvQ,75552
  pygpt_net/LICENSE,sha256=6Ku72-zJ8wO5VIR87UoJ5P_coCVjPghaFL9ZF2jLp7E,1146
- pygpt_net/__init__.py,sha256=0Vn08xxYOS5qPUeJ3xaBxNwoPIiFgxSsJKB4HZuQfSo,1067
+ pygpt_net/__init__.py,sha256=Lx1tpxBy21pZsDUu6TwVNXa0dK5sEmG4yr1Uw6Q6Ruo,1067
  pygpt_net/app.py,sha256=Q7g-2UlF7FlEOBytbGb_nrjT4zEio2HzfzQd687QuUo,15930
- pygpt_net/config.py,sha256=hwleTfPpI8SpFwUH5I9VVgCFC-cgeeoHhxlezNdJPBE,15731
+ pygpt_net/config.py,sha256=Qc1FOBtTf3O6A6-6KoqUGtoJ0u8hXQeowvCVbZFwtik,16405
  pygpt_net/container.py,sha256=BemiVZPpPNIzfB-ZvnZeeBPFu-AcX2c30OqYFylEjJc,4023
 pygpt_net/controller/__init__.py,sha256=q0YzbH5Hm4JyOl8zt1IOHScWNFLCG330jFghj3IUznc,5786
  pygpt_net/controller/access/__init__.py,sha256=5DzR7zVmFsOICo9I5mEQcIOgsk2WCNi1amUWxExwUiY,2639
@@ -33,7 +33,7 @@ pygpt_net/controller/chat/audio.py,sha256=1eX_kIiRLFBDrNAPVthj-1ftknhdOkn3jWBuC7
 pygpt_net/controller/chat/command.py,sha256=_lXHki5pbTi8Pvz_BzP3VxGUM_0Ztr1mE5rsatPmSoE,2833
 pygpt_net/controller/chat/common.py,sha256=uLRRT1ZNGLJiiyJ42lJ7rjDwhqPEfX1RD-EnoFhBmmU,13875
 pygpt_net/controller/chat/files.py,sha256=Q53Sr7-uJwI_L1FKa1raApFvZ4AI0Eoj766iY9a6Iik,2705
- pygpt_net/controller/chat/image.py,sha256=ciksQ_XoyvWn1FeY782jtKjy8jzjae5q_wYTSBnVjSg,7819
+ pygpt_net/controller/chat/image.py,sha256=cIh_IlqFYAzyNNMHgKDT2MB0DnXf78A_yBEIurvk8ME,7866
 pygpt_net/controller/chat/input.py,sha256=UhGfOqKSJNJMNh85UEcYhZ7cRg32qPeH-2LgjC3_vwE,9811
 pygpt_net/controller/chat/output.py,sha256=EvXVTYJANOQhmpJE6wNznI6jzhQ9kcGo3ne0St1ha-0,9112
  pygpt_net/controller/chat/render.py,sha256=h7lHc0AL0D3IvnXRxSKgv0Y0B3_ZnJUgt_4oJbamlFU,16240
@@ -82,7 +82,7 @@ pygpt_net/controller/model/__init__.py,sha256=pDbxudu0jM1soOK0B9-b0vlnfnQnT3-qze
 pygpt_net/controller/model/editor.py,sha256=H0dmzkqELUnwuaYx6JL7-P8TfkClJfvRbpJwWcdKH2k,12603
 pygpt_net/controller/notepad.py,sha256=SuB18pUbN6cewnBTNTn-SS5ELq8Ka7VDhRagvWw3Vd0,8987
 pygpt_net/controller/painter/__init__.py,sha256=1Ekmr2a3irDkSb2wowiPXhW59rfdZOW1tdbxeubph-k,2747
- pygpt_net/controller/painter/capture.py,sha256=zpLfLkEE0g9BwF6dLyg27J_mbI92pZRaemcN_WpVw7s,6331
+ pygpt_net/controller/painter/capture.py,sha256=LjhLqGVUKdaCI026FYJXAKUme9EmUdhwhRrQrW-Bvv8,6307
 pygpt_net/controller/painter/common.py,sha256=4FtwKdrvXTyq1MWNN6GP8OYO4H7BKi95K4fXPzHEH64,6292
 pygpt_net/controller/plugins/__init__.py,sha256=dmO2lYxVkED6ew5Z0Sl3S6577e3Q9CD-bgPPTNPjOf8,15720
  pygpt_net/controller/plugins/presets.py,sha256=g2ROVyI-Lc9yjIZu_XinCZPE5F0Dx5N6l4lAJK_N2-8,11930
@@ -242,9 +242,9 @@ pygpt_net/css_rc.py,sha256=i13kX7irhbYCWZ5yJbcMmnkFp_UfS4PYnvRFSPF7XXo,11349
 pygpt_net/data/audio/click_off.mp3,sha256=aNiRDP1pt-Jy7ija4YKCNFBwvGWbzU460F4pZWZDS90,65201
 pygpt_net/data/audio/click_on.mp3,sha256=qfdsSnthAEHVXzeyN4LlC0OvXuyW8p7stb7VXtlvZ1k,65201
 pygpt_net/data/audio/ok.mp3,sha256=LTiV32pEBkpUGBkKkcOdOFB7Eyt_QoP2Nv6c5AaXftk,32256
- pygpt_net/data/config/config.json,sha256=YnLxyLYEL0zHCl09yNTtVw_rHf37PxgB04OXtacCa7Q,19224
- pygpt_net/data/config/models.json,sha256=dfm-OONaKus3q0ZyXh_wIac3ELucuWsGCe9zjuxEfkQ,48872
- pygpt_net/data/config/modes.json,sha256=pOxwL8WQ8LbnRcWiIUaJui54gU2JvEt4DLQHNi5bU0k,1923
+ pygpt_net/data/config/config.json,sha256=d2IPMfZiPkY-w85RnK6HtMmYLVH3YhFnok_fRMqe4sY,19224
+ pygpt_net/data/config/models.json,sha256=ii9y2Gwwe6Vo4OBPkO9yHIQXxnWQKUMOkPk_Hk8cpow,48872
+ pygpt_net/data/config/modes.json,sha256=mLcLNI0DmIiBQdVVLBiqvl-vNcv1_gjiJKb98ipCOIU,1923
  pygpt_net/data/config/presets/agent_openai.json,sha256=vMTR-soRBiEZrpJJHuFLWyx8a3Ez_BqtqjyXgxCAM_Q,733
 pygpt_net/data/config/presets/agent_openai_assistant.json,sha256=awJw9lNTGpKML6SJUShVn7lv8AXh0oic7wBeyoN7AYs,798
  pygpt_net/data/config/presets/agent_planner.json,sha256=a6Rv58Bnm2STNWB0Rw_dGhnsz6Lb3J8_GwsUVZaTIXc,742
@@ -1666,7 +1666,7 @@ pygpt_net/item/preset.py,sha256=413IdBxlY7zA895dEPNJMI6OJVRTQu0vlacEb0CY8EU,5430
 pygpt_net/item/prompt.py,sha256=oX3BA9n2E6fco2dMZu7DiO3GQgqPj3isFjPcTcFXw9s,1558
 pygpt_net/js.qrc,sha256=OqPzGN6U2Y-uENLFlfDY2BxywCAnU0uds4QcbB7me5Q,542
 pygpt_net/js_rc.py,sha256=5f7l2zJIzW-gHHndytWVXz2sjKyR924GCpOSmDX9sZI,2456868
- pygpt_net/launcher.py,sha256=BMm3c_cDYU-CFrFTDh1bEAvxiWiRWvqGjfBwGsaENk0,8978
+ pygpt_net/launcher.py,sha256=c5GUAn74IkR4x7odIxonz1mne2vq_bl_ROOIbKjW8qg,9354
 pygpt_net/migrations/Version20231227152900.py,sha256=1Rw1mK2mVQs0B2HrbxHICu1Pd1X5jg4yZIrytnR5N5Y,2849
 pygpt_net/migrations/Version20231230095000.py,sha256=A1_e9oC_E4LSo9uBFiiI2dKH7N-SERFp7DMX1R_8LXQ,906
  pygpt_net/migrations/Version20231231230000.py,sha256=SICzfCBpm32P_YMlVIW1LRumEvPbuI2cb9eKsHpcBqg,901
@@ -2155,8 +2155,8 @@ pygpt_net/ui/widget/textarea/web.py,sha256=KIW8MnwDWjEAMdiLA2v1yZiFbf-PT4KkF55uh
 pygpt_net/ui/widget/vision/__init__.py,sha256=8HT4tQFqQogEEpGYTv2RplKBthlsFKcl5egnv4lzzEw,488
 pygpt_net/ui/widget/vision/camera.py,sha256=T8b5cmK6uhf_WSSxzPt_Qod8JgMnst6q8sQqRvgQiSA,2584
 pygpt_net/utils.py,sha256=YhMvgy0wNt3roHIbbAnS-5SXOxOOIIvRRGd6FPTa6d0,6153
- pygpt_net-2.4.35.dist-info/LICENSE,sha256=GLKQTnJOPK4dDIWfkAIM4GwOxKJXi5zcMGt7FjLR1xk,1126
- pygpt_net-2.4.35.dist-info/METADATA,sha256=M-ZJYqGkzPWWZej8vspCilb2dwv-b9Ogz3n6_nu7HTQ,166676
- pygpt_net-2.4.35.dist-info/WHEEL,sha256=FMvqSimYX_P7y0a7UY-_Mc83r5zkBZsCYPm7Lr0Bsq4,88
- pygpt_net-2.4.35.dist-info/entry_points.txt,sha256=qvpII6UHIt8XfokmQWnCYQrTgty8FeJ9hJvOuUFCN-8,43
- pygpt_net-2.4.35.dist-info/RECORD,,
+ pygpt_net-2.4.36.dist-info/LICENSE,sha256=GLKQTnJOPK4dDIWfkAIM4GwOxKJXi5zcMGt7FjLR1xk,1126
+ pygpt_net-2.4.36.dist-info/METADATA,sha256=rRfnfCeXnDXMX7rcw7-2ej5de2YpomfJf0js0IUk9po,166967
+ pygpt_net-2.4.36.dist-info/WHEEL,sha256=FMvqSimYX_P7y0a7UY-_Mc83r5zkBZsCYPm7Lr0Bsq4,88
+ pygpt_net-2.4.36.dist-info/entry_points.txt,sha256=qvpII6UHIt8XfokmQWnCYQrTgty8FeJ9hJvOuUFCN-8,43
+ pygpt_net-2.4.36.dist-info/RECORD,,