pygpt-net 2.4.57__py3-none-any.whl → 2.5.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
CHANGELOG.md CHANGED
@@ -1,5 +1,15 @@
 # CHANGELOG

+## 2.5.0 (2025-01-31)
+
+- Added provider for DeepSeek (in Chat with Files mode, beta).
+- Added new models: OpenAI o1, Llama 3.3, DeepSeek V3 and R1 (API + local, with Ollama).
+- Added tool calls for OpenAI o1.
+- Added native vision for OpenAI o1.
+- Fix: tool calls in Ollama provider.
+- Fix: error handling in stream mode.
+- Fix: added check for active plugin tools before tool call.
+
 ## 2.4.57 (2025-01-19)

 - Logging fix.
README.md CHANGED
@@ -2,7 +2,7 @@

 [![pygpt](https://snapcraft.io/pygpt/badge.svg)](https://snapcraft.io/pygpt)

-Release: **2.4.57** | build: **2025.01.19** | Python: **>=3.10, <3.13**
+Release: **2.5.0** | build: **2025.01.31** | Python: **>=3.10, <3.13**

 > Official website: https://pygpt.net | Documentation: https://pygpt.readthedocs.io
 >
@@ -14,7 +14,7 @@ Release: **2.4.57** | build: **2025.01.19** | Python: **>=3.10, <3.13**

 ## Overview

-**PyGPT** is an **all-in-one** Desktop AI Assistant that provides direct interaction with OpenAI language models, including `o1`, `gpt-4o`, `gpt-4`, `gpt-4 Vision`, and `gpt-3.5`, through the `OpenAI API`. By utilizing `LangChain` and `LlamaIndex`, the application also supports alternative LLMs, like those available on `HuggingFace`, locally available models (like `Llama 3`, `Mistral` or `Bielik`), `Google Gemini` and `Anthropic Claude`.
+**PyGPT** is an **all-in-one** Desktop AI Assistant that provides direct interaction with OpenAI language models, including `o1`, `gpt-4o`, `gpt-4`, `gpt-4 Vision`, and `gpt-3.5`, through the `OpenAI API`. By utilizing `LangChain` and `LlamaIndex`, the application also supports alternative LLMs, like those available on `HuggingFace`, locally available models (like `Llama 3`, `Mistral`, `DeepSeek V3/R1` or `Bielik`), `Google Gemini` and `Anthropic Claude`.

 This assistant offers multiple modes of operation such as chat, assistants, completions, and image-related tasks using `DALL-E 3` for generation and `gpt-4 Vision` for image analysis. **PyGPT** has filesystem capabilities for file I/O, can generate and run Python code, execute system commands, execute custom commands and manage file transfers. It also allows models to perform web searches with `Google` and `Microsoft Bing`.

@@ -39,7 +39,7 @@ You can download compiled 64-bit versions for Windows and Linux here: https://py
 - Desktop AI Assistant for `Linux`, `Windows` and `Mac`, written in Python.
 - Works similarly to `ChatGPT`, but locally (on a desktop computer).
 - 11 modes of operation: Chat, Vision, Completion, Assistant, Image generation, LangChain, Chat with Files, Chat with Audio, Experts, Autonomous Mode and Agents.
-- Supports multiple models: `o1`, `GPT-4o`, `GPT-4`, `GPT-3.5`, and any model accessible through `LangChain`, `LlamaIndex` and `Ollama`, such as `Llama 3`, `Mistral`, `Google Gemini`, `Anthropic Claude`, `Bielik`, etc.
+- Supports multiple models: `o1`, `GPT-4o`, `GPT-4`, `GPT-3.5`, and any model accessible through `LangChain`, `LlamaIndex` and `Ollama`, such as `Llama 3`, `Mistral`, `Google Gemini`, `Anthropic Claude`, `DeepSeek V3/R1`, `Bielik`, etc.
 - Chat with your own Files: integrated `LlamaIndex` support: chat with data such as `txt`, `pdf`, `csv`, `html`, `md`, `docx`, `json`, `epub`, `xlsx`, `xml`, webpages, `Google`, `GitHub`, video/audio, images and other data types, or use conversation history as additional context provided to the model.
 - Built-in vector databases support and automated files and data embedding.
 - Included support features for individuals with disabilities: customizable keyboard shortcuts, voice control, and translation of on-screen actions into audio via speech synthesis.
@@ -3960,6 +3960,16 @@ may consume additional tokens that are not displayed in the main window.

 ## Recent changes:

+**2.5.0 (2025-01-31)**
+
+- Added provider for DeepSeek (in Chat with Files mode, beta).
+- Added new models: OpenAI o1, Llama 3.3, DeepSeek V3 and R1 (API + local, with Ollama).
+- Added tool calls for OpenAI o1.
+- Added native vision for OpenAI o1.
+- Fix: tool calls in Ollama provider.
+- Fix: error handling in stream mode.
+- Fix: added check for active plugin tools before tool call.
+
 **2.4.57 (2025-01-19)**

 - Logging fix.
pygpt_net/CHANGELOG.txt CHANGED
@@ -1,3 +1,13 @@
+2.5.0 (2025-01-31)
+
+- Added provider for DeepSeek (in Chat with Files mode, beta).
+- Added new models: OpenAI o1, Llama 3.3, DeepSeek V3 and R1 (API + local, with Ollama).
+- Added tool calls for OpenAI o1.
+- Added native vision for OpenAI o1.
+- Fix: tool calls in Ollama provider.
+- Fix: error handling in stream mode.
+- Fix: added check for active plugin tools before tool call.
+
 2.4.57 (2025-01-19)

 - Logging fix.
pygpt_net/__init__.py CHANGED
@@ -6,15 +6,15 @@
 # GitHub: https://github.com/szczyglis-dev/py-gpt #
 # MIT License #
 # Created By : Marcin Szczygliński #
-# Updated Date: 2025.01.19 16:00:00 #
+# Updated Date: 2025.01.31 22:00:00 #
 # ================================================== #

 __author__ = "Marcin Szczygliński"
 __copyright__ = "Copyright 2025, Marcin Szczygliński"
 __credits__ = ["Marcin Szczygliński"]
 __license__ = "MIT"
-__version__ = "2.4.57"
-__build__ = "2025.01.19"
+__version__ = "2.5.0"
+__build__ = "2025.01.31"
 __maintainer__ = "Marcin Szczygliński"
 __github__ = "https://github.com/szczyglis-dev/py-gpt"
 __report__ = "https://github.com/szczyglis-dev/py-gpt/issues"
pygpt_net/app.py CHANGED
@@ -6,7 +6,7 @@
 # GitHub: https://github.com/szczyglis-dev/py-gpt #
 # MIT License #
 # Created By : Marcin Szczygliński #
-# Updated Date: 2024.12.15 04:00:00 #
+# Updated Date: 2025.01.31 19:00:00 #
 # ================================================== #

 from pygpt_net.launcher import Launcher
@@ -43,6 +43,7 @@ from pygpt_net.provider.agents.react import ReactAgent
 # LLM wrapper providers (langchain, llama-index, embeddings)
 from pygpt_net.provider.llms.anthropic import AnthropicLLM
 from pygpt_net.provider.llms.azure_openai import AzureOpenAILLM
+from pygpt_net.provider.llms.deepseek_api import DeepseekApiLLM
 from pygpt_net.provider.llms.google import GoogleLLM
 from pygpt_net.provider.llms.hugging_face import HuggingFaceLLM
 from pygpt_net.provider.llms.hugging_face_api import HuggingFaceApiLLM
@@ -339,6 +340,7 @@ def run(**kwargs):
     launcher.add_llm(HuggingFaceApiLLM())
     launcher.add_llm(LocalLLM())
     launcher.add_llm(OllamaLLM())
+    launcher.add_llm(DeepseekApiLLM())

     # register custom langchain and llama-index LLMs
     llms = kwargs.get('llms', None)
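
Note: `DeepseekApiLLM` is wired in exactly like the other built-in wrappers, and `run(**kwargs)` also accepts an `llms` list for custom providers. A minimal sketch of that extension point follows; only `run()` reading the `llms` kwarg and `launcher.add_llm()` are visible in this diff, so the wrapper interface below is an assumption, not the real base class.

```python
# Hypothetical sketch: passing a custom LLM wrapper into run().
# The wrapper interface is assumed for illustration only.
from pygpt_net.app import run

class MyCustomLLM:
    """Stand-in for a provider wrapper such as DeepseekApiLLM."""
    id = "my_custom"  # provider id referenced by model configs (assumed)

if __name__ == "__main__":
    run(llms=[MyCustomLLM()])  # registered alongside the built-ins
```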
@@ -6,7 +6,7 @@
 # GitHub: https://github.com/szczyglis-dev/py-gpt #
 # MIT License #
 # Created By : Marcin Szczygliński #
-# Updated Date: 2024.12.14 00:00:00 #
+# Updated Date: 2025.01.31 19:00:00 #
 # ================================================== #

 from typing import Any
@@ -14,7 +14,7 @@ from typing import Any
 from pygpt_net.core.types import (
     MODE_AGENT,
 )
-from pygpt_net.core.events import KernelEvent, RenderEvent
+from pygpt_net.core.events import KernelEvent, RenderEvent, Event
 from pygpt_net.core.bridge import BridgeContext
 from pygpt_net.core.ctx.reply import ReplyContext
 from pygpt_net.item.ctx import CtxItem
@@ -44,7 +44,17 @@ class Command:
         cmds = ctx.cmds_before  # from llama index tool calls pre-handler
         if not cmds:  # if no commands in context (from llama index tool calls)
             cmds = self.window.core.command.extract_cmds(ctx.output)
+
         if len(cmds) > 0:
+            # check if commands are enabled, leave only enabled commands
+            for cmd in cmds:
+                cmd_id = str(cmd["cmd"])
+                if not self.window.core.command.is_enabled(cmd_id):
+                    self.log("Command not allowed: " + cmd_id)
+                    cmds.remove(cmd)  # remove command from execution list
+            if len(cmds) == 0:
+                return  # abort if no commands
+
             ctx.cmds = cmds  # append commands to ctx
             self.log("Command call received...")
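One caveat in the filter above: `cmds.remove(cmd)` inside `for cmd in cmds` mutates the list being iterated, and Python's list iterator then skips the element that shifts into the removed slot, so two consecutive disallowed commands may not both be dropped. A rebuild-style filter avoids this; a minimal sketch, with `is_enabled` and `log` standing in for `self.window.core.command.is_enabled` and `self.log`:

```python
# Sketch: filter commands without mutating the list under iteration.
def filter_enabled(cmds: list, is_enabled, log) -> list:
    allowed = []
    for cmd in cmds:
        cmd_id = str(cmd["cmd"])
        if is_enabled(cmd_id):
            allowed.append(cmd)
        else:
            log("Command not allowed: " + cmd_id)  # skipped, not executed
    return allowed

# usage: cmds = filter_enabled(cmds, is_enabled, print)
```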
@@ -6,7 +6,7 @@
 # GitHub: https://github.com/szczyglis-dev/py-gpt #
 # MIT License #
 # Created By : Marcin Szczygliński #
-# Updated Date: 2024.12.14 00:00:00 #
+# Updated Date: 2025.01.31 22:00:00 #
 # ================================================== #

 from typing import Any
@@ -35,6 +35,7 @@ class Stream:
         output = ""
         output_tokens = 0
         begin = True
+        error = None

         # chunks: stream begin
         data = {
@@ -56,6 +57,9 @@
             if self.window.controller.kernel.stopped():
                 break

+            if error is not None:
+                break  # break if error
+
             response = None
             chunk_type = "raw"
             if (hasattr(chunk, 'choices')
@@ -159,6 +163,7 @@

             except Exception as e:
                 self.window.core.debug.log(e)
+                error = e

         self.window.controller.ui.update_tokens()  # update UI tokens

@@ -177,6 +182,9 @@
         # log
         self.log("[chat] Stream end.")

+        if error is not None:
+            raise error  # raise error if any, to display in UI
+
     def log(self, data: Any):
         """
         Log data to debug
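
The stream fix above follows a capture-and-defer pattern: an exception inside the chunk loop is logged and stored, the loop exits on the next iteration, cleanup (token updates, stream-end events) still runs, and only then is the error re-raised for the UI. A minimal standalone sketch of the same pattern, with hypothetical stand-in functions:

```python
def process_chunk(chunk):          # stand-in for real chunk handling
    if chunk == "bad":
        raise ValueError("broken chunk")
    print("got:", chunk)

def finalize_ui():                 # stand-in for token/UI cleanup
    print("stream end, UI updated")

error = None
for chunk in ["a", "bad", "b"]:
    if error is not None:
        break                      # stop consuming once a chunk failed
    try:
        process_chunk(chunk)
    except Exception as e:
        error = e                  # remember, but let cleanup run first

finalize_ui()                      # always runs, even after a failure

if error is not None:
    raise error                    # surface the failure afterwards
```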
@@ -161,6 +161,7 @@ class Text:
         # build final prompt (+plugins)
         sys_prompt = self.window.core.prompt.prepare_sys_prompt(
             mode=mode,
+            model=model_data,
             sys_prompt=sys_prompt,
             ctx=ctx,
             reply=reply,
@@ -6,7 +6,7 @@
 # GitHub: https://github.com/szczyglis-dev/py-gpt #
 # MIT License #
 # Created By : Marcin Szczygliński #
-# Updated Date: 2024.12.14 08:00:00 #
+# Updated Date: 2025.01.31 22:00:00 #
 # ================================================== #

 import copy
@@ -23,6 +23,7 @@ from pygpt_net.core.types import (
 )
 from pygpt_net.core.events import Event
 from pygpt_net.item.ctx import CtxItem
+from pygpt_net.item.model import ModelItem


 class Command:
@@ -590,3 +591,69 @@ class Command:
         if self.window.controller.agent.legacy.enabled() or self.window.controller.agent.experts.enabled():
             return False
         return self.window.core.config.get('func_call.native', False)  # otherwise check config
+
+    def is_enabled(self, cmd: str) -> bool:
+        """
+        Check if command is enabled
+
+        :param cmd: command
+        :return: True if command is enabled
+        """
+        enabled_cmds = []
+        data = {
+            'prompt': "",
+            'silent': True,
+            'syntax': [],
+            'cmd': [],
+        }
+        event = Event(Event.CMD_SYNTAX, data)
+        self.window.dispatch(event)
+        if (event.data and "cmd" in event.data
+                and isinstance(event.data["cmd"], list)):
+            for item in event.data["cmd"]:
+                if "cmd" in item:
+                    enabled_cmds.append(item["cmd"])
+        data = {
+            'prompt': "",
+            'silent': True,
+            'syntax': [],
+            'cmd': [],
+        }
+        event = Event(Event.CMD_SYNTAX_INLINE, data)
+        self.window.dispatch(event)
+        if (event.data and "cmd" in event.data
+                and isinstance(event.data["cmd"], list)):
+            for item in event.data["cmd"]:
+                if "cmd" in item:
+                    enabled_cmds.append(item["cmd"])
+        if cmd in enabled_cmds:
+            return True
+        return False
+
+    def is_model_supports_tools(
+            self,
+            mode: str,
+            model: ModelItem = None) -> bool:
+        """
+        Check if model supports tools
+
+        :param mode: mode
+        :param model: model item
+        :return: True if model supports tools
+        """
+        return True  # TMP allowed all
+        if model is None:
+            return False
+        disabled_models = [
+            "deepseek-r1:1.5b",
+            "deepseek-r1:7b",
+            "llama2",
+            "llama3.1",
+            "codellama",
+        ]
+        if model.id is not None:
+            for disabled_model in disabled_models:
+                if (model.llama_index['provider'] == "ollama"
+                        and model.id.startswith(disabled_model)):
+                    return False
+        return True
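
Note that `is_model_supports_tools()` currently returns `True` unconditionally: the `# TMP allowed all` early return makes the Ollama deny-list below it dead code in 2.5.0, presumably kept for a later release. If that guard were activated, its effect would be roughly as in this simplified standalone sketch (field access flattened into plain arguments):

```python
# Simplified sketch of the (currently disabled) per-model tool gate.
DISABLED_OLLAMA_MODELS = ["deepseek-r1:1.5b", "deepseek-r1:7b",
                          "llama2", "llama3.1", "codellama"]

def supports_tools(provider: str, model_id: str) -> bool:
    # deny only specific Ollama-served models; everything else passes
    if provider == "ollama":
        return not any(model_id.startswith(m) for m in DISABLED_OLLAMA_MODELS)
    return True

assert supports_tools("ollama", "deepseek-r1:7b-q4") is False
assert supports_tools("openai", "o1") is True
```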
@@ -358,6 +358,7 @@ class Experts:
         sys_prompt = event.data['value']
         sys_prompt = self.window.core.prompt.prepare_sys_prompt(
             mode,
+            model_data,
             sys_prompt,
             ctx,
             reply,
@@ -6,7 +6,7 @@
 # GitHub: https://github.com/szczyglis-dev/py-gpt #
 # MIT License #
 # Created By : Marcin Szczygliński #
-# Updated Date: 2025.01.16 01:00:00 #
+# Updated Date: 2025.01.31 19:00:00 #
 # ================================================== #

 import json
@@ -37,6 +37,11 @@ class Chat:
         self.window = window
         self.storage = storage
         self.context = Context(window)
+        self.tool_calls_not_allowed_providers = [
+            "ollama",
+            "hugging_face_api",
+            "deepseek_api",
+        ]

     def call(
             self,
@@ -231,6 +236,10 @@ class Chat:
         chat_mode = self.window.core.config.get("llama.idx.chat.mode")
         use_index = True
         verbose = self.window.core.config.get("log.llama", False)
+        allow_native_tool_calls = True
+        if ('provider' in model.llama_index
+                and model.llama_index['provider'] in self.tool_calls_not_allowed_providers):
+            allow_native_tool_calls = False

         if idx is None or idx == "_":
             chat_mode = "simple"  # do not use query engine if no index
@@ -254,7 +263,7 @@
         else:
             llm = self.window.core.idx.llm.get(model)

-        # if multimodal support, try to get multimodal provider
+        # TODO: if multimodal support, try to get multimodal provider
         # if model.is_multimodal():
         #     llm = self.window.core.idx.llm.get(model, multimodal=True)  # get multimodal LLM model

@@ -272,7 +281,7 @@
         )

         if use_index:
-            # CMD: commands are applied to system prompt here
+            # TOOLS: commands are applied to system prompt here
             # index as query engine
             chat_engine = index.as_chat_engine(
                 llm=llm,
@@ -286,7 +295,7 @@
             else:
                 response = chat_engine.chat(query)
         else:
-            # CMD: commands are applied to system prompt here
+            # TOOLS: commands are applied to system prompt here
             # prepare tools (native calls if enabled)
             tools = self.window.core.agents.tools.prepare(context, extra)

@@ -294,7 +303,8 @@
             history.insert(0, self.context.add_system(system_prompt))
             history.append(self.context.add_user(query))
             if stream:
-                if hasattr(llm, "stream_chat_with_tools"):
+                # IMPORTANT: stream chat with tools not supported by all providers
+                if allow_native_tool_calls and hasattr(llm, "stream_chat_with_tools"):
                     response = llm.stream_chat_with_tools(
                         tools=tools,
                         messages=history,
@@ -304,7 +314,8 @@
                         messages=history,
                     )
             else:
-                if hasattr(llm, "chat_with_tools"):
+                # IMPORTANT: stream chat with tools not supported by all providers
+                if allow_native_tool_calls and hasattr(llm, "chat_with_tools"):
                     response = llm.chat_with_tools(
                         tools=tools,
                         messages=history,
@@ -336,16 +347,17 @@
             ctx.set_output(output, "")
             ctx.add_doc_meta(self.get_metadata(response.source_nodes))  # store metadata
         else:
-            # from LLM directly
+            # from LLM directly, no index
             if stream:
-                # tools handled in stream output controller
+                # tools are handled in stream output controller
                 ctx.stream = response  # chunk is in response.delta
                 ctx.input_tokens = input_tokens
                 ctx.set_output("", "")
             else:
                 # unpack tool calls
                 tool_calls = llm.get_tool_calls_from_response(
-                    response, error_on_no_tool_call=False
+                    response,
+                    error_on_no_tool_call=False,
                 )
                 ctx.tool_calls = self.window.core.command.unpack_tool_calls_from_llama(tool_calls)
                 output = response.message.content
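
The gate above layers a provider deny-list on top of feature detection: `hasattr(llm, "stream_chat_with_tools")` alone is not enough, because a LlamaIndex wrapper can expose the method even when the backing service (per this deny-list: Ollama, the HuggingFace Inference API, DeepSeek) does not handle native tool calls reliably. A condensed sketch of the resulting dispatch, with `llm`, `history`, and `tools` as stand-ins and the plain `chat`/`stream_chat` fallbacks assumed from context:

```python
# Condensed sketch of the gated tool-call dispatch (names are stand-ins).
TOOL_CALLS_NOT_ALLOWED = {"ollama", "hugging_face_api", "deepseek_api"}

def dispatch(llm, provider: str, history: list, tools: list, stream: bool):
    allow_native = provider not in TOOL_CALLS_NOT_ALLOWED
    if stream:
        if allow_native and hasattr(llm, "stream_chat_with_tools"):
            return llm.stream_chat_with_tools(tools=tools, messages=history)
        return llm.stream_chat(messages=history)   # fallback: prompt-based tools
    if allow_native and hasattr(llm, "chat_with_tools"):
        return llm.chat_with_tools(tools=tools, messages=history)
    return llm.chat(messages=history)
```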
@@ -6,7 +6,7 @@
 # GitHub: https://github.com/szczyglis-dev/py-gpt #
 # MIT License #
 # Created By : Marcin Szczygliński #
-# Updated Date: 2024.12.14 08:00:00 #
+# Updated Date: 2025.01.31 19:00:00 #
 # ================================================== #

 from pygpt_net.core.events import Event
@@ -15,6 +15,7 @@ from pygpt_net.item.ctx import CtxItem

 from .custom import Custom
 from .template import Template
+from pygpt_net.item.model import ModelItem


 class Prompt:
@@ -40,11 +41,18 @@ class Prompt:
             return str(self.window.core.config.get(key))
         return ""

-    def build_final_system_prompt(self, prompt: str) -> str:
+    def build_final_system_prompt(
+            self,
+            prompt: str,
+            mode: str,
+            model: ModelItem = None
+    ) -> str:
         """
         Build final system prompt

         :param prompt: prompt
+        :param mode: mode
+        :param model: model item
         :return: final system prompt
         """
         # tmp dispatch event: system prompt
@@ -63,6 +71,10 @@
         if self.window.core.command.is_native_enabled():
             return prompt

+        # abort if model not supported
+        if not self.window.core.command.is_model_supports_tools(mode, model):
+            return prompt
+
         # cmd syntax tokens
         data = {
             'prompt': prompt,
@@ -71,24 +83,28 @@
             'cmd': [],
         }

+        # IMPORTANT: append command syntax only if at least one command is detected
         # tmp dispatch event: command syntax apply
         # full execute cmd syntax
         if self.window.core.config.get('cmd'):
             event = Event(Event.CMD_SYNTAX, data)
             self.window.dispatch(event)
-            prompt = self.window.core.command.append_syntax(event.data)
+            if event.data and "cmd" in event.data and event.data["cmd"]:
+                prompt = self.window.core.command.append_syntax(event.data)

         # inline cmd syntax only
         elif self.window.controller.plugins.is_type_enabled("cmd.inline"):
             event = Event(Event.CMD_SYNTAX_INLINE, data)
             self.window.dispatch(event)
-            prompt = self.window.core.command.append_syntax(event.data)
+            if event.data and "cmd" in event.data and event.data["cmd"]:
+                prompt = self.window.core.command.append_syntax(event.data)

         return prompt

     def prepare_sys_prompt(
             self,
             mode: str,
+            model: ModelItem,
             sys_prompt: str,
             ctx: CtxItem,
             reply: bool,
@@ -100,6 +116,7 @@
         Prepare system prompt

         :param mode: mode
+        :param model: model item
         :param sys_prompt: system prompt
         :param ctx: context item
         :param reply: reply from plugins
@@ -134,6 +151,10 @@
         if self.window.core.command.is_native_enabled() and not disable_native_tool_calls:
             return sys_prompt  # abort if native func call enabled

+        # abort if model not supported
+        if not self.window.core.command.is_model_supports_tools(mode, model):
+            return sys_prompt
+
         data = {
             'mode': mode,
             'prompt': sys_prompt,
@@ -141,16 +162,19 @@
             'cmd': [],
             'is_expert': is_expert,
         }
+        # IMPORTANT: append command syntax only if at least one command is detected
         # full execute cmd syntax
         if self.window.core.config.get('cmd'):
             event = Event(Event.CMD_SYNTAX, data)
             self.window.dispatch(event)
-            sys_prompt = self.window.core.command.append_syntax(event.data)
+            if event.data and "cmd" in event.data and event.data["cmd"]:
+                sys_prompt = self.window.core.command.append_syntax(event.data)

         # inline cmd syntax only
         elif self.window.controller.plugins.is_type_enabled("cmd.inline"):
             event = Event(Event.CMD_SYNTAX_INLINE, data)
             self.window.dispatch(event)
-            sys_prompt = self.window.core.command.append_syntax(event.data)
+            if event.data and "cmd" in event.data and event.data["cmd"]:
+                sys_prompt = self.window.core.command.append_syntax(event.data)

         return sys_prompt
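
Both `build_final_system_prompt()` and `prepare_sys_prompt()` gained a `model` parameter in this release (optional with a `None` default in the former, required in the latter), and the Text, Experts, and Tokens hunks show the matching call-site updates. Any external code calling these methods needs the same migration; an illustrative fragment (not runnable on its own, names as in the hunks above):

```python
# Call-site migration for the new prepare_sys_prompt() signature.

# before (2.4.57):
# sys_prompt = window.core.prompt.prepare_sys_prompt(
#     mode=mode, sys_prompt=sys_prompt, ctx=ctx, reply=reply, ...)

# after (2.5.0): pass the ModelItem so per-model tool support can be checked
sys_prompt = window.core.prompt.prepare_sys_prompt(
    mode=mode,
    model=model_data,   # ModelItem for the active model
    sys_prompt=sys_prompt,
    ctx=ctx,
    reply=reply,
)
```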
@@ -6,7 +6,7 @@
 # GitHub: https://github.com/szczyglis-dev/py-gpt #
 # MIT License #
 # Created By : Marcin Szczygliński #
-# Updated Date: 2024.12.14 22:00:00 #
+# Updated Date: 2025.01.31 22:00:00 #
 # ================================================== #

 from typing import Tuple, List
@@ -320,7 +320,10 @@
             sum_tokens, max_current, threshold)
         """
         model = self.window.core.config.get('model')
-        model_id = self.window.core.models.get_id(model)
+        model_id = ""
+        model_data = self.window.core.models.get(model)
+        if model_data is not None:
+            model_id = model_data.id
         mode = self.window.core.config.get('mode')
         user_name = self.window.core.config.get('user_name')
         ai_name = self.window.core.config.get('ai_name')
@@ -334,7 +337,7 @@
         if mode in CHAT_MODES:
             # system prompt (without extra tokens)
             system_prompt = str(self.window.core.config.get('prompt')).strip()
-            system_prompt = self.window.core.prompt.build_final_system_prompt(system_prompt)  # add addons
+            system_prompt = self.window.core.prompt.build_final_system_prompt(system_prompt, mode, model_data)  # add addons

             if system_prompt is not None and system_prompt != "":
                 system_tokens = self.from_prompt(system_prompt, "", model_id)
@@ -347,7 +350,7 @@
         elif mode == MODE_COMPLETION:
             # system prompt (without extra tokens)
             system_prompt = str(self.window.core.config.get('prompt')).strip()
-            system_prompt = self.window.core.prompt.build_final_system_prompt(system_prompt)  # add addons
+            system_prompt = self.window.core.prompt.build_final_system_prompt(system_prompt, mode, model_data)  # add addons
             system_tokens = self.from_text(system_prompt, model_id)

             # input prompt
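
The direct `get_id()` call is replaced by a None-guarded lookup, so a stale or unknown model name in the config degrades to an empty `model_id` while counting tokens instead of failing. The same defensive idiom in isolation, with a toy registry standing in for `window.core.models`:

```python
# Minimal illustration of the None-safe lookup idiom used above.
models = {"gpt-4o": type("M", (), {"id": "gpt-4o"})()}  # toy registry

def resolve_model_id(name: str) -> str:
    model_data = models.get(name)           # may be None for unknown names
    return model_data.id if model_data is not None else ""

assert resolve_model_id("gpt-4o") == "gpt-4o"
assert resolve_model_id("missing") == ""    # degrades instead of raising
```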
@@ -6,7 +6,7 @@
 # GitHub: https://github.com/szczyglis-dev/py-gpt #
 # MIT License #
 # Created By : Marcin Szczygliński #
-# Updated Date: 2024.12.14 08:00:00 #
+# Updated Date: 2025.01.31 19:00:00 #
 # ================================================== #

 import copy
@@ -286,6 +286,18 @@ class Updater:
         newest_version = data_json["version"]
         newest_build = data_json["build"]

+        # check correct version for Microsoft Store, Snap Store, etc.
+        if self.window.core.platforms.is_windows():
+            if "version_windows" in data_json:
+                newest_version = data_json["version_windows"]
+            if "build_windows" in data_json:
+                newest_build = data_json["build_windows"]
+        elif self.window.core.platforms.is_snap():
+            if "version_snap" in data_json:
+                newest_version = data_json["version_snap"]
+            if "build_snap" in data_json:
+                newest_build = data_json["build_snap"]
+
         # changelog, download links
         changelog = ""
         download_windows = ""
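
With this change the update manifest can advertise a different latest version per distribution channel, which matters because store builds (Microsoft Store, Snap) often lag the PyPI release. The platform-specific keys are optional and fall back to the generic `version`/`build`. An illustrative manifest, with key names taken from the hunk above and values made up for the example:

```json
{
  "version": "2.5.0",
  "build": "2025.01.31",
  "version_windows": "2.4.57",
  "build_windows": "2025.01.19",
  "version_snap": "2.4.57",
  "build_snap": "2025.01.19"
}
```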
@@ -1,8 +1,8 @@
 {
     "__meta__": {
-        "version": "2.4.57",
-        "app.version": "2.4.57",
-        "updated_at": "2025-01-19T00:00:00"
+        "version": "2.5.0",
+        "app.version": "2.5.0",
+        "updated_at": "2025-01-31T00:00:00"
     },
     "access.audio.event.speech": false,
     "access.audio.event.speech.disabled": [],
@@ -65,6 +65,7 @@
     "api_key_google": "",
     "api_key_anthropic": "",
     "api_key_hugging_face": "",
+    "api_key_deepseek": "",
     "api_proxy": "",
     "app.env": [],
     "assistant": "",