mito-ai 0.1.43__py3-none-any.whl → 0.1.45__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.


This version of mito-ai might be problematic.

Files changed (62)
  1. mito_ai/__init__.py +3 -3
  2. mito_ai/_version.py +1 -1
  3. mito_ai/anthropic_client.py +2 -3
  4. mito_ai/{app_builder → app_deploy}/__init__.py +1 -1
  5. mito_ai/app_deploy/app_deploy_utils.py +25 -0
  6. mito_ai/{app_builder → app_deploy}/handlers.py +48 -40
  7. mito_ai/{app_builder → app_deploy}/models.py +17 -14
  8. mito_ai/app_manager/handlers.py +33 -0
  9. mito_ai/app_manager/models.py +15 -1
  10. mito_ai/completions/handlers.py +40 -1
  11. mito_ai/completions/models.py +5 -1
  12. mito_ai/completions/prompt_builders/agent_system_message.py +6 -4
  13. mito_ai/completions/prompt_builders/prompt_constants.py +22 -4
  14. mito_ai/completions/providers.py +5 -11
  15. mito_ai/streamlit_conversion/streamlit_agent_handler.py +6 -3
  16. mito_ai/streamlit_conversion/streamlit_utils.py +15 -7
  17. mito_ai/streamlit_conversion/validate_streamlit_app.py +34 -25
  18. mito_ai/streamlit_preview/handlers.py +49 -70
  19. mito_ai/streamlit_preview/utils.py +41 -0
  20. mito_ai/tests/deploy_app/test_app_deploy_utils.py +71 -0
  21. mito_ai/tests/providers/test_anthropic_client.py +2 -2
  22. mito_ai/tests/streamlit_conversion/test_streamlit_agent_handler.py +0 -84
  23. mito_ai/tests/streamlit_conversion/test_validate_streamlit_app.py +0 -15
  24. mito_ai/tests/streamlit_preview/test_streamlit_preview_handler.py +88 -0
  25. mito_ai/tests/streamlit_preview/test_streamlit_preview_manager.py +4 -1
  26. mito_ai/tests/utils/test_anthropic_utils.py +4 -4
  27. mito_ai/utils/anthropic_utils.py +11 -19
  28. mito_ai/utils/telemetry_utils.py +15 -5
  29. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/build_log.json +100 -100
  30. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/package.json +2 -2
  31. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/schemas/mito_ai/package.json.orig +1 -1
  32. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/schemas/mito_ai/toolbar-buttons.json +0 -5
  33. mito_ai-0.1.43.data/data/share/jupyter/labextensions/mito_ai/static/lib_index_js.81703ac2bc645e5c2fc2.js → mito_ai-0.1.45.data/data/share/jupyter/labextensions/mito_ai/static/lib_index_js.0c3368195d954d2ed033.js +1729 -790
  34. mito_ai-0.1.45.data/data/share/jupyter/labextensions/mito_ai/static/lib_index_js.0c3368195d954d2ed033.js.map +1 -0
  35. mito_ai-0.1.43.data/data/share/jupyter/labextensions/mito_ai/static/remoteEntry.502aef26f0416fab7435.js → mito_ai-0.1.45.data/data/share/jupyter/labextensions/mito_ai/static/remoteEntry.684f82575fcc2e3b350c.js +17 -17
  36. mito_ai-0.1.43.data/data/share/jupyter/labextensions/mito_ai/static/remoteEntry.502aef26f0416fab7435.js.map → mito_ai-0.1.45.data/data/share/jupyter/labextensions/mito_ai/static/remoteEntry.684f82575fcc2e3b350c.js.map +1 -1
  37. {mito_ai-0.1.43.dist-info → mito_ai-0.1.45.dist-info}/METADATA +2 -2
  38. {mito_ai-0.1.43.dist-info → mito_ai-0.1.45.dist-info}/RECORD +61 -57
  39. mito_ai-0.1.43.data/data/share/jupyter/labextensions/mito_ai/static/lib_index_js.81703ac2bc645e5c2fc2.js.map +0 -1
  40. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/etc/jupyter/jupyter_server_config.d/mito_ai.json +0 -0
  41. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/node_modules_process_browser_js.4b128e94d31a81ebd209.js +0 -0
  42. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/node_modules_process_browser_js.4b128e94d31a81ebd209.js.map +0 -0
  43. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/style.js +0 -0
  44. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/style_index_js.5876024bb17dbd6a3ee6.js +0 -0
  45. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/style_index_js.5876024bb17dbd6a3ee6.js.map +0 -0
  46. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/vendors-node_modules_aws-amplify_auth_dist_esm_providers_cognito_apis_signOut_mjs-node_module-75790d.688c25857e7b81b1740f.js +0 -0
  47. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/vendors-node_modules_aws-amplify_auth_dist_esm_providers_cognito_apis_signOut_mjs-node_module-75790d.688c25857e7b81b1740f.js.map +0 -0
  48. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/vendors-node_modules_aws-amplify_auth_dist_esm_providers_cognito_tokenProvider_tokenProvider_-72f1c8.a917210f057fcfe224ad.js +0 -0
  49. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/vendors-node_modules_aws-amplify_auth_dist_esm_providers_cognito_tokenProvider_tokenProvider_-72f1c8.a917210f057fcfe224ad.js.map +0 -0
  50. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/vendors-node_modules_aws-amplify_dist_esm_index_mjs.6bac1a8c4cc93f15f6b7.js +0 -0
  51. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/vendors-node_modules_aws-amplify_dist_esm_index_mjs.6bac1a8c4cc93f15f6b7.js.map +0 -0
  52. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/vendors-node_modules_aws-amplify_ui-react_dist_esm_index_mjs.4fcecd65bef9e9847609.js +0 -0
  53. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/vendors-node_modules_aws-amplify_ui-react_dist_esm_index_mjs.4fcecd65bef9e9847609.js.map +0 -0
  54. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/vendors-node_modules_react-dom_client_js-node_modules_aws-amplify_ui-react_dist_styles_css.b43d4249e4d3dac9ad7b.js +0 -0
  55. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/vendors-node_modules_react-dom_client_js-node_modules_aws-amplify_ui-react_dist_styles_css.b43d4249e4d3dac9ad7b.js.map +0 -0
  56. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/vendors-node_modules_semver_index_js.3f6754ac5116d47de76b.js +0 -0
  57. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/vendors-node_modules_semver_index_js.3f6754ac5116d47de76b.js.map +0 -0
  58. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/vendors-node_modules_vscode-diff_dist_index_js.ea55f1f9346638aafbcf.js +0 -0
  59. {mito_ai-0.1.43.data → mito_ai-0.1.45.data}/data/share/jupyter/labextensions/mito_ai/static/vendors-node_modules_vscode-diff_dist_index_js.ea55f1f9346638aafbcf.js.map +0 -0
  60. {mito_ai-0.1.43.dist-info → mito_ai-0.1.45.dist-info}/WHEEL +0 -0
  61. {mito_ai-0.1.43.dist-info → mito_ai-0.1.45.dist-info}/entry_points.txt +0 -0
  62. {mito_ai-0.1.43.dist-info → mito_ai-0.1.45.dist-info}/licenses/LICENSE +0 -0
mito_ai/__init__.py CHANGED
@@ -5,7 +5,7 @@ from typing import List, Dict
 from jupyter_server.utils import url_path_join
 from mito_ai.completions.handlers import CompletionHandler
 from mito_ai.completions.providers import OpenAIProvider
-from mito_ai.app_builder.handlers import AppBuilderHandler
+from mito_ai.app_deploy.handlers import AppDeployHandler
 from mito_ai.streamlit_preview.handlers import StreamlitPreviewHandler
 from mito_ai.log.urls import get_log_urls
 from mito_ai.version_check import VersionCheckHandler
@@ -71,8 +71,8 @@ def _load_jupyter_server_extension(server_app) -> None: # type: ignore
             {"llm": open_ai_provider},
         ),
         (
-            url_path_join(base_url, "mito-ai", "app-builder"),
-            AppBuilderHandler,
+            url_path_join(base_url, "mito-ai", "app-deploy"),
+            AppDeployHandler,
             {}
         ),
         (
mito_ai/_version.py CHANGED
@@ -1,4 +1,4 @@
 # This file is auto-generated by Hatchling. As such, do not:
 # - modify
 # - track in version control e.g. be sure to add to .gitignore
-__version__ = VERSION = '0.1.43'
+__version__ = VERSION = '0.1.45'
mito_ai/anthropic_client.py CHANGED
@@ -52,12 +52,12 @@ def extract_and_parse_anthropic_json_response(response: Message) -> Union[object
 
 
 def get_anthropic_system_prompt_and_messages(messages: List[ChatCompletionMessageParam]) -> Tuple[
-    Union[str, anthropic.NotGiven], List[MessageParam]]:
+    Union[str, anthropic.Omit], List[MessageParam]]:
     """
     Convert a list of OpenAI messages to a list of Anthropic messages.
    """
 
-    system_prompt: Union[str, anthropic.NotGiven] = anthropic.NotGiven()
+    system_prompt: Union[str, anthropic.Omit] = anthropic.Omit()
     anthropic_messages: List[MessageParam] = []
 
     for message in messages:
@@ -206,7 +206,6 @@ class AnthropicClient:
                 stream=True
             )
 
-
            for chunk in stream:
                if chunk.type == "content_block_delta" and chunk.delta.type == "text_delta":
                    content = chunk.delta.text
mito_ai/{app_builder → app_deploy}/__init__.py CHANGED
@@ -3,4 +3,4 @@
 
 """App builder module for Mito AI."""
 
-from .handlers import AppBuilderHandler
+from .handlers import AppDeployHandler
mito_ai/app_deploy/app_deploy_utils.py ADDED
@@ -0,0 +1,25 @@
+# Copyright (c) Saga Inc.
+# Distributed under the terms of the GNU Affero General Public License v3.0 License.
+
+import os
+import zipfile
+import logging
+from typing import List, Optional
+
+def add_files_to_zip(zip_path: str, base_path: str, files_to_add: List[str], logger: Optional[logging.Logger] = None) -> None:
+    """Create a zip file at zip_path and add the selected files/folders."""
+    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zipf:
+        for rel_path in files_to_add:
+            abs_path = os.path.join(base_path, rel_path)
+
+            if os.path.isfile(abs_path):
+                zipf.write(abs_path, arcname=rel_path)
+            elif os.path.isdir(abs_path):
+                for root, _, files in os.walk(abs_path):
+                    for file in files:
+                        file_abs = os.path.join(root, file)
+                        arcname = os.path.relpath(file_abs, base_path)
+                        zipf.write(file_abs, arcname=arcname)
+            else:
+                if logger:
+                    logger.warning(f"Skipping missing file: {abs_path}")
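For orientation, here is a minimal sketch of how the new `add_files_to_zip` helper might be called (the deploy handler below does essentially this with a temporary zip). The directory layout and file names are illustrative only.

```python
# Hypothetical usage of the new add_files_to_zip helper; paths and file names are
# illustrative only, not taken from the package.
import logging
import os
import tempfile

from mito_ai.app_deploy.app_deploy_utils import add_files_to_zip

logger = logging.getLogger(__name__)

app_directory = "/path/to/notebook_dir"                   # directory containing app.py (example)
selected_files = ["app.py", "requirements.txt", "data"]   # files/folders the user chose to upload (example)

# Create a temporary .zip and add only the selected files/folders to it.
with tempfile.NamedTemporaryFile(suffix=".zip", delete=False) as tmp:
    zip_path = tmp.name
add_files_to_zip(zip_path, app_directory, selected_files, logger)
print(f"Created {zip_path} ({os.path.getsize(zip_path)} bytes)")
```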
mito_ai/{app_builder → app_deploy}/handlers.py CHANGED
@@ -4,27 +4,27 @@
 import os
 import time
 import logging
-from typing import Any, Union, Optional
-import zipfile
+from typing import Any, Union, List
 import tempfile
+from mito_ai.streamlit_conversion.streamlit_utils import get_app_path
 from mito_ai.utils.create import initialize_user
 from mito_ai.utils.version_utils import is_pro
 from mito_ai.utils.websocket_base import BaseWebSocketHandler
-from mito_ai.app_builder.models import (
-    BuildAppReply,
-    AppBuilderError,
-    BuildAppRequest,
+from mito_ai.app_deploy.app_deploy_utils import add_files_to_zip
+from mito_ai.app_deploy.models import (
+    DeployAppReply,
+    AppDeployError,
+    DeployAppRequest,
     ErrorMessage,
     MessageType
 )
-from mito_ai.streamlit_conversion.streamlit_agent_handler import streamlit_handler
 from mito_ai.logger import get_logger
 from mito_ai.constants import ACTIVE_STREAMLIT_BASE_URL
 import requests
 
 
-class AppBuilderHandler(BaseWebSocketHandler):
-    """Handler for app building requests."""
+class AppDeployHandler(BaseWebSocketHandler):
+    """Handler for app deploy requests."""
 
     def initialize(self) -> None:
         """Initialize the WebSocket handler."""
@@ -57,6 +57,7 @@ class AppBuilderHandler(BaseWebSocketHandler):
         Args:
             message: The message received on the WebSocket.
         """
+
         start = time.time()
 
         # Convert bytes to string if needed
@@ -73,13 +74,13 @@
             parsed_message = self.parse_message(message)
             message_type = parsed_message.get('type')
 
-            if message_type == MessageType.BUILD_APP.value:
+            if message_type == MessageType.DEPLOY_APP.value:
                 # Handle build app request
-                build_app_request = BuildAppRequest(**parsed_message)
-                await self._handle_build_app(build_app_request)
+                deploy_app_request = DeployAppRequest(**parsed_message)
+                await self._handle_deploy_app(deploy_app_request)
             else:
                 self.log.error(f"Unknown message type: {message_type}")
-                error = AppBuilderError(
+                error = AppDeployError(
                     error_type="InvalidRequest",
                     title=f"Unknown message type: {message_type}"
                 )
@@ -87,11 +88,11 @@
 
         except ValueError as e:
             self.log.error("Invalid app builder request", exc_info=e)
-            error = AppBuilderError.from_exception(e)
+            error = AppDeployError.from_exception(e)
             self.reply(ErrorMessage(**error.__dict__))
         except Exception as e:
             self.log.error("Error handling app builder message", exc_info=e)
-            error = AppBuilderError.from_exception(
+            error = AppDeployError.from_exception(
                 e,
                 hint="An error occurred while building the app. Please check the logs for details."
             )
@@ -100,7 +101,7 @@
         latency_ms = round((time.time() - start) * 1000)
         self.log.info(f"App builder handler processed in {latency_ms} ms.")
 
-    async def _handle_build_app(self, message: BuildAppRequest) -> None:
+    async def _handle_deploy_app(self, message: DeployAppRequest) -> None:
         """Handle a build app request.
 
         Args:
@@ -109,17 +110,18 @@
         message_id = message.message_id
         notebook_path = message.notebook_path
         jwt_token = message.jwt_token
-
+        files_to_upload = message.selected_files
+
         if not message_id:
             self.log.error("Missing message_id in request")
             return
 
         if not notebook_path:
-            error = AppBuilderError(
+            error = AppDeployError(
                 error_type="InvalidRequest",
                 title="Missing 'notebook_path' parameter"
             )
-            self.reply(BuildAppReply(
+            self.reply(DeployAppReply(
                 parent_id=message_id,
                 url="",
                 error=error
@@ -132,12 +134,12 @@
             is_valid = self._validate_jwt_token(jwt_token) if jwt_token else False
             if not is_valid or not jwt_token:
                 self.log.error("JWT token validation failed")
-                error = AppBuilderError(
+                error = AppDeployError(
                     error_type="Unauthorized",
                     title="Invalid authentication token",
                     hint="Please sign in again to deploy your app."
                 )
-                self.reply(BuildAppReply(
+                self.reply(DeployAppReply(
                     parent_id=message_id,
                     url="",
                     error=error
@@ -150,25 +152,34 @@
             notebook_path = str(notebook_path) if notebook_path else ""
 
             app_directory = os.path.dirname(notebook_path)
-            app_path = os.path.join(app_directory, "app.py")
-
-            if not os.path.exists(app_path):
-                success_flag, app_path_result, result_message = await streamlit_handler(notebook_path)
-                if not success_flag or app_path_result is None:
-                    raise Exception(result_message)
-
-            deploy_url = await self._deploy_app(app_directory, jwt_token)
+
+            # Check if the app.py file exists
+            app_path = get_app_path(app_directory)
+            if app_path is None:
+                error = AppDeployError(
+                    error_type="AppNotFound",
+                    title="App not found",
+                    hint="Please make sure the app.py file exists in the same directory as the notebook."
+                )
+                self.reply(DeployAppReply(
+                    parent_id=message_id,
+                    url="",
+                    error=error
+                ))
+
+            # Finally, deploy the app
+            deploy_url = await self._deploy_app(app_directory, files_to_upload, jwt_token)
 
             # Send the response
-            self.reply(BuildAppReply(
+            self.reply(DeployAppReply(
                 parent_id=message_id,
                 url=deploy_url
             ))
 
         except Exception as e:
             self.log.error(f"Error building app: {e}", exc_info=e)
-            error = AppBuilderError.from_exception(e)
-            self.reply(BuildAppReply(
+            error = AppDeployError.from_exception(e)
+            self.reply(DeployAppReply(
                 parent_id=message_id,
                 url="",
                 error=error
@@ -208,11 +219,12 @@
         return False
 
 
-    async def _deploy_app(self, app_path: str, jwt_token: str = '') -> str:
+    async def _deploy_app(self, app_path: str, files_to_upload:List[str], jwt_token: str = '') -> str:
         """Deploy the app using pre-signed URLs.
 
         Args:
             app_path: Path to the app file.
+            files_to_upload: Files the user selected to upload for the app to run
             jwt_token: JWT token for authentication (optional)
 
         Returns:
@@ -247,16 +259,12 @@
         # Step 2: Create a zip file of the app.
         temp_zip_path = None
         try:
-            # Create temp file and close it before writing to avoid file handle conflicts
-            with tempfile.NamedTemporaryFile(suffix='.zip', delete=False) as temp_zip:
+            # Create temp file
+            with tempfile.NamedTemporaryFile(suffix=".zip", delete=False) as temp_zip:
                 temp_zip_path = temp_zip.name
 
             self.log.info("Zipping application files...")
-            with zipfile.ZipFile(temp_zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
-                for root, _, files in os.walk(app_path):
-                    for file in files:
-                        file_path = os.path.join(root, file)
-                        zipf.write(file_path, arcname=os.path.relpath(file_path, app_path))
+            add_files_to_zip(temp_zip_path, app_path, files_to_upload, self.log)
 
             upload_response = await self._upload_app_to_s3(temp_zip_path, presigned_url)
         except Exception as e:
mito_ai/{app_builder → app_deploy}/models.py CHANGED
@@ -3,17 +3,17 @@
 
 from dataclasses import dataclass
 from enum import Enum
-from typing import Literal, Optional
+from typing import Literal, Optional, List
 
 
 class MessageType(str, Enum):
-    """Types of app builder messages."""
-    BUILD_APP = "build-app"
+    """Types of app deploy messages."""
+    DEPLOY_APP = "deploy-app"
 
 
 @dataclass(frozen=True)
-class AppBuilderError:
-    """Error information for app builder operations."""
+class AppDeployError:
+    """Error information for app deploy operations."""
 
     # Error type.
     error_type: str
@@ -28,7 +28,7 @@ class AppBuilderError:
     hint: Optional[str] = None
 
     @classmethod
-    def from_exception(cls, e: Exception, hint: Optional[str] = None) -> "AppBuilderError":
+    def from_exception(cls, e: Exception, hint: Optional[str] = None) -> "AppDeployError":
        """Create an error from an exception.
 
        Args:
@@ -47,7 +47,7 @@ class AppBuilderError:
 
 
 @dataclass(frozen=True)
-class ErrorMessage(AppBuilderError):
+class ErrorMessage(AppDeployError):
     """Error message."""
 
     # Message type.
@@ -55,25 +55,28 @@ class ErrorMessage(AppBuilderError):
 
 
 @dataclass(frozen=True)
-class BuildAppRequest:
-    """Request to build an app."""
+class DeployAppRequest:
+    """Request to deploy an app."""
 
     # Request type.
-    type: Literal["build-app"]
+    type: Literal["deploy-app"]
 
     # Message ID.
     message_id: str
 
     # Path to the app file.
     notebook_path: str
+
+    # Files to be uploaded for the app to run
+    selected_files: List[str]
 
     # JWT token for authorization.
     jwt_token: Optional[str] = None
 
 
 @dataclass(frozen=True)
-class BuildAppReply:
-    """Reply to a build app request."""
+class DeployAppReply:
+    """Reply to a deplpy app request."""
 
     # ID of the request message this is replying to.
     parent_id: str
@@ -82,7 +85,7 @@ class BuildAppReply:
     url: str
 
     # Optional error information.
-    error: Optional[AppBuilderError] = None
+    error: Optional[AppDeployError] = None
 
     # Type of reply.
-    type: Literal["build-app"] = "build-app"
+    type: Literal["deploy-app"] = "deploy-app"
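Based on the dataclasses above, a deploy-app exchange over the WebSocket plausibly looks like the sketch below. Field names come from `DeployAppRequest` and `DeployAppReply`; every value is an example, not captured traffic.

```python
# Illustrative WebSocket payloads; shapes inferred from DeployAppRequest / DeployAppReply.
deploy_app_request = {
    "type": "deploy-app",
    "message_id": "abc-123",                                   # example ID
    "notebook_path": "notebooks/analysis.ipynb",               # example path
    "selected_files": ["app.py", "requirements.txt", "data"],  # new in this release
    "jwt_token": "<jwt>",
}

# A successful reply echoes the request ID and carries the deployed app URL.
deploy_app_reply = {
    "type": "deploy-app",
    "parent_id": "abc-123",
    "url": "https://apps.example.com/my-app",                  # example URL
    "error": None,
}
```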
mito_ai/app_manager/handlers.py CHANGED
@@ -12,6 +12,8 @@ from mito_ai.app_manager.models import (
     AppManagerError,
     ManageAppRequest,
     ManageAppReply,
+    CheckAppStatusRequest,
+    CheckAppStatusReply,
     ErrorMessage,
     MessageType
 )
@@ -57,6 +59,10 @@ class AppManagerHandler(BaseWebSocketHandler):
                 # Handle manage app request
                 manage_app_request = ManageAppRequest(**parsed_message)
                 await self._handle_manage_app(manage_app_request)
+            elif message_type == MessageType.CHECK_APP_STATUS.value:
+                # Handle check app status request
+                check_status_request = CheckAppStatusRequest(**parsed_message)
+                await self._handle_check_app_status(check_status_request)
             else:
                 self.log.error(f"Unknown message type: {message_type}")
                 error_response = ErrorMessage(
@@ -131,4 +137,31 @@ class AppManagerHandler(BaseWebSocketHandler):
                 error=error,
                 message_id=request.message_id
             )
+            self.reply(error_reply)
+
+    async def _handle_check_app_status(self, request: CheckAppStatusRequest) -> None:
+        """Handle a check app status request."""
+        self.log.info("In check app status")
+        try:
+            # Make a HEAD request to check if the app URL is accessible
+            response = requests.head(request.app_url, timeout=10, verify=False)
+            self.log.debug(f"Is app accessible: {response.status_code}")
+            is_accessible = response.status_code==200
+
+            # Create successful response
+            reply = CheckAppStatusReply(
+                is_accessible=is_accessible
+            )
+
+            self.reply(reply)
+
+        except Exception as e:
+            self.log.error(f"Error checking app status: {e}", exc_info=e)
+            error = AppManagerError.from_exception(e)
+
+            # Return error response
+            error_reply = CheckAppStatusReply(
+                is_accessible=False,
+                error=error
+            )
             self.reply(error_reply)
mito_ai/app_manager/models.py CHANGED
@@ -8,6 +8,7 @@ from typing import List, Optional
 class MessageType(str, Enum):
     """Types of app manager messages."""
     MANAGE_APP = "manage-app"
+    CHECK_APP_STATUS = "check-app-status"
 
 
 @dataclass(frozen=True)
@@ -15,7 +16,7 @@ class ManageAppRequest:
     """Request to manage apps."""
     type: str = "manage-app"
     jwt_token: Optional[str] = None
-    message_id: Optional[str] = None
+    message_id: Optional[str] = None
 
 @dataclass(frozen=True)
 class App:
@@ -48,6 +49,19 @@ class ManageAppReply:
     error: Optional[AppManagerError] = None
     message_id: Optional[str] = None
 
+@dataclass(frozen=True)
+class CheckAppStatusRequest:
+    """Request to check app status."""
+    app_url: str
+    type: str = "check-app-status"
+
+@dataclass(frozen=True)
+class CheckAppStatusReply:
+    """Reply to a check app status request."""
+    is_accessible: bool
+    type: str = "check-app-status"
+    error: Optional[AppManagerError] = None
+
 @dataclass(frozen=True)
 class ErrorMessage:
     """Error message."""
mito_ai/completions/handlers.py CHANGED
@@ -14,6 +14,7 @@ import tornado.web
 from jupyter_core.utils import ensure_async
 from jupyter_server.base.handlers import JupyterHandler
 from tornado.websocket import WebSocketHandler
+from openai.types.chat import ChatCompletionMessageParam
 from mito_ai.completions.message_history import GlobalMessageHistory
 from mito_ai.logger import get_logger
 from mito_ai.completions.models import (
@@ -67,6 +68,7 @@ class CompletionHandler(JupyterHandler, WebSocketHandler):
         self._llm = llm
         self.is_pro = is_pro()
         self._selected_model = FALLBACK_MODEL
+        self.is_electron = False
         identify(llm.key_type)
 
     @property
@@ -128,6 +130,18 @@ class CompletionHandler(JupyterHandler, WebSocketHandler):
             parsed_message = json.loads(message)
             metadata_dict = parsed_message.get('metadata', {})
             type: MessageType = MessageType(parsed_message.get('type'))
+
+            # Extract environment information from the message
+            environment = parsed_message.get('environment', {})
+            if environment:
+                is_electron = environment.get('isElectron', None)
+                if is_electron is not None:
+                    if is_electron != self.is_electron:
+                        # If the is_electron status is different, log it
+                        identify(key_type=self._llm.key_type, is_electron=is_electron)
+
+                    self.is_electron = is_electron
+
         except ValueError as e:
             self.log.error("Invalid completion request.", exc_info=e)
             return
@@ -209,7 +223,32 @@ class CompletionHandler(JupyterHandler, WebSocketHandler):
             )
             self.reply(reply)
             return
-
+
+        if type == MessageType.STOP_AGENT:
+            thread_id_to_stop = metadata_dict.get('threadId')
+            if thread_id_to_stop:
+                self.log.info(f"Stopping agent, thread ID: {thread_id_to_stop}")
+
+                ai_optimized_message: ChatCompletionMessageParam = {
+                    "role": "assistant",
+                    "content": "The user made the following request: Stop processing my last request. I want to change it. Please answer my future requests without going back and finising my previous request."
+                }
+                display_optimized_message: ChatCompletionMessageParam = {
+                    "role": "assistant",
+                    "content": "Agent interupted by user "
+                }
+
+                await message_history.append_message(
+                    ai_optimized_message=ai_optimized_message,
+                    display_message=display_optimized_message,
+                    model=self._selected_model,
+                    llm_provider=self._llm,
+                    thread_id=thread_id_to_stop
+                )
+            else:
+                self.log.info("Trying to stop agent, but no thread ID available")
+            return
+
         try:
             # Get completion based on message type
             completion = None
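A sketch of the client messages these handler changes appear to expect follows; the shapes are inferred from the code above, and the field values are examples rather than a documented schema.

```python
# Inferred client-side message shapes for the new behavior; values are examples only.
completion_message = {
    "type": "chat",                         # example completion message type
    "metadata": {"threadId": "thread-1"},   # example metadata
    "environment": {"isElectron": True},    # new: lets the server re-identify with Electron status
}

stop_agent_message = {
    "type": "stop_agent",                   # new STOP_AGENT message type
    "metadata": {"threadId": "thread-1"},   # thread whose agent run should be stopped
}
```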
mito_ai/completions/models.py CHANGED
@@ -3,7 +3,7 @@
 
 import traceback
 from dataclasses import dataclass, field
-from typing import List, Literal, Optional, NewType, Dict
+from typing import List, Literal, Optional, NewType, Dict, Any
 from openai.types.chat import ChatCompletionMessageParam
 from enum import Enum
 from pydantic import BaseModel
@@ -64,6 +64,7 @@ class MessageType(Enum):
     DELETE_THREAD = "delete_thread"
     UPDATE_MODEL_CONFIG = "update_model_config"
     STREAMLIT_CONVERSION = "streamlit_conversion"
+    STOP_AGENT = "stop_agent"
 
 
 @dataclass(frozen=True)
@@ -155,6 +156,9 @@ class CompletionRequest:
     # Whether to stream the response (if supported by the model).
     stream: bool = False
 
+    # Environment information from the client
+    environment: Optional[Dict[str, Any]] = None
+
 
 @dataclass(frozen=True)
 class AICapabilities:
mito_ai/completions/prompt_builders/agent_system_message.py CHANGED
@@ -51,7 +51,8 @@ Format:
         code: str
         code_summary: str
         cell_type: 'code' | 'markdown'
-    }}
+    }},
+    analysis_assumptions: Optional[List[str]]
 }}
 
 Important information:
@@ -60,7 +61,7 @@ Important information:
 3. The code should be the full contents of that updated code cell. The code that you return will overwrite the existing contents of the code cell so it must contain all necessary code.
 4. The code_summary must be a very short phrase (1–5 words maximum) that begins with a verb ending in "-ing" (e.g., "Loading data", "Filtering rows", "Calculating average", "Plotting revenue"). Avoid full sentences or explanations—this should read like a quick commit message or code label, not a description.
 5. Important: Only use the CELL_UPDATE tool if you want to add/modify a notebook cell in response to the user's request. If the user is just sending you a friendly greeting or asking you a question about yourself, you SHOULD NOT USE A CELL_UPDATE tool because it does not require modifying the notebook. Instead, just use the FINISHED_TASK response.
-6. The assumptions is an optional list of critical assumptions that you made about the data or analysis approach. The assumptions you list here will be displayed to the user so that they can confirm or correct the assumptions. For example: ["NaN values in the impressions column represent 0 impressions", "Only crashes with pedestrian or cyclist fatalities are considered fatal crashes", "Intervention priority combines both volume and severity to identify maximum impact opportunities"].
+6. The analysis_assumptions is an optional list of critical assumptions that you made about the data or analysis approach. The assumptions you list here will be displayed to the user so that they can confirm or correct the assumptions. For example: ["NaN values in the impressions column represent 0 impressions", "Only crashes with pedestrian or cyclist fatalities are considered fatal crashes", "Intervention priority combines both volume and severity to identify maximum impact opportunities"].
 7. Only include important data and analytical assumptions that if incorrect would fundamentally change your analysis conclusions. These should be data handling decisions, methodological choices, and definitional boundaries. Do not include: obvious statements ("Each record is counted once"), result interpretation guidance ("Gaps in the plot represent zero values"), display choices ("Data is sorted for clarity"), internal reasoning ("Bar chart is better than line plot"), or environment assumptions ("Library X is installed"). Prioritize quality over quantity - include only the most critical assumptions or omit the field entirely if there are no critical assumptions made in this step that have not already be shared with the user. If you ever doubt whether an assumption is critical enough to be shared with the user as an assumption, don't include it. Most messages should not include an assumption.
 8. Do not include the same assumption or variations of the same assumption multiple times in the same conversation. Once you have presented the assumption to the user, they will already have the opportunity to confirm or correct it so do not include it again.
 
@@ -77,7 +78,8 @@ Format:
         code: str
         code_summary: str
         cell_type: 'code' | 'markdown'
-    }}
+    }},
+    analysis_assumptions: Optional[List[str]]
 }}
 
 Important information:
@@ -86,7 +88,7 @@ Important information:
 3. The code should be the full contents of that updated code cell. The code that you return will overwrite the existing contents of the code cell so it must contain all necessary code.
 4. code_summary must be a very short phrase (1–5 words maximum) that begins with a verb ending in "-ing" (e.g., "Loading data", "Filtering rows", "Calculating average", "Plotting revenue"). Avoid full sentences or explanations—this should read like a quick commit message or code label, not a description.
 5. The cell_type should only be 'markdown' if there is no code to add. There may be times where the code has comments. These are still code cells and should have the cell_type 'code'. Any cells that are labeled 'markdown' will be converted to markdown cells by the user.
-6. The assumptions is an optional list of critical assumptions that you made about the data or analysis approach. The assumptions you list here will be displayed to the user so that they can confirm or correct the assumptions. For example: ["NaN values in the impressions column represent 0 impressions", "Only crashes with pedestrian or cyclist fatalities are considered fatal crashes", "Intervention priority combines both volume and severity to identify maximum impact opportunities"].
+6. The analysis_assumptions is an optional list of critical assumptions that you made about the data or analysis approach. The assumptions you list here will be displayed to the user so that they can confirm or correct the assumptions. For example: ["NaN values in the impressions column represent 0 impressions", "Only crashes with pedestrian or cyclist fatalities are considered fatal crashes", "Intervention priority combines both volume and severity to identify maximum impact opportunities"].
 7. Only include important data and analytical assumptions that if incorrect would fundamentally change your analysis conclusions. These should be data handling decisions, methodological choices, and definitional boundaries. Do not include: obvious statements ("Each record is counted once"), result interpretation guidance ("Gaps in the plot represent zero values"), display choices ("Data is sorted for clarity"), internal reasoning ("Bar chart is better than line plot"), or environment assumptions ("Library X is installed"). Prioritize quality over quantity - include only the most critical assumptions or omit the field entirely if there are no critical assumptions made in this step that have not already be shared with the user. If you ever doubt whether an assumption is critical enough to be shared with the user as an assumption, don't include it. Most messages should not include an assumption.
 8. Do not include the same assumption or variations of the same assumption multiple times in the same conversation. Once you have presented the assumption to the user, they will already have the opportunity to confirm or correct it so do not include it again.
 
mito_ai/completions/prompt_builders/prompt_constants.py CHANGED
@@ -125,15 +125,33 @@ If the user has requested data that you believe is stored in the database:
     connections[connection_name]["username"]
     ```
 
+- The user may colloquially ask for a "list of x", always assume they want a pandas DataFrame.
+- When working with dataframes created from an SQL query, ALWAYS use lowercase column names.
+- If you think the requested data is stored in the database, but you are unsure, then ask the user for clarification.
+
+## Additional MSSQL Rules
+
+- When connecting to a Microsoft SQL Server (MSSQL) database, use the following format:
+
+    ```
+    import urllib.parse
+
+    encoded_password = urllib.parse.quote_plus(password)
+    conn_str = f"mssql+pyodbc://username:encoded_password@host:port/database?driver=ODBC+Driver+18+for+SQL+Server&TrustServerCertificate=yes"
+    ```
+
+- Always URL-encode passwords for MSSQL connections to handle special characters properly.
+- Include the port number in MSSQL connection strings.
+- Use "ODBC+Driver+18+for+SQL+Server" (with plus signs) in the driver parameter.
+- Always include "TrustServerCertificate=yes" for MSSQL connections to avoid SSL certificate issues.
+
+## Additional Oracle Rules
+
 - When connecting to an Oracle database, use the following format:
     ```
     conn_str = f"oracle+oracledb://username:password@host:port?service_name=service_name"
     ```
 
-- The user may colloquially ask for a "list of x", always assume they want a pandas DataFrame.
-- When working with dataframes created from an SQL query, ALWAYS use lowercase column names.
-- If you think the requested data is stored in the database, but you are unsure, then ask the user for clarification.
-
 Here is the schema:
 {schemas}
 """