@aj-archipelago/cortex 1.1.33 → 1.1.35

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (35)
  1. package/README.md +98 -1
  2. package/config/dynamicPathwaysConfig.example.json +4 -0
  3. package/config.js +83 -10
  4. package/helper-apps/cortex-autogen/function_app.py +7 -0
  5. package/helper-apps/cortex-autogen/myautogen.py +109 -20
  6. package/helper-apps/cortex-autogen/prompt_summary.txt +13 -4
  7. package/helper-apps/cortex-autogen/requirements.txt +1 -1
  8. package/helper-apps/cortex-file-handler/package-lock.json +387 -203
  9. package/helper-apps/cortex-file-handler/package.json +3 -3
  10. package/helper-apps/cortex-whisper-wrapper/.dockerignore +1 -0
  11. package/helper-apps/cortex-whisper-wrapper/app.py +3 -1
  12. package/helper-apps/cortex-whisper-wrapper/requirements.txt +1 -1
  13. package/lib/pathwayManager.js +422 -0
  14. package/lib/requestExecutor.js +19 -15
  15. package/lib/util.js +4 -1
  16. package/package.json +5 -1
  17. package/pathways/code_human_input.js +47 -0
  18. package/pathways/dynamic/pathways.json +1 -0
  19. package/pathways/flux_image.js +12 -0
  20. package/pathways/index.js +4 -0
  21. package/server/graphql.js +67 -37
  22. package/server/modelExecutor.js +4 -0
  23. package/server/pathwayResolver.js +1 -1
  24. package/server/plugins/claude3VertexPlugin.js +86 -79
  25. package/server/plugins/gemini15VisionPlugin.js +23 -12
  26. package/server/plugins/geminiVisionPlugin.js +32 -25
  27. package/server/plugins/modelPlugin.js +15 -2
  28. package/server/plugins/openAiChatPlugin.js +1 -1
  29. package/server/plugins/openAiVisionPlugin.js +19 -6
  30. package/server/plugins/runwareAIPlugin.js +81 -0
  31. package/server/rest.js +31 -13
  32. package/server/typeDef.js +33 -15
  33. package/tests/claude3VertexPlugin.test.js +1 -1
  34. package/tests/multimodal_conversion.test.js +328 -0
  35. package/tests/vision.test.js +20 -5
package/README.md CHANGED
@@ -432,7 +432,7 @@ Configuration of Cortex is done via a [convict](https://github.com/mozilla/node-
 - `PORT`: The port number for the Cortex server. Default is 4000. The value can be set using the `CORTEX_PORT` environment variable.
 - `storageConnectionString`: The connection string used for accessing storage. This is sensitive information and has no default value. The value can be set using the `STORAGE_CONNECTION_STRING` environment variable.
 
-The `buildPathways` function takes the config object and builds the `pathways` object by loading the core pathways and any custom pathways specified in the `pathwaysPath` property of the config object. The function returns the `pathways` object.
+The `buildPathways` function takes the config object and builds the `pathways` and `pathwayManager` objects by loading the core pathways and any custom pathways specified in the `pathwaysPath` property of the config object. The function returns the `pathways` and `pathwayManager` objects.
 
 The `buildModels` function takes the `config` object and builds the `models` object by compiling handlebars templates for each model specified in the `models` property of the config object. The function returns the `models` object.
 
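Note: callers of `buildPathways` must account for the changed return shape. A minimal sketch of consuming it (the caller code here is illustrative, not from the package):

```javascript
// Hypothetical caller -- `config` is the convict object used throughout config.js.
// buildPathways now resolves to an object instead of the bare pathways map.
const { pathways, pathwayManager } = await buildPathways(config);

// pathwayManager is null when dynamic pathways are not configured (see the
// Dynamic Pathways section added below), so guard before using it.
if (pathwayManager) {
  // runtime-managed pathways are available here
}
```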
@@ -471,3 +471,100 @@ Cortex is a constantly evolving project, and the following features are coming s
 * Prompt execution context preservation between calls (to enable interactive, multi-call integrations with LangChain and other technologies)
 * Model-specific cache key optimizations to increase hit rate and reduce cache size
 * Structured analytics and reporting on AI API call frequency, cost, cache hit rate, etc.
+
+## Dynamic Pathways
+
+Cortex supports dynamic pathways, which allow for the creation and management of pathways at runtime. This feature enables users to define custom pathways without modifying the core Cortex codebase.
+
+### How It Works
+
+1. Dynamic pathways are stored either locally or in cloud storage (Azure Blob Storage or AWS S3).
+2. The `PathwayManager` class handles loading, saving, and managing these dynamic pathways.
+3. Dynamic pathways can be added, updated, or removed via GraphQL mutations.
+
+### Configuration
+
+To use dynamic pathways, you need to provide a JSON configuration file or a JSON string. There are two ways to specify this configuration:
+
+1. Using a configuration file:
+   Set the `DYNAMIC_PATHWAYS_CONFIG_FILE` environment variable to the path of your JSON configuration file.
+
+2. Using a JSON string:
+   Set the `DYNAMIC_PATHWAYS_CONFIG_JSON` environment variable with the JSON configuration as a string.
+
+The configuration should include the following properties:
+
+```json
+{
+  "storageType": "local" | "azure" | "s3",
+  "filePath": "./dynamic/pathways.json", // Only for local storage
+  "azureStorageConnectionString": "your_connection_string", // Only for Azure
+  "azureContainerName": "cortexdynamicpathways", // Optional, default is "cortexdynamicpathways"
+  "awsAccessKeyId": "your_access_key_id", // Only for AWS S3
+  "awsSecretAccessKey": "your_secret_access_key", // Only for AWS S3
+  "awsRegion": "your_aws_region", // Only for AWS S3
+  "awsBucketName": "cortexdynamicpathways" // Optional, default is "cortexdynamicpathways"
+}
+```
+
+### Storage Options
+
+1. Local Storage (default):
+   - Set `storageType` to `"local"`
+   - Specify `filePath` for the local JSON file (default: "./dynamic/pathways.json")
+
+2. Azure Blob Storage:
+   - Set `storageType` to `"azure"`
+   - Provide `azureStorageConnectionString`
+   - Optionally set `azureContainerName` (default: "cortexdynamicpathways")
+
+3. AWS S3:
+   - Set `storageType` to `"s3"`
+   - Provide `awsAccessKeyId`, `awsSecretAccessKey`, and `awsRegion`
+   - Optionally set `awsBucketName` (default: "cortexdynamicpathways")
+
+### Usage
+
+Dynamic pathways can be managed through GraphQL mutations. Here are the available operations:
+
+1. Adding or updating a pathway:
+
+```graphql
+mutation PutPathway($name: String!, $pathway: PathwayInput!, $userId: String!, $secret: String!, $displayName: String, $key: String!) {
+  putPathway(name: $name, pathway: $pathway, userId: $userId, secret: $secret, displayName: $displayName, key: $key) {
+    name
+  }
+}
+```
+
+2. Deleting a pathway:
+
+```graphql
+mutation DeletePathway($name: String!, $userId: String!, $secret: String!, $key: String!) {
+  deletePathway(name: $name, userId: $userId, secret: $secret, key: $key)
+}
+```
+
+3. Executing a dynamic pathway:
+
+```graphql
+query ExecuteWorkspace($userId: String!, $pathwayName: String!, $text: String!) {
+  executeWorkspace(userId: $userId, pathwayName: $pathwayName, text: $text) {
+    result
+  }
+}
+```
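The operations above are ordinary GraphQL, so any client works. A minimal sketch of publishing a pathway over HTTP (the endpoint assumes the default port 4000 documented above; all variable values and the `PathwayInput` shape shown are illustrative assumptions, not part of the package docs):

```javascript
// Hypothetical client-side publish call against a local Cortex server.
const endpoint = 'http://localhost:4000/graphql'; // assumed default endpoint

const query = `mutation PutPathway($name: String!, $pathway: PathwayInput!, $userId: String!, $secret: String!, $displayName: String, $key: String!) {
  putPathway(name: $name, pathway: $pathway, userId: $userId, secret: $secret, displayName: $displayName, key: $key) {
    name
  }
}`;

const response = await fetch(endpoint, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    query,
    variables: {
      name: 'headline_summarizer',                // illustrative values
      pathway: { prompt: 'Summarize: {{text}}' }, // PathwayInput shape assumed
      userId: 'user-123',
      secret: 'per-pathway-secret',
      displayName: 'Headline Summarizer',
      key: process.env.PATHWAY_PUBLISH_KEY,       // see Security below
    },
  }),
});
const { data, errors } = await response.json();
```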
+
+### Security
+
+To ensure the security of dynamic pathways:
+
+1. A `PATHWAY_PUBLISH_KEY` environment variable must be set to enable pathway publishing.
+2. This key must be provided in the `key` parameter when adding, updating, or deleting pathways.
+3. Each pathway is associated with a `userId` and `secret`. The secret must be provided to modify or delete an existing pathway.
+
+### Synchronization across multiple instances
+
+Each instance of Cortex maintains its own local cache of pathways. On every dynamic pathway request, it checks if the local cache is up to date by comparing the last modified timestamp of the storage with the last update time of the local cache. If the local cache is out of date, it reloads the pathways from storage.
+
+This approach ensures that all instances of Cortex will eventually have access to the most up-to-date dynamic pathways without requiring immediate synchronization.
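A minimal sketch of that staleness check, assuming hypothetical storage and cache interfaces (the real implementation lives in `lib/pathwayManager.js`; the method names below are illustrative, not the actual API):

```javascript
// Illustrative only: compare the storage backend's last-modified time against
// the time this instance last refreshed its in-memory pathway cache.
async function getFreshPathways(storage, cache) {
  const lastModified = await storage.getLastModified(); // blob/object/file mtime (assumed)
  if (!cache.updatedAt || lastModified > cache.updatedAt) {
    cache.pathways = await storage.load(); // cache is stale: reload from storage
    cache.updatedAt = Date.now();
  }
  return cache.pathways;
}
```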
package/config/dynamicPathwaysConfig.example.json CHANGED
@@ -0,0 +1,4 @@
+{
+  "storageType": "azure",
+  "filePath": "./pathways/dynamic/pathways.json",
+}
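As written, this example would not survive Cortex's strict JSON load: config.js (next section) reads the file with `fs.readFileSync` and `JSON.parse`, which rejects the trailing comma after the `"filePath"` value, and per the README schema above `"filePath"` applies to `"local"` storage rather than `"azure"`. A sketch of the load path this file goes through (the wiring is illustrative; only the parse calls mirror config.js):

```javascript
// Hypothetical launch wiring; mirrors the fs.readFileSync/JSON.parse path in config.js.
import fs from 'fs';

process.env.DYNAMIC_PATHWAYS_CONFIG_FILE = './config/dynamicPathwaysConfig.example.json';
const raw = fs.readFileSync(process.env.DYNAMIC_PATHWAYS_CONFIG_FILE, 'utf8');
const dynamicPathwayConfig = JSON.parse(raw); // throws SyntaxError on trailing commas
```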
package/config.js CHANGED
@@ -5,18 +5,19 @@ import fs from 'fs';
 import { fileURLToPath, pathToFileURL } from 'url';
 import GcpAuthTokenHelper from './lib/gcpAuthTokenHelper.js';
 import logger from './lib/logger.js';
+import PathwayManager from './lib/pathwayManager.js';
 
 const __dirname = path.dirname(fileURLToPath(import.meta.url));
 
 convict.addFormat({
   name: 'string-array',
-  validate: function(val) {
-    if (!Array.isArray(val)) {
-      throw new Error('must be of type Array');
-    }
+  validate: function (val) {
+    if (!Array.isArray(val)) {
+      throw new Error('must be of type Array');
+    }
   },
-  coerce: function(val) {
-    return val.split(',');
+  coerce: function (val) {
+    return val.split(',');
   },
 });
 
@@ -194,6 +195,13 @@ var config = convict({
       "requestsPerSecond": 10,
       "maxTokenLength": 200000
     },
+    "runware-flux-schnell": {
+      "type": "RUNWARE-AI",
+      "url": "https://api.runware.ai/v1",
+      "headers": {
+        "Content-Type": "application/json"
+      },
+    },
   },
   env: 'CORTEX_MODELS'
 },
@@ -240,6 +248,12 @@ var config = convict({
     env: 'REDIS_ENCRYPTION_KEY',
     sensitive: true
   },
+  runwareAiApiKey: {
+    format: String,
+    default: null,
+    env: 'RUNWARE_API_KEY',
+    sensitive: true
+  },
   dalleImageApiUrl: {
     format: String,
     default: 'null',
@@ -290,6 +304,39 @@ if (config.get('gcpServiceAccountKey')) {
   config.set('gcpAuthTokenHelper', gcpAuthTokenHelper);
 }
 
+// Load dynamic pathways from JSON file or cloud storage
+const createDynamicPathwayManager = async (config, basePathway) => {
+  const { dynamicPathwayConfig } = config.getProperties();
+
+  if (!dynamicPathwayConfig) {
+    return null;
+  }
+
+  const storageConfig = {
+    storageType: dynamicPathwayConfig.storageType || 'local',
+    filePath: dynamicPathwayConfig.filePath || "./dynamic/pathways.json",
+    azureStorageConnectionString: dynamicPathwayConfig.azureStorageConnectionString,
+    azureContainerName: dynamicPathwayConfig.azureContainerName || 'cortexdynamicpathways',
+    awsAccessKeyId: dynamicPathwayConfig.awsAccessKeyId,
+    awsSecretAccessKey: dynamicPathwayConfig.awsSecretAccessKey,
+    awsRegion: dynamicPathwayConfig.awsRegion,
+    awsBucketName: dynamicPathwayConfig.awsBucketName || 'cortexdynamicpathways',
+  };
+
+  const pathwayManager = new PathwayManager(storageConfig, basePathway);
+
+  try {
+    const dynamicPathways = await pathwayManager.initialize();
+    logger.info(`Dynamic pathways loaded successfully`);
+    logger.info(`Loaded dynamic pathways for users: [${Object.keys(dynamicPathways).join(", ")}]`);
+
+    return pathwayManager;
+  } catch (error) {
+    logger.error(`Error loading dynamic pathways: ${error.message}`);
+    return pathwayManager;
+  }
+};
+
 // Build and load pathways to config
 const buildPathways = async (config) => {
   const { pathwaysPath, corePathwaysPath, basePathwayPath } = config.getProperties();
@@ -312,6 +359,32 @@ const buildPathways = async (config) => {
     loadedPathways = { ...loadedPathways, ...customPathways };
   }
 
+
+  const { DYNAMIC_PATHWAYS_CONFIG_FILE, DYNAMIC_PATHWAYS_CONFIG_JSON } = process.env;
+
+  let dynamicPathwayConfig;
+
+  // Load dynamic pathways
+  let pathwayManager;
+  try {
+    if (DYNAMIC_PATHWAYS_CONFIG_FILE) {
+      logger.info(`Reading dynamic pathway config from ${DYNAMIC_PATHWAYS_CONFIG_FILE}`);
+      dynamicPathwayConfig = JSON.parse(fs.readFileSync(DYNAMIC_PATHWAYS_CONFIG_FILE, 'utf8'));
+    } else if (DYNAMIC_PATHWAYS_CONFIG_JSON) {
+      logger.info(`Reading dynamic pathway config from DYNAMIC_PATHWAYS_CONFIG_JSON variable`);
+      dynamicPathwayConfig = JSON.parse(DYNAMIC_PATHWAYS_CONFIG_JSON);
+    }
+    else {
+      logger.warn('Dynamic pathways are not enabled. Please set the DYNAMIC_PATHWAYS_CONFIG_FILE or DYNAMIC_PATHWAYS_CONFIG_JSON environment variable to enable dynamic pathways.');
+    }
+
+    config.load({ dynamicPathwayConfig });
+    pathwayManager = await createDynamicPathwayManager(config, basePathway);
+  } catch (error) {
+    logger.error(`Error loading dynamic pathways: ${error.message}`);
+    process.exit(1);
+  }
+
 // This is where we integrate pathway overrides from the config
 // file. This can run into a partial definition issue if the
 // config file contains pathways that no longer exist.
@@ -322,9 +395,9 @@ const buildPathways = async (config) => {
   }
 
   // Add pathways to config
-  config.load({ pathways })
+  config.load({ pathways });
 
-  return pathways;
+  return { pathwayManager, pathways };
 }
 
 // Build and load models to config
@@ -336,7 +409,7 @@ const buildModels = (config) => {
     if (!model.name) {
       model.name = key;
     }
-
+
     // if model is in old format, convert it to new format
     if (!model.endpoints) {
       model = {
@@ -354,7 +427,7 @@ const buildModels = (config) => {
   }
 
   // compile handlebars templates for each endpoint
-  model.endpoints = model.endpoints.map(endpoint =>
+  model.endpoints = model.endpoints.map(endpoint =>
     JSON.parse(HandleBars.compile(JSON.stringify(endpoint))({ ...model, ...config.getEnv(), ...config.getProperties() }))
   );
 
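The endpoint templating above means any `{{...}}` placeholder anywhere in a model endpoint is resolved against the model, environment, and config properties. A standalone sketch of the same serialize-template-reparse pattern (the placeholder name and context values are assumed stand-ins):

```javascript
// Illustration of the HandleBars-over-JSON trick used in buildModels.
import HandleBars from 'handlebars';

const endpoint = {
  url: 'https://api.example.com/v1',               // assumed example endpoint
  headers: { 'api-key': '{{runwareAiApiKey}}' },   // placeholder filled from config
};
const context = { runwareAiApiKey: 'secret-from-env' }; // stand-in for config values

// Serialize, compile, re-parse: placeholders nested at any depth get filled in.
const compiled = JSON.parse(HandleBars.compile(JSON.stringify(endpoint))(context));
// compiled.headers['api-key'] === 'secret-from-env'
```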
package/helper-apps/cortex-autogen/function_app.py CHANGED
@@ -5,6 +5,13 @@ from azure.storage.queue import QueueClient
 import os
 import redis
 from myautogen import process_message
+import subprocess
+import sys
+
+def install_packages():
+    subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", "requirements.txt"])
+
+install_packages()
 
 app = func.FunctionApp()
 
package/helper-apps/cortex-autogen/myautogen.py CHANGED
@@ -12,11 +12,28 @@ import requests
 import pathlib
 import pymongo
 import logging
-from datetime import datetime, timezone
-from tools.sasfileuploader import autogen_sas_uploader
+from datetime import datetime, timezone, timedelta
 import shutil
+import time
+import base64
+import zipfile
+from azure.storage.blob import BlobServiceClient, generate_blob_sas, BlobSasPermissions
+
 load_dotenv()
 
+connection_string = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
+human_input_queue_name = os.environ.get("HUMAN_INPUT_QUEUE_NAME", "autogen-human-input-queue")
+human_input_queue_client = QueueClient.from_connection_string(connection_string, human_input_queue_name)
+
+def check_for_human_input(request_id):
+    messages = human_input_queue_client.receive_messages()
+    for message in messages:
+        content = json.loads(base64.b64decode(message.content).decode('utf-8'))
+        if content['codeRequestId'] == request_id:
+            human_input_queue_client.delete_message(message)
+            return content['text']
+    return None
+
 DEFAULT_SUMMARY_PROMPT = "Summarize the takeaway from the conversation. Do not add any introductory phrases."
 try:
     with open("prompt_summary.txt", "r") as file:
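The counterpart sender lives on the Cortex side (`pathways/code_human_input.js` in the file list). A minimal sketch of enqueuing input that `check_for_human_input` above can consume — the Python side base64-decodes each message and matches on `codeRequestId`, so the sender must encode accordingly (queue name mirrors the Python default; the request id and client wiring are illustrative):

```javascript
// Hypothetical sender using @azure/storage-queue.
import { QueueClient } from '@azure/storage-queue';

const queueClient = new QueueClient(
  process.env.AZURE_STORAGE_CONNECTION_STRING,
  'autogen-human-input-queue' // default queue name in myautogen.py
);

// "PAUSE" and "TERMINATE" are handled specially by logged_send (see the
// @@ -175,10 +221,35 @@ hunk below); any other text is forwarded to the agent.
const payload = { codeRequestId: 'request-123', text: 'PAUSE' };
await queueClient.sendMessage(Buffer.from(JSON.stringify(payload)).toString('base64'));
```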
@@ -36,12 +53,6 @@ def store_in_mongo(data):
     except Exception as e:
         logging.error(f"An error occurred while storing data in MongoDB: {str(e)}")
 
-app = func.FunctionApp()
-
-connection_string = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
-queue_name = os.environ.get("QUEUE_NAME", "autogen-message-queue")
-queue_client = QueueClient.from_connection_string(connection_string, queue_name)
-
 redis_client = redis.from_url(os.environ['REDIS_CONNECTION_STRING'])
 channel = 'requestProgress'
 
@@ -58,7 +69,7 @@ def publish_request_progress(data):
     if connect_redis():
         try:
             message = json.dumps(data)
-            logging.info(f"Publishing message {message} to channel {channel}")
+            #logging.info(f"Publishing message {message} to channel {channel}")
            redis_client.publish(channel, message)
         except Exception as e:
             logging.error(f"Error publishing message: {e}")
@@ -95,8 +106,41 @@
         logging.error(f"Error fetching from URL: {e}")
         return ""
 
+
+def zip_and_upload_tmp_folder(temp_dir):
+    zip_path = os.path.join(temp_dir, "tmp_contents.zip")
+    with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
+        for root, _, files in os.walk(temp_dir):
+            for file in files:
+                file_path = os.path.join(root, file)
+                arcname = os.path.relpath(file_path, temp_dir)
+                zipf.write(file_path, arcname)
+
+    blob_service_client = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])
+    container_name = os.environ.get("AZURE_BLOB_CONTAINER", "autogen-uploads")
+    blob_name = f"tmp_contents_{datetime.now(timezone.utc).strftime('%Y%m%d%H%M%S')}.zip"
+    blob_client = blob_service_client.get_blob_client(container=container_name, blob=blob_name)
+
+    with open(zip_path, "rb") as data:
+        blob_client.upload_blob(data)
+
+    account_key = blob_service_client.credential.account_key
+    account_name = blob_service_client.account_name
+    expiry = datetime.now(timezone.utc) + timedelta(hours=1)
+
+    sas_token = generate_blob_sas(
+        account_name,
+        container_name,
+        blob_name,
+        account_key=account_key,
+        permission=BlobSasPermissions(read=True),
+        expiry=expiry
+    )
+
+    return f"{blob_client.url}?{sas_token}"
+
 def process_message(message_data, original_request_message):
-    logging.info(f"Processing Message: {message_data}")
+    # logging.info(f"Processing Message: {message_data}")
     try:
         started_at = datetime.now()
         message = message_data['message']
@@ -117,14 +161,15 @@
         total_messages = 20 * 2
         all_messages = []
 
+        terminate_count = 0
         def is_termination_msg(m):
-            content = m.get("content", "")
-            if message_count == 0:
+            nonlocal terminate_count
+            content = m.get("content", "").strip()
+            if not content:
                 return False
-            return (m.get("role") == "assistant" and not content.strip()) or \
-                content.rstrip().endswith("TERMINATE") or \
-                "first message must use the" in content.lower() or \
-                len(content.strip()) == 0
+            if content.rstrip().endswith("TERMINATE"):
+                terminate_count += 1
+            return terminate_count >= 3 or "first message must use the" in content.lower()
 
         system_message_given = get_given_system_message()
         system_message_assistant = AssistantAgent.DEFAULT_SYSTEM_MESSAGE
@@ -137,17 +182,18 @@
         assistant = AssistantAgent("assistant",
             llm_config=llm_config,
             system_message=system_message_assistant,
-            code_execution_config={"executor": code_executor},
+            # code_execution_config={"executor": code_executor},
             is_termination_msg=is_termination_msg,
         )
 
         user_proxy = UserProxyAgent(
             "user_proxy",
-            llm_config=llm_config,
+            # llm_config=llm_config,
             system_message=system_message_given,
             code_execution_config={"executor": code_executor},
             human_input_mode="NEVER",
             max_consecutive_auto_reply=20,
+            is_termination_msg=is_termination_msg,
         )
 
         # description = "Upload a file to Azure Blob Storage and get URL back with a SAS token. Requires AZURE_STORAGE_CONNECTION_STRING and AZURE_BLOB_CONTAINER environment variables. Input: file_path (str). Output: SAS URL (str) or error message."
@@ -175,10 +221,35 @@
             nonlocal message_count, all_messages
             if not message:
                 return
+
+            if True or sender.name == "user_proxy":
+                human_input = check_for_human_input(request_id)
+                if human_input:
+                    if human_input == "TERMINATE":
+                        logging.info("Terminating conversation")
+                        raise Exception("Conversation terminated by user")
+                    elif human_input == "PAUSE":
+                        logging.info("Pausing conversation")
+                        pause_start = time.time()
+                        while time.time() - pause_start < 60*15:  # 15 minutes pause timeout
+                            time.sleep(10)
+                            new_input = check_for_human_input(request_id)
+                            if new_input:
+                                logging.info(f"Resuming conversation with human input: {new_input}")
+                                return logged_send(sender, original_send, new_input, recipient, request_reply, silent)
+                        logging.info("Pause timeout, ending conversation")
+                        raise Exception("Conversation ended due to pause timeout")
+                    logging.info(f"Human input to {recipient.name}: {human_input}")
+                    return original_send(human_input, recipient, request_reply, silent)
+
+
             logging.info(f"Message from {sender.name} to {recipient.name}: {message}")
+
             message_count += 1
             progress = min(message_count / total_messages, 1)
             all_messages.append({"sender": sender.name, "message": message})
+
+            # if sender.name == "assistant":
             publish_request_progress({
                 "requestId": request_id,
                 "progress": progress,
@@ -192,6 +263,9 @@
         #summary_method="reflection_with_llm", "last_msg"
         chat_result = user_proxy.initiate_chat(assistant, message=message, summary_method="reflection_with_llm", summary_args={"summary_role": "user", "summary_prompt": summary_prompt})
 
+
+        zip_url = zip_and_upload_tmp_folder(temp_dir)
+
         msg = ""
         try:
             msg = all_messages[-1 if all_messages[-2]["message"] else -3]["message"]
@@ -201,6 +275,7 @@
             msg = f"Finished, with errors 🤖 ... {e}"
 
         msg = chat_result.summary if chat_result.summary else msg
+        msg += f"\n\n[Download all files of this task]({zip_url})"
 
         finalData = {
             "requestId": request_id,
@@ -224,5 +299,19 @@
         publish_request_progress({
             "requestId": request_id,
             "progress": 1,
-            "error": str(e)
-        })
+            "error": str(e),
+            "data": str(e),
+        })
+        store_in_mongo({
+            "requestId": request_id,
+            "requestMessage": message_data.get("message"),
+            "progress": 1,
+            "error": str(e),
+            "data": str(e),
+            "contextId": message_data.get("contextId"),
+            "conversation": all_messages,
+            "createdAt": datetime.now(timezone.utc).isoformat(),
+            "insertionTime": original_request_message.insertion_time.astimezone(timezone.utc).isoformat() if original_request_message else None,
+            "startedAt": started_at.astimezone(timezone.utc).isoformat(),
+        })
+
package/helper-apps/cortex-autogen/prompt_summary.txt CHANGED
@@ -4,10 +4,6 @@ Avoid expressing gratitude or using pleasantries.
 Maintain a professional and direct tone throughout responses.
 Include most recent meaningful messages from the conversation in the summary.
 You must include all your uploaded URLs, and url of your uploaded final code URL.
-Reply must be in markdown format, including images and videos as UI can show markdown directly to user in a nice way, so make sure to include all visuals, you may do as follows:
-For images: ![Alt Text](IMAGE_URL)
-For videos: <video src="VIDEO_URL" controls></video>
-For urls: [Link Text](URL)
 Your reply will be only thing that finally gets to surface so make sure it is complete.
 Do not mention words like "Summary of the conversation", "Response", "Task", "The conversation" or so as it doesn't makes sense.
 Also no need for "Request", user already know its request and task.
@@ -24,5 +20,18 @@ No need to say none of this as user already 'll be aware as has got the result:
 - Script executed twice due to debugging environment ...
 - Verification code ...
 - Issues encountered and resolved: ...
+- The original plan ...
+- Performed at ...
 
+No need to mention about code files uploaded to Azure Blob or point URLs as SAS-URLS as its a task that you already do and is known.
+No need to mention SAS URL, just give the url itself.
+Never include TERMINATE in your response.
 
+When formulating your responses, it's crucial to leverage the full capabilities of markdown formatting to create rich, visually appealing content. This approach not only enhances the user experience but also allows for more effective communication of complex ideas. Follow these guidelines to ensure your responses are both informative and visually engaging, create responses that are not only informative but also visually appealing and easy to comprehend:
+- For images: ![Alt Text](IMAGE_URL) and <img src="IMAGE_URL" alt="Alt Text">
+- For videos: <video src="VIDEO_URL" controls></video>
+- For urls: [Link Text](URL)
+If there an image url, you must always include it as url and markdown e.g.: ![Alt Text](IMAGE_URL) and [Alt Text](IMAGE_URL)
+
+
+Make sure to present it nicely so human finds it appealing.
package/helper-apps/cortex-autogen/requirements.txt CHANGED
@@ -1,6 +1,6 @@
 azure-storage-queue
 azure-functions
-pyautogen
+pyautogen==0.3.0
 redis
 pymongo
 requests