@aj-archipelago/cortex 1.1.32 → 1.1.34
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +98 -1
- package/config/dynamicPathwaysConfig.example.json +4 -0
- package/config.js +83 -10
- package/helper-apps/cortex-autogen/function_app.py +8 -4
- package/helper-apps/cortex-autogen/main.py +1 -1
- package/helper-apps/cortex-autogen/myautogen.py +187 -28
- package/helper-apps/cortex-autogen/prompt_summary.txt +37 -0
- package/helper-apps/cortex-autogen/requirements.txt +4 -2
- package/helper-apps/cortex-autogen/tools/sasfileuploader.py +66 -0
- package/helper-apps/cortex-file-handler/package-lock.json +387 -203
- package/helper-apps/cortex-file-handler/package.json +3 -3
- package/helper-apps/cortex-whisper-wrapper/.dockerignore +1 -0
- package/helper-apps/cortex-whisper-wrapper/app.py +3 -1
- package/helper-apps/cortex-whisper-wrapper/requirements.txt +1 -1
- package/lib/pathwayManager.js +373 -0
- package/lib/pathwayTools.js +52 -7
- package/lib/requestExecutor.js +19 -15
- package/lib/util.js +4 -2
- package/package.json +5 -1
- package/pathways/code_human_input.js +47 -0
- package/pathways/dynamic/pathways.json +1 -0
- package/pathways/flux_image.js +12 -0
- package/pathways/index.js +4 -0
- package/pathways/styleguide/styleguide.js +1 -0
- package/pathways/timeline.js +1 -0
- package/server/chunker.js +6 -1
- package/server/graphql.js +67 -37
- package/server/modelExecutor.js +4 -0
- package/server/pathwayResolver.js +9 -5
- package/server/plugins/claude3VertexPlugin.js +86 -79
- package/server/plugins/gemini15VisionPlugin.js +23 -12
- package/server/plugins/geminiVisionPlugin.js +32 -25
- package/server/plugins/modelPlugin.js +15 -2
- package/server/plugins/openAiChatPlugin.js +1 -1
- package/server/plugins/openAiVisionPlugin.js +16 -4
- package/server/plugins/runwareAIPlugin.js +81 -0
- package/server/rest.js +90 -45
- package/server/typeDef.js +33 -15
- package/tests/chunkfunction.test.js +15 -1
- package/tests/claude3VertexPlugin.test.js +1 -1
- package/tests/multimodal_conversion.test.js +328 -0
- package/tests/vision.test.js +20 -5
- package/helper-apps/cortex-autogen/sasfileuploader.py +0 -93
package/README.md
CHANGED
@@ -432,7 +432,7 @@ Configuration of Cortex is done via a [convict](https://github.com/mozilla/node-
 - `PORT`: The port number for the Cortex server. Default is 4000. The value can be set using the `CORTEX_PORT` environment variable.
 - `storageConnectionString`: The connection string used for accessing storage. This is sensitive information and has no default value. The value can be set using the `STORAGE_CONNECTION_STRING` environment variable.
 
-The `buildPathways` function takes the config object and builds the `pathways`
+The `buildPathways` function takes the config object and builds the `pathways` and `pathwayManager` objects by loading the core pathways and any custom pathways specified in the `pathwaysPath` property of the config object. The function returns the `pathways` and `pathwayManager` objects.
 
 The `buildModels` function takes the `config` object and builds the `models` object by compiling handlebars templates for each model specified in the `models` property of the config object. The function returns the `models` object.
 
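The updated paragraph above changes the documented return shape of `buildPathways` from a bare `pathways` object to `{ pathwayManager, pathways }` (the code change appears in the config.js hunks below). A minimal sketch of a hypothetical caller; the import path and export names are assumptions, as only the return shape and the async signature are shown in this diff:

```js
// Hypothetical startup sketch: consuming the new buildPathways return shape.
// Import path/names are assumptions; only { pathwayManager, pathways } and
// the async signature are confirmed by this diff.
import { config, buildPathways, buildModels } from './config.js';

const { pathwayManager, pathways } = await buildPathways(config);
const models = buildModels(config);

console.log(`Loaded ${Object.keys(pathways).length} pathways and ${Object.keys(models).length} models`);
if (!pathwayManager) {
  // createDynamicPathwayManager returns null when no dynamic pathway config
  // is provided, so callers must handle the absence case.
  console.log('Dynamic pathways disabled');
}
```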
@@ -471,3 +471,100 @@ Cortex is a constantly evolving project, and the following features are coming s
 * Prompt execution context preservation between calls (to enable interactive, multi-call integrations with LangChain and other technologies)
 * Model-specific cache key optimizations to increase hit rate and reduce cache size
 * Structured analytics and reporting on AI API call frequency, cost, cache hit rate, etc.
+
+## Dynamic Pathways
+
+Cortex supports dynamic pathways, which allow for the creation and management of pathways at runtime. This feature enables users to define custom pathways without modifying the core Cortex codebase.
+
+### How It Works
+
+1. Dynamic pathways are stored either locally or in cloud storage (Azure Blob Storage or AWS S3).
+2. The `PathwayManager` class handles loading, saving, and managing these dynamic pathways.
+3. Dynamic pathways can be added, updated, or removed via GraphQL mutations.
+
+### Configuration
+
+To use dynamic pathways, you need to provide a JSON configuration file or a JSON string. There are two ways to specify this configuration:
+
+1. Using a configuration file:
+   Set the `DYNAMIC_PATHWAYS_CONFIG_FILE` environment variable to the path of your JSON configuration file.
+
+2. Using a JSON string:
+   Set the `DYNAMIC_PATHWAYS_CONFIG_JSON` environment variable with the JSON configuration as a string.
+
+The configuration should include the following properties:
+
+```json
+{
+  "storageType": "local" | "azure" | "s3",
+  "filePath": "./dynamic/pathways.json", // Only for local storage
+  "azureStorageConnectionString": "your_connection_string", // Only for Azure
+  "azureContainerName": "cortexdynamicpathways", // Optional, default is "cortexdynamicpathways"
+  "awsAccessKeyId": "your_access_key_id", // Only for AWS S3
+  "awsSecretAccessKey": "your_secret_access_key", // Only for AWS S3
+  "awsRegion": "your_aws_region", // Only for AWS S3
+  "awsBucketName": "cortexdynamicpathways" // Optional, default is "cortexdynamicpathways"
+}
+```
+
+### Storage Options
+
+1. Local Storage (default):
+   - Set `storageType` to `"local"`
+   - Specify `filePath` for the local JSON file (default: "./dynamic/pathways.json")
+
+2. Azure Blob Storage:
+   - Set `storageType` to `"azure"`
+   - Provide `azureStorageConnectionString`
+   - Optionally set `azureContainerName` (default: "cortexdynamicpathways")
+
+3. AWS S3:
+   - Set `storageType` to `"s3"`
+   - Provide `awsAccessKeyId`, `awsSecretAccessKey`, and `awsRegion`
+   - Optionally set `awsBucketName` (default: "cortexdynamicpathways")
+
+### Usage
+
+Dynamic pathways can be managed through GraphQL mutations. Here are the available operations:
+
+1. Adding or updating a pathway:
+
+```graphql
+mutation PutPathway($name: String!, $pathway: PathwayInput!, $userId: String!, $secret: String!, $displayName: String, $key: String!) {
+  putPathway(name: $name, pathway: $pathway, userId: $userId, secret: $secret, displayName: $displayName, key: $key) {
+    name
+  }
+}
+```
+
+2. Deleting a pathway:
+
+```graphql
+mutation DeletePathway($name: String!, $userId: String!, $secret: String!, $key: String!) {
+  deletePathway(name: $name, userId: $userId, secret: $secret, key: $key)
+}
+```
+
+3. Executing a dynamic pathway:
+
+```graphql
+query ExecuteWorkspace($userId: String!, $pathwayName: String!, $text: String!) {
+  executeWorkspace(userId: $userId, pathwayName: $pathwayName, text: $text) {
+    result
+  }
+}
+```
+
+### Security
+
+To ensure the security of dynamic pathways:
+
+1. A `PATHWAY_PUBLISH_KEY` environment variable must be set to enable pathway publishing.
+2. This key must be provided in the `key` parameter when adding, updating, or deleting pathways.
+3. Each pathway is associated with a `userId` and `secret`. The secret must be provided to modify or delete an existing pathway.
+
+### Synchronization across multiple instances
+
+Each instance of Cortex maintains its own local cache of pathways. On every dynamic pathway request, it checks if the local cache is up to date by comparing the last modified timestamp of the storage with the last update time of the local cache. If the local cache is out of date, it reloads the pathways from storage.
+
+This approach ensures that all instances of Cortex will eventually have access to the most up-to-date dynamic pathways without requiring immediate synchronization.
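The GraphQL operations documented above can be exercised from any client. A minimal Node sketch, assuming a local Cortex server on the default port 4000 with a `/graphql` endpoint (the endpoint path is an assumption) and `PATHWAY_PUBLISH_KEY` set on both sides; the `PathwayInput` payload shape is also an assumption:

```js
// Sketch: publish a dynamic pathway, then execute it. The endpoint path and
// the PathwayInput shape are assumptions; the operations match the README above.
const endpoint = 'http://localhost:4000/graphql'; // assumed

async function gql(query, variables) {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  });
  return res.json();
}

await gql(
  `mutation PutPathway($name: String!, $pathway: PathwayInput!, $userId: String!, $secret: String!, $key: String!) {
    putPathway(name: $name, pathway: $pathway, userId: $userId, secret: $secret, key: $key) { name }
  }`,
  {
    name: 'summarize',
    pathway: { prompt: ['Summarize: {{text}}'] }, // assumed PathwayInput shape
    userId: 'user-1',
    secret: 'my-pathway-secret',
    key: process.env.PATHWAY_PUBLISH_KEY,
  }
);

const { data } = await gql(
  `query ExecuteWorkspace($userId: String!, $pathwayName: String!, $text: String!) {
    executeWorkspace(userId: $userId, pathwayName: $pathwayName, text: $text) { result }
  }`,
  { userId: 'user-1', pathwayName: 'summarize', text: 'Some long text' }
);
console.log(data.executeWorkspace.result);
```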
package/config.js
CHANGED
@@ -5,18 +5,19 @@ import fs from 'fs';
 import { fileURLToPath, pathToFileURL } from 'url';
 import GcpAuthTokenHelper from './lib/gcpAuthTokenHelper.js';
 import logger from './lib/logger.js';
+import PathwayManager from './lib/pathwayManager.js';
 
 const __dirname = path.dirname(fileURLToPath(import.meta.url));
 
 convict.addFormat({
   name: 'string-array',
-  validate: function(val) {
-    if (!Array.isArray(val)) {
-      throw new Error('must be of type Array');
-    }
+  validate: function (val) {
+    if (!Array.isArray(val)) {
+      throw new Error('must be of type Array');
+    }
   },
-  coerce: function(val) {
-    return val.split(',');
+  coerce: function (val) {
+    return val.split(',');
   },
 });
 
@@ -194,6 +195,13 @@ var config = convict({
       "requestsPerSecond": 10,
       "maxTokenLength": 200000
     },
+    "runware-flux-schnell": {
+      "type": "RUNWARE-AI",
+      "url": "https://api.runware.ai/v1",
+      "headers": {
+        "Content-Type": "application/json"
+      },
+    },
   },
   env: 'CORTEX_MODELS'
 },
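The new `runware-flux-schnell` model entry above (together with the `runwareAiApiKey` setting in the next hunk and the new `runwareAIPlugin.js`) is what `package/pathways/flux_image.js` builds on in this release. A hypothetical pathway sketch targeting the model by name; the field names follow the usual Cortex pathway convention but are assumptions here, not a copy of the shipped `flux_image.js`:

```js
// Hypothetical pathway module routed to the new Runware model.
// A sketch only; see package/pathways/flux_image.js for the shipped pathway.
export default {
  prompt: `{{text}}`,            // image prompt passed through to the model
  model: 'runware-flux-schnell', // key registered in config.js above
};
```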
@@ -240,6 +248,12 @@ var config = convict({
     env: 'REDIS_ENCRYPTION_KEY',
     sensitive: true
   },
+  runwareAiApiKey: {
+    format: String,
+    default: null,
+    env: 'RUNWARE_API_KEY',
+    sensitive: true
+  },
   dalleImageApiUrl: {
     format: String,
     default: 'null',
@@ -290,6 +304,39 @@ if (config.get('gcpServiceAccountKey')) {
   config.set('gcpAuthTokenHelper', gcpAuthTokenHelper);
 }
 
+// Load dynamic pathways from JSON file or cloud storage
+const createDynamicPathwayManager = async (config, basePathway) => {
+  const { dynamicPathwayConfig } = config.getProperties();
+
+  if (!dynamicPathwayConfig) {
+    return null;
+  }
+
+  const storageConfig = {
+    storageType: dynamicPathwayConfig.storageType || 'local',
+    filePath: dynamicPathwayConfig.filePath || "./dynamic/pathways.json",
+    azureStorageConnectionString: dynamicPathwayConfig.azureStorageConnectionString,
+    azureContainerName: dynamicPathwayConfig.azureContainerName || 'cortexdynamicpathways',
+    awsAccessKeyId: dynamicPathwayConfig.awsAccessKeyId,
+    awsSecretAccessKey: dynamicPathwayConfig.awsSecretAccessKey,
+    awsRegion: dynamicPathwayConfig.awsRegion,
+    awsBucketName: dynamicPathwayConfig.awsBucketName || 'cortexdynamicpathways',
+  };
+
+  const pathwayManager = new PathwayManager(storageConfig, basePathway);
+
+  try {
+    const dynamicPathways = await pathwayManager.initialize();
+    logger.info(`Dynamic pathways loaded successfully`);
+    logger.info(`Loaded dynamic pathways for users: [${Object.keys(dynamicPathways).join(", ")}]`);
+
+    return pathwayManager;
+  } catch (error) {
+    logger.error(`Error loading dynamic pathways: ${error.message}`);
+    return pathwayManager;
+  }
+};
+
 // Build and load pathways to config
 const buildPathways = async (config) => {
   const { pathwaysPath, corePathwaysPath, basePathwayPath } = config.getProperties();
@@ -312,6 +359,32 @@ const buildPathways = async (config) => {
     loadedPathways = { ...loadedPathways, ...customPathways };
   }
 
+
+  const { DYNAMIC_PATHWAYS_CONFIG_FILE, DYNAMIC_PATHWAYS_CONFIG_JSON } = process.env;
+
+  let dynamicPathwayConfig;
+
+  // Load dynamic pathways
+  let pathwayManager;
+  try {
+    if (DYNAMIC_PATHWAYS_CONFIG_FILE) {
+      logger.info(`Reading dynamic pathway config from ${DYNAMIC_PATHWAYS_CONFIG_FILE}`);
+      dynamicPathwayConfig = JSON.parse(fs.readFileSync(DYNAMIC_PATHWAYS_CONFIG_FILE, 'utf8'));
+    } else if (DYNAMIC_PATHWAYS_CONFIG_JSON) {
+      logger.info(`Reading dynamic pathway config from DYNAMIC_PATHWAYS_CONFIG_JSON variable`);
+      dynamicPathwayConfig = JSON.parse(DYNAMIC_PATHWAYS_CONFIG_JSON);
+    }
+    else {
+      logger.warn('Dynamic pathways are not enabled. Please set the DYNAMIC_PATHWAYS_CONFIG_FILE or DYNAMIC_PATHWAYS_CONFIG_JSON environment variable to enable dynamic pathways.');
+    }
+
+    config.load({ dynamicPathwayConfig });
+    pathwayManager = await createDynamicPathwayManager(config, basePathway);
+  } catch (error) {
+    logger.error(`Error loading dynamic pathways: ${error.message}`);
+    process.exit(1);
+  }
+
   // This is where we integrate pathway overrides from the config
   // file. This can run into a partial definition issue if the
   // config file contains pathways that no longer exist.
@@ -322,9 +395,9 @@ const buildPathways = async (config) => {
   }
 
   // Add pathways to config
-  config.load({ pathways })
+  config.load({ pathways });
 
-  return pathways;
+  return { pathwayManager, pathways };
 }
 
 // Build and load models to config
@@ -336,7 +409,7 @@ const buildModels = (config) => {
     if (!model.name) {
       model.name = key;
     }
-
+
     // if model is in old format, convert it to new format
     if (!model.endpoints) {
       model = {
@@ -354,7 +427,7 @@ const buildModels = (config) => {
   }
 
   // compile handlebars templates for each endpoint
-  model.endpoints = model.endpoints.map(endpoint => 
+  model.endpoints = model.endpoints.map(endpoint =>
     JSON.parse(HandleBars.compile(JSON.stringify(endpoint))({ ...model, ...config.getEnv(), ...config.getProperties() }))
   );
 
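The context lines above show how model endpoint definitions are Handlebars templates compiled against the merged model, environment, and config properties. A small self-contained sketch of that mechanism; the endpoint shape and the `{{runwareAiApiKey}}` placeholder are hypothetical, though the key mirrors the config setting added earlier in this diff:

```js
// Sketch of the endpoint templating shown above: the endpoint JSON is treated
// as a Handlebars template and rendered with config/env values.
import HandleBars from 'handlebars';

const endpoint = {
  url: 'https://api.runware.ai/v1',
  headers: { Authorization: 'Bearer {{runwareAiApiKey}}' }, // hypothetical header
};

const props = { runwareAiApiKey: 'rw-123' }; // stand-in for config.getProperties()
const compiled = JSON.parse(HandleBars.compile(JSON.stringify(endpoint))(props));
console.log(compiled.headers.Authorization); // "Bearer rw-123"
```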
package/helper-apps/cortex-autogen/function_app.py
CHANGED

@@ -1,13 +1,17 @@
 import azure.functions as func
 import logging
 import json
-import autogen
-from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
 from azure.storage.queue import QueueClient
 import os
-import tempfile
 import redis
 from myautogen import process_message
+import subprocess
+import sys
+
+def install_packages():
+    subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", "requirements.txt"])
+
+install_packages()
 
 app = func.FunctionApp()
 
@@ -26,7 +30,7 @@ def queue_trigger(msg: func.QueueMessage):
         message_data = json.loads(msg.get_body().decode('utf-8'))
         if "requestId" not in message_data:
             message_data['requestId'] = msg.id
-        process_message(message_data)
+        process_message(message_data, msg)
 
     except Exception as e:
         logging.error(f"Error processing message: {str(e)}")
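For reference, the queue payload that `queue_trigger` above hands to `process_message` (and that `main.py` and `myautogen.py` below also read) carries a `message` plus optional `requestId` and `contextId` fields. A hedged Node sketch of the producer side; the request queue name is an assumption, while the payload fields are taken from the Python handlers in this diff:

```js
// Sketch: enqueue a cortex-autogen task. Queue name is an assumption;
// the payload fields mirror what the Python handlers in this diff read.
import { QueueClient } from '@azure/storage-queue';

const queue = new QueueClient(
  process.env.AZURE_STORAGE_CONNECTION_STRING,
  'autogen-request-queue' // assumed queue name
);

const payload = {
  requestId: 'req-123', // optional; defaults to the queue message id
  contextId: 'ctx-456', // optional; echoed back in progress events
  message: 'Plot the last 7 days of AAPL and upload the chart.',
};

// Azure Functions queue triggers expect base64-encoded message bodies.
await queue.sendMessage(Buffer.from(JSON.stringify(payload)).toString('base64'));
```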
package/helper-apps/cortex-autogen/main.py
CHANGED

@@ -25,7 +25,7 @@ def main():
                 message_data = json.loads(decoded_content)
                 if "requestId" not in message_data:
                     message_data['requestId'] = message.id
-                process_message(message_data)
+                process_message(message_data, message)
                 queue_client.delete_message(message)
                 attempts = 0  # Reset attempts if a message was processed
             else:
package/helper-apps/cortex-autogen/myautogen.py
CHANGED

@@ -2,7 +2,7 @@ import azure.functions as func
 import logging
 import json
 import autogen
-from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
+from autogen import AssistantAgent, UserProxyAgent, config_list_from_json, register_function
 from azure.storage.queue import QueueClient
 import os
 import tempfile
@@ -10,14 +10,48 @@ import redis
 from dotenv import load_dotenv
 import requests
 import pathlib
+import pymongo
+import logging
+from datetime import datetime, timezone, timedelta
+import shutil
+import time
+import base64
+import zipfile
+from azure.storage.blob import BlobServiceClient, generate_blob_sas, BlobSasPermissions
 
 load_dotenv()
 
-app = func.FunctionApp()
-
 connection_string = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
-
-
+human_input_queue_name = os.environ.get("HUMAN_INPUT_QUEUE_NAME", "autogen-human-input-queue")
+human_input_queue_client = QueueClient.from_connection_string(connection_string, human_input_queue_name)
+
+def check_for_human_input(request_id):
+    messages = human_input_queue_client.receive_messages()
+    for message in messages:
+        content = json.loads(base64.b64decode(message.content).decode('utf-8'))
+        if content['codeRequestId'] == request_id:
+            human_input_queue_client.delete_message(message)
+            return content['text']
+    return None
+
+DEFAULT_SUMMARY_PROMPT = "Summarize the takeaway from the conversation. Do not add any introductory phrases."
+try:
+    with open("prompt_summary.txt", "r") as file:
+        summary_prompt = file.read() or DEFAULT_SUMMARY_PROMPT
+except FileNotFoundError:
+    summary_prompt = DEFAULT_SUMMARY_PROMPT
+
+
+def store_in_mongo(data):
+    try:
+        if 'MONGO_URI' in os.environ:
+            client = pymongo.MongoClient(os.environ['MONGO_URI'])
+            collection = client.get_default_database()[os.environ.get('MONGO_COLLECTION_NAME', 'autogenruns')]
+            collection.insert_one(data)
+        else:
+            logging.warning("MONGO_URI not found in environment variables")
+    except Exception as e:
+        logging.error(f"An error occurred while storing data in MongoDB: {str(e)}")
 
 redis_client = redis.from_url(os.environ['REDIS_CONNECTION_STRING'])
 channel = 'requestProgress'
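`check_for_human_input` above polls a control queue for base64-encoded JSON messages whose `codeRequestId` matches the running request; the `text` values `TERMINATE` and `PAUSE` get special handling in `logged_send` (see the `@@ -118,15 +196,60 @@` hunk below). A minimal Node sketch of the sending side; this is presumably what the new `package/pathways/code_human_input.js` does, but beyond the queue name and message shape shown in the Python code, the details here are assumptions:

```js
// Sketch: send human input (or a TERMINATE / PAUSE control message) to a
// running cortex-autogen task. Queue name matches the HUMAN_INPUT_QUEUE_NAME
// default in myautogen.py; the rest is an assumption.
import { QueueClient } from '@azure/storage-queue';

const queue = new QueueClient(
  process.env.AZURE_STORAGE_CONNECTION_STRING,
  'autogen-human-input-queue'
);

async function sendHumanInput(codeRequestId, text) {
  // myautogen.py base64-decodes the message content before JSON-parsing it.
  const body = JSON.stringify({ codeRequestId, text });
  await queue.sendMessage(Buffer.from(body).toString('base64'));
}

await sendHumanInput('req-123', 'Use matplotlib instead of plotly.');
// await sendHumanInput('req-123', 'PAUSE');     // pause; resumes on next input
// await sendHumanInput('req-123', 'TERMINATE'); // abort the conversation
```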
@@ -35,7 +69,7 @@ def publish_request_progress(data):
     if connect_redis():
         try:
             message = json.dumps(data)
-            logging.info(f"Publishing message {message} to channel {channel}")
+            #logging.info(f"Publishing message {message} to channel {channel}")
             redis_client.publish(channel, message)
         except Exception as e:
             logging.error(f"Error publishing message: {e}")
@@ -72,45 +106,89 @@ def fetch_from_url(url):
         logging.error(f"Error fetching from URL: {e}")
         return ""
 
-
-def process_message(message_data):
+
+def zip_and_upload_tmp_folder(temp_dir):
+    zip_path = os.path.join(temp_dir, "tmp_contents.zip")
+    with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
+        for root, _, files in os.walk(temp_dir):
+            for file in files:
+                file_path = os.path.join(root, file)
+                arcname = os.path.relpath(file_path, temp_dir)
+                zipf.write(file_path, arcname)
+
+    blob_service_client = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])
+    container_name = os.environ.get("AZURE_BLOB_CONTAINER", "autogen-uploads")
+    blob_name = f"tmp_contents_{datetime.now(timezone.utc).strftime('%Y%m%d%H%M%S')}.zip"
+    blob_client = blob_service_client.get_blob_client(container=container_name, blob=blob_name)
+
+    with open(zip_path, "rb") as data:
+        blob_client.upload_blob(data)
+
+    account_key = blob_service_client.credential.account_key
+    account_name = blob_service_client.account_name
+    expiry = datetime.now(timezone.utc) + timedelta(hours=1)
+
+    sas_token = generate_blob_sas(
+        account_name,
+        container_name,
+        blob_name,
+        account_key=account_key,
+        permission=BlobSasPermissions(read=True),
+        expiry=expiry
+    )
+
+    return f"{blob_client.url}?{sas_token}"
+
+def process_message(message_data, original_request_message):
+    # logging.info(f"Processing Message: {message_data}")
     try:
+        started_at = datetime.now()
         message = message_data['message']
         request_id = message_data.get('requestId') or msg.id
 
         config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
         base_url = os.environ.get("CORTEX_API_BASE_URL")
         api_key = os.environ.get("CORTEX_API_KEY")
-        llm_config = {"config_list": config_list, "base_url": base_url, "api_key": api_key, "cache_seed": None}
+        llm_config = {"config_list": config_list, "base_url": base_url, "api_key": api_key, "cache_seed": None, "timeout": 600}
 
         with tempfile.TemporaryDirectory() as temp_dir:
+            #copy /tools directory to temp_dir
+            shutil.copytree(os.path.join(os.getcwd(), "tools"), temp_dir, dirs_exist_ok=True)
+
             code_executor = autogen.coding.LocalCommandLineCodeExecutor(work_dir=temp_dir)
 
             message_count = 0
             total_messages = 20 * 2
             all_messages = []
 
+            terminate_count = 0
             def is_termination_msg(m):
-
-
+                nonlocal terminate_count
+                content = m.get("content", "").strip()
+                if not content:
                     return False
-
-
-
-                    len(content.strip()) == 0
+                if content.rstrip().endswith("TERMINATE"):
+                    terminate_count += 1
+                return terminate_count >= 3 or "first message must use the" in content.lower()
 
             system_message_given = get_given_system_message()
             system_message_assistant = AssistantAgent.DEFAULT_SYSTEM_MESSAGE
 
             if system_message_given:
-                system_message_assistant = 
+                system_message_assistant = system_message_given
             else:
                 print("No extra system message given for assistant")
 
-            assistant = AssistantAgent("assistant",
-
+            assistant = AssistantAgent("assistant",
+                llm_config=llm_config,
+                system_message=system_message_assistant,
+                # code_execution_config={"executor": code_executor},
+                is_termination_msg=is_termination_msg,
+            )
+
             user_proxy = UserProxyAgent(
                 "user_proxy",
+                # llm_config=llm_config,
                 system_message=system_message_given,
                 code_execution_config={"executor": code_executor},
                 human_input_mode="NEVER",
@@ -118,15 +196,60 @@ def process_message(message_data):
                 is_termination_msg=is_termination_msg,
             )
 
+            # description = "Upload a file to Azure Blob Storage and get URL back with a SAS token. Requires AZURE_STORAGE_CONNECTION_STRING and AZURE_BLOB_CONTAINER environment variables. Input: file_path (str). Output: SAS URL (str) or error message."
+
+            # register_function(
+            #     autogen_sas_uploader,
+            #     caller=assistant,
+            #     executor=user_proxy,
+            #     name="autogen_sas_uploader",
+            #     description=description,
+            # )
+
+            # register_function(
+            #     autogen_sas_uploader,
+            #     caller=user_proxy,
+            #     executor=assistant,
+            #     name="autogen_sas_uploader",
+            #     description=description,
+            # )
+
             original_assistant_send = assistant.send
             original_user_proxy_send = user_proxy.send
 
             def logged_send(sender, original_send, message, recipient, request_reply=None, silent=True):
                 nonlocal message_count, all_messages
+                if not message:
+                    return
+
+                if True or sender.name == "user_proxy":
+                    human_input = check_for_human_input(request_id)
+                    if human_input:
+                        if human_input == "TERMINATE":
+                            logging.info("Terminating conversation")
+                            raise Exception("Conversation terminated by user")
+                        elif human_input == "PAUSE":
+                            logging.info("Pausing conversation")
+                            pause_start = time.time()
+                            while time.time() - pause_start < 60*15: # 15 minutes pause timeout
+                                time.sleep(10)
+                                new_input = check_for_human_input(request_id)
+                                if new_input:
+                                    logging.info(f"Resuming conversation with human input: {new_input}")
+                                    return logged_send(sender, original_send, new_input, recipient, request_reply, silent)
+                            logging.info("Pause timeout, ending conversation")
+                            raise Exception("Conversation ended due to pause timeout")
+                        logging.info(f"Human input to {recipient.name}: {human_input}")
+                        return original_send(human_input, recipient, request_reply, silent)
+
+
                 logging.info(f"Message from {sender.name} to {recipient.name}: {message}")
+
                 message_count += 1
                 progress = min(message_count / total_messages, 1)
                 all_messages.append({"sender": sender.name, "message": message})
+
+                # if sender.name == "assistant":
                 publish_request_progress({
                     "requestId": request_id,
                     "progress": progress,
@@ -134,19 +257,41 @@ def process_message(message_data):
                 })
                 return original_send(message, recipient, request_reply, silent)
 
-            assistant.send = lambda message, recipient, request_reply=None, silent=
-            user_proxy.send = lambda message, recipient, request_reply=None, silent=
+            assistant.send = lambda message, recipient, request_reply=None, silent=False: logged_send(assistant, original_assistant_send, message, recipient, request_reply, silent)
+            user_proxy.send = lambda message, recipient, request_reply=None, silent=False: logged_send(user_proxy, original_user_proxy_send, message, recipient, request_reply, silent)
 
-            user_proxy.initiate_chat(assistant, message=message)
+            #summary_method="reflection_with_llm", "last_msg"
+            chat_result = user_proxy.initiate_chat(assistant, message=message, summary_method="reflection_with_llm", summary_args={"summary_role": "user", "summary_prompt": summary_prompt})
 
-            msg = all_messages[-3]["message"] if len(all_messages) >= 3 else ""
-            logging.info(f"####Final message: {msg}")
 
-            publish_request_progress({
+            zip_url = zip_and_upload_tmp_folder(temp_dir)
+
+            msg = ""
+            try:
+                msg = all_messages[-1 if all_messages[-2]["message"] else -3]["message"]
+                logging.info(f"####Final message: {msg}")
+            except Exception as e:
+                logging.error(f"Error getting final message: {e}")
+                msg = f"Finished, with errors 🤖 ... {e}"
+
+            msg = chat_result.summary if chat_result.summary else msg
+            msg += f"\n\n[Download all files of this task]({zip_url})"
+
+            finalData = {
                 "requestId": request_id,
+                "requestMessage": message_data.get("message"),
                 "progress": 1,
-                "data": msg
-            })
+                "data": msg,
+                "contextId": message_data.get("contextId"),
+                "conversation": all_messages,
+                "createdAt": datetime.now(timezone.utc).isoformat(),
+                "insertionTime": original_request_message.insertion_time.astimezone(timezone.utc).isoformat() if original_request_message else None,
+                "startedAt": started_at.astimezone(timezone.utc).isoformat(),
+            }
+
+            # Final message to indicate completion
+            publish_request_progress(finalData)
+            store_in_mongo(finalData)
 
 
     except Exception as e:
         logging.error(f"Error processing message: {str(e)}")
@@ -154,5 +299,19 @@ def process_message(message_data):
         publish_request_progress({
             "requestId": request_id,
             "progress": 1,
-            "error": str(e)
-        })
+            "error": str(e),
+            "data": str(e),
+        })
+        store_in_mongo({
+            "requestId": request_id,
+            "requestMessage": message_data.get("message"),
+            "progress": 1,
+            "error": str(e),
+            "data": str(e),
+            "contextId": message_data.get("contextId"),
+            "conversation": all_messages,
+            "createdAt": datetime.now(timezone.utc).isoformat(),
+            "insertionTime": original_request_message.insertion_time.astimezone(timezone.utc).isoformat() if original_request_message else None,
+            "startedAt": started_at.astimezone(timezone.utc).isoformat(),
+        })
+
package/helper-apps/cortex-autogen/prompt_summary.txt
ADDED

@@ -0,0 +1,37 @@
+Provide a detailed summary of the conversation, including key points, decisions, action items, and so on.
+Do not add any introductory phrases.
+Avoid expressing gratitude or using pleasantries.
+Maintain a professional and direct tone throughout responses.
+Include the most recent meaningful messages from the conversation in the summary.
+You must include all of your uploaded URLs, including the URL of your uploaded final code.
+Your reply will be the only thing that finally gets surfaced, so make sure it is complete.
+Do not use phrases like "Summary of the conversation", "Response", "Task", or "The conversation", as they don't make sense here.
+Likewise, there is no need for "Request"; the user already knows it is their request and task.
+Be as detailed as possible without being annoying.
+Start with the result, as that is the most important part; do not write "Result", as the user already knows it is the result.
+There is no need to give information about generated SAS URLs; just include them, and only include the latest version of the same file.
+There is no need to say any of this, as the user will already be aware once they have the result:
+- Code executed successfully, producing correct result ...
+- File uploaded to Azure Blob Storage with unique timestamp ...
+- SAS URL generated for file access, valid for ...
+- File accessibility verified ...
+- Code execution details ...
+- Current date and time ...
+- Script executed twice due to debugging environment ...
+- Verification code ...
+- Issues encountered and resolved: ...
+- The original plan ...
+- Performed at ...
+
+There is no need to mention that code files were uploaded to Azure Blob or to label URLs as SAS URLs; that is a task you already perform and is known.
+There is no need to mention "SAS URL"; just give the URL itself.
+Never include TERMINATE in your response.
+
+When formulating your responses, it's crucial to leverage the full capabilities of markdown formatting to create rich, visually appealing content. This approach not only enhances the user experience but also allows for more effective communication of complex ideas. Follow these guidelines to ensure your responses are informative, visually appealing, and easy to comprehend:
+- For images: ![Alt Text](IMAGE_URL) and <img src="IMAGE_URL" alt="Alt Text">
+- For videos: <video src="VIDEO_URL" controls></video>
+- For URLs: [Link Text](URL)
+If there is an image URL, you must always include it both as markdown and as a plain link, e.g. ![Alt Text](IMAGE_URL) and [Alt Text](IMAGE_URL).
+
+
+Make sure to present it nicely so that a human finds it appealing.