@aj-archipelago/cortex 1.3.62 → 1.3.63

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (211)
  1. package/.github/workflows/cortex-file-handler-test.yml +61 -0
  2. package/README.md +31 -7
  3. package/config/default.example.json +15 -0
  4. package/config.js +133 -12
  5. package/helper-apps/cortex-autogen2/DigiCertGlobalRootCA.crt.pem +22 -0
  6. package/helper-apps/cortex-autogen2/Dockerfile +31 -0
  7. package/helper-apps/cortex-autogen2/Dockerfile.worker +41 -0
  8. package/helper-apps/cortex-autogen2/README.md +183 -0
  9. package/helper-apps/cortex-autogen2/__init__.py +1 -0
  10. package/helper-apps/cortex-autogen2/agents.py +131 -0
  11. package/helper-apps/cortex-autogen2/docker-compose.yml +20 -0
  12. package/helper-apps/cortex-autogen2/function_app.py +55 -0
  13. package/helper-apps/cortex-autogen2/host.json +15 -0
  14. package/helper-apps/cortex-autogen2/main.py +126 -0
  15. package/helper-apps/cortex-autogen2/poetry.lock +3652 -0
  16. package/helper-apps/cortex-autogen2/pyproject.toml +36 -0
  17. package/helper-apps/cortex-autogen2/requirements.txt +20 -0
  18. package/helper-apps/cortex-autogen2/send_task.py +105 -0
  19. package/helper-apps/cortex-autogen2/services/__init__.py +1 -0
  20. package/helper-apps/cortex-autogen2/services/azure_queue.py +85 -0
  21. package/helper-apps/cortex-autogen2/services/redis_publisher.py +153 -0
  22. package/helper-apps/cortex-autogen2/task_processor.py +488 -0
  23. package/helper-apps/cortex-autogen2/tools/__init__.py +24 -0
  24. package/helper-apps/cortex-autogen2/tools/azure_blob_tools.py +175 -0
  25. package/helper-apps/cortex-autogen2/tools/azure_foundry_agents.py +601 -0
  26. package/helper-apps/cortex-autogen2/tools/coding_tools.py +72 -0
  27. package/helper-apps/cortex-autogen2/tools/download_tools.py +48 -0
  28. package/helper-apps/cortex-autogen2/tools/file_tools.py +545 -0
  29. package/helper-apps/cortex-autogen2/tools/search_tools.py +646 -0
  30. package/helper-apps/cortex-azure-cleaner/README.md +36 -0
  31. package/helper-apps/cortex-file-converter/README.md +93 -0
  32. package/helper-apps/cortex-file-converter/key_to_pdf.py +104 -0
  33. package/helper-apps/cortex-file-converter/list_blob_extensions.py +89 -0
  34. package/helper-apps/cortex-file-converter/process_azure_keynotes.py +181 -0
  35. package/helper-apps/cortex-file-converter/requirements.txt +1 -0
  36. package/helper-apps/cortex-file-handler/.env.test.azure.ci +7 -0
  37. package/helper-apps/cortex-file-handler/.env.test.azure.sample +1 -1
  38. package/helper-apps/cortex-file-handler/.env.test.gcs.ci +10 -0
  39. package/helper-apps/cortex-file-handler/.env.test.gcs.sample +2 -2
  40. package/helper-apps/cortex-file-handler/INTERFACE.md +41 -0
  41. package/helper-apps/cortex-file-handler/package.json +1 -1
  42. package/helper-apps/cortex-file-handler/scripts/setup-azure-container.js +41 -17
  43. package/helper-apps/cortex-file-handler/scripts/setup-test-containers.js +30 -15
  44. package/helper-apps/cortex-file-handler/scripts/test-azure.sh +32 -6
  45. package/helper-apps/cortex-file-handler/scripts/test-gcs.sh +24 -2
  46. package/helper-apps/cortex-file-handler/scripts/validate-env.js +128 -0
  47. package/helper-apps/cortex-file-handler/src/blobHandler.js +161 -51
  48. package/helper-apps/cortex-file-handler/src/constants.js +3 -0
  49. package/helper-apps/cortex-file-handler/src/fileChunker.js +10 -8
  50. package/helper-apps/cortex-file-handler/src/index.js +116 -9
  51. package/helper-apps/cortex-file-handler/src/redis.js +61 -1
  52. package/helper-apps/cortex-file-handler/src/services/ConversionService.js +11 -8
  53. package/helper-apps/cortex-file-handler/src/services/FileConversionService.js +2 -2
  54. package/helper-apps/cortex-file-handler/src/services/storage/AzureStorageProvider.js +88 -6
  55. package/helper-apps/cortex-file-handler/src/services/storage/GCSStorageProvider.js +58 -0
  56. package/helper-apps/cortex-file-handler/src/services/storage/StorageFactory.js +25 -5
  57. package/helper-apps/cortex-file-handler/src/services/storage/StorageProvider.js +9 -0
  58. package/helper-apps/cortex-file-handler/src/services/storage/StorageService.js +120 -16
  59. package/helper-apps/cortex-file-handler/src/start.js +27 -17
  60. package/helper-apps/cortex-file-handler/tests/FileConversionService.test.js +52 -1
  61. package/helper-apps/cortex-file-handler/tests/blobHandler.test.js +40 -0
  62. package/helper-apps/cortex-file-handler/tests/checkHashShortLived.test.js +553 -0
  63. package/helper-apps/cortex-file-handler/tests/cleanup.test.js +46 -52
  64. package/helper-apps/cortex-file-handler/tests/containerConversionFlow.test.js +451 -0
  65. package/helper-apps/cortex-file-handler/tests/containerNameParsing.test.js +229 -0
  66. package/helper-apps/cortex-file-handler/tests/containerParameterFlow.test.js +392 -0
  67. package/helper-apps/cortex-file-handler/tests/conversionResilience.test.js +7 -2
  68. package/helper-apps/cortex-file-handler/tests/deleteOperations.test.js +348 -0
  69. package/helper-apps/cortex-file-handler/tests/fileChunker.test.js +23 -2
  70. package/helper-apps/cortex-file-handler/tests/fileUpload.test.js +11 -5
  71. package/helper-apps/cortex-file-handler/tests/getOperations.test.js +58 -24
  72. package/helper-apps/cortex-file-handler/tests/postOperations.test.js +11 -4
  73. package/helper-apps/cortex-file-handler/tests/shortLivedUrlConversion.test.js +225 -0
  74. package/helper-apps/cortex-file-handler/tests/start.test.js +8 -12
  75. package/helper-apps/cortex-file-handler/tests/storage/StorageFactory.test.js +80 -0
  76. package/helper-apps/cortex-file-handler/tests/storage/StorageService.test.js +388 -22
  77. package/helper-apps/cortex-file-handler/tests/testUtils.helper.js +74 -0
  78. package/lib/cortexResponse.js +153 -0
  79. package/lib/entityConstants.js +21 -3
  80. package/lib/logger.js +21 -4
  81. package/lib/pathwayTools.js +28 -9
  82. package/lib/util.js +49 -0
  83. package/package.json +1 -1
  84. package/pathways/basePathway.js +1 -0
  85. package/pathways/bing_afagent.js +54 -1
  86. package/pathways/call_tools.js +2 -3
  87. package/pathways/chat_jarvis.js +1 -1
  88. package/pathways/google_cse.js +27 -0
  89. package/pathways/grok_live_search.js +18 -0
  90. package/pathways/system/entity/memory/sys_memory_lookup_required.js +1 -0
  91. package/pathways/system/entity/memory/sys_memory_required.js +1 -0
  92. package/pathways/system/entity/memory/sys_search_memory.js +1 -0
  93. package/pathways/system/entity/sys_entity_agent.js +56 -4
  94. package/pathways/system/entity/sys_generator_quick.js +1 -0
  95. package/pathways/system/entity/tools/sys_tool_bing_search_afagent.js +26 -0
  96. package/pathways/system/entity/tools/sys_tool_google_search.js +141 -0
  97. package/pathways/system/entity/tools/sys_tool_grok_x_search.js +237 -0
  98. package/pathways/system/entity/tools/sys_tool_image.js +1 -1
  99. package/pathways/system/rest_streaming/sys_claude_37_sonnet.js +21 -0
  100. package/pathways/system/rest_streaming/sys_claude_41_opus.js +21 -0
  101. package/pathways/system/rest_streaming/sys_claude_4_sonnet.js +21 -0
  102. package/pathways/system/rest_streaming/sys_google_gemini_25_flash.js +25 -0
  103. package/pathways/system/rest_streaming/{sys_google_gemini_chat.js → sys_google_gemini_25_pro.js} +6 -4
  104. package/pathways/system/rest_streaming/sys_grok_4.js +23 -0
  105. package/pathways/system/rest_streaming/sys_grok_4_fast_non_reasoning.js +23 -0
  106. package/pathways/system/rest_streaming/sys_grok_4_fast_reasoning.js +23 -0
  107. package/pathways/system/rest_streaming/sys_openai_chat.js +3 -0
  108. package/pathways/system/rest_streaming/sys_openai_chat_gpt41.js +22 -0
  109. package/pathways/system/rest_streaming/sys_openai_chat_gpt41_mini.js +21 -0
  110. package/pathways/system/rest_streaming/sys_openai_chat_gpt41_nano.js +21 -0
  111. package/pathways/system/rest_streaming/{sys_claude_35_sonnet.js → sys_openai_chat_gpt4_omni.js} +6 -4
  112. package/pathways/system/rest_streaming/sys_openai_chat_gpt4_omni_mini.js +21 -0
  113. package/pathways/system/rest_streaming/{sys_claude_3_haiku.js → sys_openai_chat_gpt5.js} +7 -5
  114. package/pathways/system/rest_streaming/sys_openai_chat_gpt5_chat.js +21 -0
  115. package/pathways/system/rest_streaming/sys_openai_chat_gpt5_mini.js +21 -0
  116. package/pathways/system/rest_streaming/sys_openai_chat_gpt5_nano.js +21 -0
  117. package/pathways/system/rest_streaming/{sys_openai_chat_o1.js → sys_openai_chat_o3.js} +6 -3
  118. package/pathways/system/rest_streaming/sys_openai_chat_o3_mini.js +3 -0
  119. package/pathways/system/workspaces/run_workspace_prompt.js +99 -0
  120. package/pathways/vision.js +1 -1
  121. package/server/graphql.js +1 -1
  122. package/server/modelExecutor.js +8 -0
  123. package/server/pathwayResolver.js +166 -16
  124. package/server/pathwayResponseParser.js +16 -8
  125. package/server/plugins/azureFoundryAgentsPlugin.js +1 -1
  126. package/server/plugins/claude3VertexPlugin.js +193 -45
  127. package/server/plugins/gemini15ChatPlugin.js +21 -0
  128. package/server/plugins/gemini15VisionPlugin.js +360 -0
  129. package/server/plugins/googleCsePlugin.js +94 -0
  130. package/server/plugins/grokVisionPlugin.js +365 -0
  131. package/server/plugins/modelPlugin.js +3 -1
  132. package/server/plugins/openAiChatPlugin.js +106 -13
  133. package/server/plugins/openAiVisionPlugin.js +42 -30
  134. package/server/resolver.js +28 -4
  135. package/server/rest.js +270 -53
  136. package/server/typeDef.js +1 -0
  137. package/tests/{mocks.js → helpers/mocks.js} +5 -2
  138. package/tests/{server.js → helpers/server.js} +2 -2
  139. package/tests/helpers/sseAssert.js +23 -0
  140. package/tests/helpers/sseClient.js +73 -0
  141. package/tests/helpers/subscriptionAssert.js +11 -0
  142. package/tests/helpers/subscriptions.js +113 -0
  143. package/tests/{sublong.srt → integration/features/translate/sublong.srt} +4543 -4543
  144. package/tests/integration/features/translate/translate_chunking_stream.test.js +100 -0
  145. package/tests/{translate_srt.test.js → integration/features/translate/translate_srt.test.js} +2 -2
  146. package/tests/integration/graphql/async/stream/agentic.test.js +477 -0
  147. package/tests/integration/graphql/async/stream/subscription_streaming.test.js +62 -0
  148. package/tests/integration/graphql/async/stream/sys_entity_start_streaming.test.js +71 -0
  149. package/tests/integration/graphql/async/stream/vendors/claude_streaming.test.js +56 -0
  150. package/tests/integration/graphql/async/stream/vendors/gemini_streaming.test.js +66 -0
  151. package/tests/integration/graphql/async/stream/vendors/grok_streaming.test.js +56 -0
  152. package/tests/integration/graphql/async/stream/vendors/openai_streaming.test.js +72 -0
  153. package/tests/integration/graphql/features/google/sysToolGoogleSearch.test.js +96 -0
  154. package/tests/integration/graphql/features/grok/grok.test.js +688 -0
  155. package/tests/integration/graphql/features/grok/grok_x_search_tool.test.js +354 -0
  156. package/tests/{main.test.js → integration/graphql/features/main.test.js} +1 -1
  157. package/tests/{call_tools.test.js → integration/graphql/features/tools/call_tools.test.js} +2 -2
  158. package/tests/{vision.test.js → integration/graphql/features/vision/vision.test.js} +1 -1
  159. package/tests/integration/graphql/subscriptions/connection.test.js +26 -0
  160. package/tests/{openai_api.test.js → integration/rest/oai/openai_api.test.js} +63 -238
  161. package/tests/integration/rest/oai/tool_calling_api.test.js +343 -0
  162. package/tests/integration/rest/oai/tool_calling_streaming.test.js +85 -0
  163. package/tests/integration/rest/vendors/claude_streaming.test.js +47 -0
  164. package/tests/integration/rest/vendors/claude_tool_calling_streaming.test.js +75 -0
  165. package/tests/integration/rest/vendors/gemini_streaming.test.js +47 -0
  166. package/tests/integration/rest/vendors/gemini_tool_calling_streaming.test.js +75 -0
  167. package/tests/integration/rest/vendors/grok_streaming.test.js +55 -0
  168. package/tests/integration/rest/vendors/grok_tool_calling_streaming.test.js +75 -0
  169. package/tests/{azureAuthTokenHelper.test.js → unit/core/azureAuthTokenHelper.test.js} +1 -1
  170. package/tests/{chunkfunction.test.js → unit/core/chunkfunction.test.js} +2 -2
  171. package/tests/{config.test.js → unit/core/config.test.js} +3 -3
  172. package/tests/{encodeCache.test.js → unit/core/encodeCache.test.js} +1 -1
  173. package/tests/{fastLruCache.test.js → unit/core/fastLruCache.test.js} +1 -1
  174. package/tests/{handleBars.test.js → unit/core/handleBars.test.js} +1 -1
  175. package/tests/{memoryfunction.test.js → unit/core/memoryfunction.test.js} +2 -2
  176. package/tests/unit/core/mergeResolver.test.js +952 -0
  177. package/tests/{parser.test.js → unit/core/parser.test.js} +3 -3
  178. package/tests/unit/core/pathwayResolver.test.js +187 -0
  179. package/tests/{requestMonitor.test.js → unit/core/requestMonitor.test.js} +1 -1
  180. package/tests/{requestMonitorDurationEstimator.test.js → unit/core/requestMonitorDurationEstimator.test.js} +1 -1
  181. package/tests/{truncateMessages.test.js → unit/core/truncateMessages.test.js} +3 -3
  182. package/tests/{util.test.js → unit/core/util.test.js} +1 -1
  183. package/tests/{apptekTranslatePlugin.test.js → unit/plugins/apptekTranslatePlugin.test.js} +3 -3
  184. package/tests/{azureFoundryAgents.test.js → unit/plugins/azureFoundryAgents.test.js} +136 -1
  185. package/tests/{claude3VertexPlugin.test.js → unit/plugins/claude3VertexPlugin.test.js} +32 -10
  186. package/tests/{claude3VertexToolConversion.test.js → unit/plugins/claude3VertexToolConversion.test.js} +3 -3
  187. package/tests/unit/plugins/googleCsePlugin.test.js +111 -0
  188. package/tests/unit/plugins/grokVisionPlugin.test.js +1392 -0
  189. package/tests/{modelPlugin.test.js → unit/plugins/modelPlugin.test.js} +3 -3
  190. package/tests/{multimodal_conversion.test.js → unit/plugins/multimodal_conversion.test.js} +4 -4
  191. package/tests/{openAiChatPlugin.test.js → unit/plugins/openAiChatPlugin.test.js} +13 -4
  192. package/tests/{openAiToolPlugin.test.js → unit/plugins/openAiToolPlugin.test.js} +35 -27
  193. package/tests/{tokenHandlingTests.test.js → unit/plugins/tokenHandlingTests.test.js} +5 -5
  194. package/tests/{translate_apptek.test.js → unit/plugins/translate_apptek.test.js} +3 -3
  195. package/tests/{streaming.test.js → unit/plugins.streaming/plugin_stream_events.test.js} +19 -58
  196. package/helper-apps/mogrt-handler/tests/test-files/test.gif +0 -1
  197. package/helper-apps/mogrt-handler/tests/test-files/test.mogrt +0 -1
  198. package/helper-apps/mogrt-handler/tests/test-files/test.mp4 +0 -1
  199. package/pathways/system/rest_streaming/sys_openai_chat_gpt4.js +0 -19
  200. package/pathways/system/rest_streaming/sys_openai_chat_gpt4_32.js +0 -19
  201. package/pathways/system/rest_streaming/sys_openai_chat_gpt4_turbo.js +0 -19
  202. package/pathways/system/workspaces/run_claude35_sonnet.js +0 -21
  203. package/pathways/system/workspaces/run_claude3_haiku.js +0 -20
  204. package/pathways/system/workspaces/run_gpt35turbo.js +0 -20
  205. package/pathways/system/workspaces/run_gpt4.js +0 -20
  206. package/pathways/system/workspaces/run_gpt4_32.js +0 -20
  207. package/tests/agentic.test.js +0 -256
  208. package/tests/pathwayResolver.test.js +0 -78
  209. package/tests/subscription.test.js +0 -387
  210. /package/tests/{subchunk.srt → integration/features/translate/subchunk.srt} +0 -0
  211. /package/tests/{subhorizontal.srt → integration/features/translate/subhorizontal.srt} +0 -0
package/.github/workflows/cortex-file-handler-test.yml ADDED
@@ -0,0 +1,61 @@
+ name: Cortex File Handler Tests
+
+ on:
+ push:
+ branches: [ main, dev ]
+ paths:
+ - 'helper-apps/cortex-file-handler/**'
+ - '.github/workflows/cortex-file-handler-test.yml'
+ pull_request:
+ branches: [ main, dev ]
+ paths:
+ - 'helper-apps/cortex-file-handler/**'
+ - '.github/workflows/cortex-file-handler-test.yml'
+
+ jobs:
+ test:
+ runs-on: ubuntu-latest
+
+
+ defaults:
+ run:
+ working-directory: helper-apps/cortex-file-handler
+
+ steps:
+ - name: Checkout code
+ uses: actions/checkout@v4
+
+ - name: Setup Node.js 20.x
+ uses: actions/setup-node@v4
+ with:
+ node-version: '20.x'
+ cache: 'npm'
+ cache-dependency-path: helper-apps/cortex-file-handler/package-lock.json
+
+ - name: Install system dependencies
+ run: |
+ sudo apt-get update
+ sudo apt-get install -y ffmpeg
+
+ - name: Install dependencies
+ run: npm ci
+
+ - name: Install Azurite
+ run: npm install -g azurite
+
+ - name: Setup GCS test environment
+ run: cp .env.test.gcs.ci .env.test.gcs
+
+ - name: Setup Azure test environment
+ run: cp .env.test.azure.ci .env.test.azure
+
+ - name: Run GCS tests
+ run: npm run test:gcs
+
+ - name: Run Azure tests
+ run: npm run test:azure
+
+ - name: Run tests
+ run: npm test
+ env:
+ NODE_ENV: test
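For contributors who want to approximate this new CI job on a local machine, a rough Node sketch of the same steps is below. It is a hypothetical helper, not part of the package, and it assumes you run it from the repository root with ffmpeg and a global Azurite install (`npm install -g azurite`) already in place.

```javascript
// run-file-handler-tests.mjs: a hypothetical local mirror of the CI job above.
import { copyFileSync } from "node:fs";
import { execSync } from "node:child_process";

const dir = "helper-apps/cortex-file-handler";
const run = (cmd) => execSync(cmd, { cwd: dir, stdio: "inherit" });

// Mirror the "Setup GCS/Azure test environment" steps from the workflow.
copyFileSync(`${dir}/.env.test.gcs.ci`, `${dir}/.env.test.gcs`);
copyFileSync(`${dir}/.env.test.azure.ci`, `${dir}/.env.test.azure`);

// Same commands the workflow runs, in the same order.
run("npm ci");
run("npm run test:gcs");
run("npm run test:azure");
run("npm test");
```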
package/README.md CHANGED
@@ -1,5 +1,6 @@
  # Cortex
- Cortex simplifies and accelerates the process of creating applications that harness the power of modern AI models like GPT-4o (chatGPT), o1, o3-mini, Gemini, the Claude series, Flux, Grok and more by poviding a structured interface (GraphQL or REST) to a powerful prompt execution environment. This enables complex augmented prompting and abstracts away most of the complexity of managing model connections like chunking input, rate limiting, formatting output, caching, and handling errors.
+ Cortex simplifies and accelerates the process of creating applications that harness the power of modern AI models like GPT-5 (chatGPT), o4, Gemini, the Claude series, Flux, Grok and more by poviding a structured interface (GraphQL or REST) to a powerful prompt execution environment. This enables complex augmented prompting and abstracts away most of the complexity of managing model connections like chunking input, rate limiting, formatting output, caching, and handling errors.
+
  ## Why build Cortex?
  Modern AI models are transformational, but a number of complexities emerge when developers start using them to deliver application-ready functions. Most models require precisely formatted, carefully engineered and sequenced prompts to produce consistent results, and the responses are typically largely unstructured text without validation or formatting. Additionally, these models are evolving rapidly, are typically costly and slow to query and implement hard request size and rate restrictions that need to be carefully navigated for optimum throughput. Cortex offers a solution to these problems and provides a simple and extensible package for interacting with NL AI models.

@@ -18,6 +19,7 @@ Just about anything! It's kind of an LLM swiss army knife. Here are some ideas:
  * Simple architecture to build custom functional endpoints (called `pathways`), that implement common NL AI tasks. Default pathways include chat, summarization, translation, paraphrasing, completion, spelling and grammar correction, entity extraction, sentiment analysis, and bias analysis.
  * Extensive model support with built-in integrations for:
  - OpenAI models:
+ - GPT-5 (all flavors and router)
  - GPT-4.1 (+mini, +nano)
  - GPT-4 Omni (GPT-4o)
  - O3 and O4-mini (Advanced reasoning models)
@@ -28,10 +30,15 @@ Just about anything! It's kind of an LLM swiss army knife. Here are some ideas:
  - Gemini 2.0 Flash
  - Earlier Google models (Gemini 1.5 series)
  - Anthropic models:
- - Claude 3.7 Sonnet
- - Claude 3.5 Sonnet
- - Claude 3.5 Haiku
- - Claude 3 Series
+ - Claude 4 Sonnet (Vertex)
+ - Claude 4.1 Opus (Vertex)
+ - Claude 3.7 Sonnet
+ - Claude 3.5 Sonnet
+ - Claude 3.5 Haiku
+ - Claude 3 Series
+ - Grok (XAI) models:
+ - Grok 3 and Grok 4 series (including fast-reasoning and code-fast variants)
+ - Multimodal chat with vision, streaming, and tool calling
  - Ollama support
  - Azure OpenAI support
  - Custom model implementations
@@ -419,6 +426,11 @@ Each pathway can define the following properties (with defaults from basePathway
  - `temperature`: Model temperature setting (0.0 to 1.0). Default: 0.9
  - `json`: Require valid JSON response from model. Default: false
  - `manageTokenLength`: Manage input token length for model. Default: true
+
+ #### Dynamic model override
+
+ - `model`: In many cases, specifying the model as an input parameter will tell the pathway which model to use when setting up the pathway for execution.
+ - `modelOverride`: In some cases, you need even more dynamic model selection. At runtime, a pathway can optionally specify `modelOverride` in request args to switch the model used for execution without restarting the server. Cortex will attempt a hot swap and continue execution; errors are logged gracefully if the model is invalid.

  ## Core (Default) Pathways
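To make the new "Dynamic model override" section concrete, a minimal sketch of a pathway that pairs a static `model` with a per-request `modelOverride` is shown below. The file name, model names, and the exact default-export shape are illustrative assumptions based on the pathway properties this README describes, not code from this release.

```javascript
// pathways/summarize_flexible.js: hypothetical pathway, for illustration only.
export default {
  // Static selection: use a model name defined under `models` in the config.
  model: "oai-gpt4o",

  prompt: `Summarize the following text in three sentences:\n\n{{text}}`,

  inputParameters: {
    text: "",
  },
};

// At request time a caller could pass `modelOverride: "claude-4-sonnet-vertex"`
// in the request args to hot-swap the model for that execution; per the note
// above, Cortex logs an error and continues gracefully if the name is invalid.
```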

@@ -528,6 +540,7 @@ Models are configured in the `models` section of the config. Each model can have
  - `GEMINI-1.5-CHAT`: For Gemini 1.5 Pro chat models
  - `GEMINI-1.5-VISION`: For Gemini vision models (including 2.0 Flash experimental)
  - `CLAUDE-3-VERTEX`: For Claude-3 and 3.5 models (Haiku, Opus, Sonnet)
+ - `GROK-VISION`: For XAI Grok models (Grok-3, Grok-4, fast-reasoning, code-fast) with multimodal/vision and reasoning
  - `AZURE-TRANSLATE`: For Azure translation services

  Each model configuration can include:
@@ -606,6 +619,18 @@ To enable Ollama support, add the following to your configuration:
  }
  ```

+ #### Tool Calling and Structured Responses
+
+ When using the OpenAI-compatible REST endpoints, Cortex supports vendor-agnostic tool calling with OpenAI-style `tool_calls` deltas in streaming mode. Pathway responses now include a structured `resultData` field (also exposed via GraphQL) that may contain:
+
+ - `toolCalls` and/or `functionCall` objects
+ - vendor-specific metadata (e.g., search citations)
+ - usage details
+
+ Notes:
+ - `tool_choice` accepts either a string (e.g., `"auto"`, `"required"`) or an object (`{ type: 'function', function: 'name' }`); Cortex normalizes this across vendors (OpenAI, Claude via Vertex, Gemini, Grok).
+ - Arrays for `[String]` inputs are passed directly through REST conversion.
+
  You can then use any Ollama model through the standard OpenAI-compatible endpoints:

  ```bash
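As a rough illustration of the "Tool Calling and Structured Responses" notes above, the sketch below posts an OpenAI-style request with `tools` and a string `tool_choice` to a locally running Cortex REST endpoint. The URL, port, model name, and tool schema are assumptions; the payload follows the standard OpenAI chat-completions shape that these notes say Cortex normalizes across vendors.

```javascript
// Hypothetical client call against a locally running Cortex REST endpoint (Node 18+, ESM).
const response = await fetch("http://localhost:4000/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "xai-grok-4", // any configured chat model name
    messages: [{ role: "user", content: "What's the weather in Doha?" }],
    tools: [
      {
        type: "function",
        function: {
          name: "get_weather",
          description: "Look up current weather for a city",
          parameters: {
            type: "object",
            properties: { city: { type: "string" } },
            required: ["city"],
          },
        },
      },
    ],
    // A string ("auto"/"required") or the object form is accepted per the notes above.
    tool_choice: "auto",
  }),
});

const data = await response.json();
// Non-streaming: tool calls, if any, arrive on the first choice's message.
console.log(data.choices?.[0]?.message?.tool_calls);
```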
@@ -846,8 +871,7 @@ Cortex includes a powerful Entity System that allows you to build autonomous age
  ### Overview

  The Entity System is built around two core pathways:
- - `sys_entity_start.js`: The entry point for entity interactions, handling initial routing and tool selection
- - `sys_entity_continue.js`: Manages callback execution in synchronous mode
+ - `sys_entity_agent.js`: The entry point for entity interactions, handling initial routing and tool selection

  ### Key Features

package/config/default.example.json CHANGED
@@ -159,6 +159,21 @@
  ],
  "requestsPerSecond": 10,
  "maxTokenLength": 1024
+ },
+ "xai-grok-3": {
+ "type": "GROK-VISION",
+ "url": "https://api.x.ai/v1/chat/completions",
+ "headers": {
+ "Authorization": "Bearer {{XAI_API_KEY}}",
+ "Content-Type": "application/json"
+ },
+ "params": {
+ "model": "grok-3-latest"
+ },
+ "requestsPerSecond": 10,
+ "maxTokenLength": 131072,
+ "maxReturnTokens": 4096,
+ "supportsStreaming": true
  }
  },
  "enableCache": false,
package/config.js CHANGED
@@ -59,7 +59,7 @@ var config = convict({
  },
  defaultModelName: {
  format: String,
- default: null,
+ default: 'oai-gpt4o',
  env: 'DEFAULT_MODEL_NAME'
  },
  defaultEntityName: {
@@ -94,12 +94,12 @@
  },
  claudeVertexUrl: {
  format: String,
- default: 'https://region.googleapis.com/v1/projects/projectid/locations/location/publishers/anthropic/models/claude-3-5-sonnet@20240620',
+ default: 'https://region.googleapis.com/v1/projects/projectid/locations/location/publishers/anthropic/models/claude-4-sonnet@20250722',
  env: 'CLAUDE_VERTEX_URL'
  },
  geminiFlashUrl: {
  format: String,
- default: 'https://region.googleapis.com/v1/projects/projectid/locations/location/publishers/google/models/gemini-2.0-flash-001',
+ default: 'https://region.googleapis.com/v1/projects/projectid/locations/location/publishers/google/models/gemini-2.5-flash',
  env: 'GEMINI_FLASH_URL'
  },
  entityConfig: {
@@ -295,6 +295,21 @@
  "maxReturnTokens": 100000,
  "supportsStreaming": false
  },
+ "oai-o3": {
+ "type": "OPENAI-REASONING",
+ "url": "https://api.openai.com/v1/chat/completions",
+ "headers": {
+ "Authorization": "Bearer {{OPENAI_API_KEY}}",
+ "Content-Type": "application/json"
+ },
+ "params": {
+ "model": "o3"
+ },
+ "requestsPerSecond": 10,
+ "maxTokenLength": 200000,
+ "maxReturnTokens": 100000,
+ "supportsStreaming": true
+ },
  "oai-o3-mini": {
  "type": "OPENAI-REASONING",
  "url": "https://api.openai.com/v1/chat/completions",
@@ -308,7 +323,7 @@
  "requestsPerSecond": 10,
  "maxTokenLength": 200000,
  "maxReturnTokens": 100000,
- "supportsStreaming": false
+ "supportsStreaming": true
  },
  "azure-bing": {
  "type": "AZURE-BING",
@@ -320,6 +335,15 @@
  "requestsPerSecond": 10,
  "maxTokenLength": 200000
  },
+ "google-cse": {
+ "type": "GOOGLE-CSE",
+ "url": "https://www.googleapis.com/customsearch/v1",
+ "headers": {
+ "Content-Type": "application/json"
+ },
+ "requestsPerSecond": 10,
+ "maxTokenLength": 200000
+ },
  "runware-flux-schnell": {
  "type": "RUNWARE-AI",
  "url": "https://api.runware.ai/v1",
@@ -449,7 +473,7 @@
  "maxReturnTokens": 4096,
  "supportsStreaming": true
  },
- "claude-35-sonnet-vertex": {
+ "claude-37-sonnet-vertex": {
  "type": "CLAUDE-3-VERTEX",
  "url": "{{claudeVertexUrl}}",
  "headers": {
@@ -461,15 +485,102 @@
  "maxImageSize": 5242880,
  "supportsStreaming": true
  },
- "gemini-flash-20-vision": {
- "type": "GEMINI-1.5-VISION",
- "url": "{{geminiFlashUrl}}",
+ "claude-4-sonnet-vertex": {
+ "type": "CLAUDE-3-VERTEX",
+ "url": "{{claudeVertexUrl}}",
  "headers": {
  "Content-Type": "application/json"
  },
  "requestsPerSecond": 10,
  "maxTokenLength": 200000,
  "maxReturnTokens": 4096,
+ "maxImageSize": 5242880,
+ "supportsStreaming": true
+ },
+ "gemini-flash-25-vision": {
+ "type": "GEMINI-1.5-VISION",
+ "url": "{{geminiFlashUrl}}",
+ "headers": {
+ "Content-Type": "application/json"
+ },
+ "requestsPerSecond": 10,
+ "maxTokenLength": 1048576,
+ "maxReturnTokens": 65535,
+ "supportsStreaming": true
+ },
+ "xai-grok-3": {
+ "type": "GROK-VISION",
+ "url": "https://api.x.ai/v1/chat/completions",
+ "headers": {
+ "Authorization": "Bearer {{XAI_API_KEY}}",
+ "Content-Type": "application/json"
+ },
+ "params": {
+ "model": "grok-3-latest"
+ },
+ "requestsPerSecond": 10,
+ "maxTokenLength": 131072,
+ "maxReturnTokens": 32000,
+ "supportsStreaming": true
+ },
+ "xai-grok-4": {
+ "type": "GROK-VISION",
+ "url": "https://api.x.ai/v1/chat/completions",
+ "headers": {
+ "Authorization": "Bearer {{XAI_API_KEY}}",
+ "Content-Type": "application/json"
+ },
+ "params": {
+ "model": "grok-4-0709"
+ },
+ "requestsPerSecond": 10,
+ "maxTokenLength": 256000,
+ "maxReturnTokens": 128000,
+ "supportsStreaming": true
+ },
+ "xai-grok-code-fast-1": {
+ "type": "GROK-VISION",
+ "url": "https://api.x.ai/v1/chat/completions",
+ "headers": {
+ "Authorization": "Bearer {{XAI_API_KEY}}",
+ "Content-Type": "application/json"
+ },
+ "params": {
+ "model": "grok-code-fast-1"
+ },
+ "requestsPerSecond": 10,
+ "maxTokenLength": 2000000,
+ "maxReturnTokens": 128000,
+ "supportsStreaming": true
+ },
+ "xai-grok-4-fast-reasoning": {
+ "type": "GROK-VISION",
+ "url": "https://api.x.ai/v1/chat/completions",
+ "headers": {
+ "Authorization": "Bearer {{XAI_API_KEY}}",
+ "Content-Type": "application/json"
+ },
+ "params": {
+ "model": "grok-4-fast-reasoning"
+ },
+ "requestsPerSecond": 10,
+ "maxTokenLength": 2000000,
+ "maxReturnTokens": 128000,
+ "supportsStreaming": true
+ },
+ "xai-grok-4-fast-non-reasoning": {
+ "type": "GROK-VISION",
+ "url": "https://api.x.ai/v1/chat/completions",
+ "headers": {
+ "Authorization": "Bearer {{XAI_API_KEY}}",
+ "Content-Type": "application/json"
+ },
+ "params": {
+ "model": "grok-4-fast-non-reasoning"
+ },
+ "requestsPerSecond": 10,
+ "maxTokenLength": 256000,
+ "maxReturnTokens": 128000,
  "supportsStreaming": true
  },
  "apptek-translate": {
@@ -617,6 +728,11 @@
  format: String,
  default: null,
  env: 'AZURE_FOUNDRY_AGENT_ID'
+ },
+ azureFoundryBingSearchConnectionId: {
+ format: String,
+ default: null,
+ env: 'AZURE_FOUNDRY_BING_SEARCH_CONNECTION_ID'
  }
  });

@@ -741,10 +857,15 @@ const buildPathways = async (config) => {
  Object.assign(pathways, subPathways);
  } else if (file.name.endsWith('.js')) {
  // Load individual pathway file
- const pathwayURL = pathToFileURL(fullPath).toString();
- const pathway = await import(pathwayURL).then(module => module.default || module);
- const pathwayName = path.basename(file.name, '.js');
- pathways[pathwayName] = pathway;
+ try {
+ const pathwayURL = pathToFileURL(fullPath).toString();
+ const pathway = await import(pathwayURL).then(module => module.default || module);
+ const pathwayName = path.basename(file.name, '.js');
+ pathways[pathwayName] = pathway;
+ } catch (pathwayError) {
+ logger.error(`Error loading pathway file ${fullPath}: ${pathwayError.message}`);
+ throw pathwayError; // Re-throw to be caught by outer catch block
+ }
  }
  }
  } catch (error) {
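The loader above imports each `.js` file, takes its default export, and registers it under the file's base name, so the smallest thing it can load is a module like the following sketch. The file name and property values are hypothetical; the properties mirror those documented in the README rather than any specific pathway in this release.

```javascript
// pathways/haiku.js: hypothetical pathway; buildPathways would register it as "haiku".
export default {
  prompt: `Write a haiku about {{topic}}.`,
  inputParameters: {
    topic: "the sea",
  },
  temperature: 0.9,
};
```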
package/helper-apps/cortex-autogen2/DigiCertGlobalRootCA.crt.pem ADDED
@@ -0,0 +1,22 @@
+ -----BEGIN CERTIFICATE-----
+ MIIDrzCCApegAwIBAgIQCDvgVpBCRrGhdWrJWZHHSjANBgkqhkiG9w0BAQUFADBh
+ MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
+ d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD
+ QTAeFw0wNjExMTAwMDAwMDBaFw0zMTExMTAwMDAwMDBaMGExCzAJBgNVBAYTAlVT
+ MRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j
+ b20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IENBMIIBIjANBgkqhkiG
+ 9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4jvhEXLeqKTTo1eqUKKPC3eQyaKl7hLOllsB
+ CSDMAZOnTjC3U/dDxGkAV53ijSLdhwZAAIEJzs4bg7/fzTtxRuLWZscFs3YnFo97
+ nh6Vfe63SKMI2tavegw5BmV/Sl0fvBf4q77uKNd0f3p4mVmFaG5cIzJLv07A6Fpt
+ 43C/dxC//AH2hdmoRBBYMql1GNXRor5H4idq9Joz+EkIYIvUX7Q6hL+hqkpMfT7P
+ T19sdl6gSzeRntwi5m3OFBqOasv+zbMUZBfHWymeMr/y7vrTC0LUq7dBMtoM1O/4
+ gdW7jVg/tRvoSSiicNoxBN33shbyTApOB6jtSj1etX+jkMOvJwIDAQABo2MwYTAO
+ BgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUA95QNVbR
+ TLtm8KPiGxvDl7I90VUwHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUw
+ DQYJKoZIhvcNAQEFBQADggEBAMucN6pIExIK+t1EnE9SsPTfrgT1eXkIoyQY/Esr
+ hMAtudXH/vTBH1jLuG2cenTnmCmrEbXjcKChzUyImZOMkXDiqw8cvpOp/2PV5Adg
+ 06O/nVsJ8dWO41P0jmP6P6fbtGbfYmbW0W5BjfIttep3Sp+dWOIrWcBAI+0tKIJF
+ PnlUkiaY4IBIqDfv8NZ5YBberOgOzW6sRBc4L0na4UU+Krk2U886UAb3LujEV0ls
+ YSEY1QSteDwsOoBrp+uvFRTp2InBuThs4pFsiv9kuXclVzDAGySj4dzp30d8tbQk
+ CAUw7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4=
+ -----END CERTIFICATE-----
package/helper-apps/cortex-autogen2/Dockerfile ADDED
@@ -0,0 +1,31 @@
+ # Use the official Azure Functions Python base image
+ FROM mcr.microsoft.com/azure-functions/python:4-python3.11
+
+ # Set the working directory
+ WORKDIR /home/site/wwwroot
+
+ # Install system dependencies
+ RUN apt-get update && apt-get install -y \
+ build-essential \
+ curl \
+ git \
+ libgirepository1.0-dev \
+ pkg-config \
+ libfreetype6-dev \
+ libpng-dev \
+ fontconfig \
+ && rm -rf /var/lib/apt/lists/*
+ RUN curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg \
+ && mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg \
+ && sh -c 'echo "deb [arch=amd64,arm64,armhf] https://packages.microsoft.com/repos/microsoft-debian-bullseye-prod bullseye main" > /etc/apt/sources.list.d/azure-cli.list' \
+ && apt-get update \
+ && apt-get install -y azure-functions-core-tools
+
+ # Copy requirements and install Python dependencies
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Copy application code
+ COPY . .
+ CMD ["func", "start", "--port", "80", "--verbose"]
+
package/helper-apps/cortex-autogen2/Dockerfile.worker ADDED
@@ -0,0 +1,41 @@
+ # Use an official Python runtime as a parent image
+ FROM python:3.11-slim
+
+ # Set the working directory in the container
+ WORKDIR /app
+
+ # Install system dependencies that might be needed
+ RUN apt-get update && apt-get install -y \
+ gcc \
+ curl \
+ && rm -rf /var/lib/apt/lists/*
+
+ # Install poetry
+ RUN pip install poetry
+
+ # Configure poetry: Don't create virtual env (we're already in container)
+ RUN poetry config virtualenvs.create false
+
+ # Copy project descriptor files first (for better Docker layer caching)
+ COPY pyproject.toml poetry.lock* ./
+
+ # Create the coding directory inside the container
+ RUN mkdir -p /app/coding
+
+ # Create a minimal README.md if it's needed for build context
+ RUN touch README.md
+
+ # Copy the application's code before installing dependencies
+ COPY . /app/src/cortex_autogen2
+
+ # Install dependencies (including the current project)
+ RUN poetry install --without dev --no-ansi --no-interaction
+
+ # Install Playwright browser binaries
+ RUN playwright install
+
+ # Ensure Python can import our source package
+ ENV PYTHONPATH="/app/src:/app/src/cortex_autogen2"
+
+ # The command to run the application
+ CMD ["python", "-m", "cortex_autogen2.main"]
package/helper-apps/cortex-autogen2/README.md ADDED
@@ -0,0 +1,183 @@
+ ## Cortex AutoGen: Advanced AI Agent System 🤖
+
+ Multi-agent task automation with real code execution, Azure Storage Queue ingestion, Azure Blob uploads, and live progress via Redis.
+
+ ### Highlights
+ - **Selector-based orchestration** with `SelectorGroupChat`
+ - **Agents**: coder, code executor, cloud file uploader, presenter, terminator
+ - **Real execution** in a sandboxed working directory (`CORTEX_WORK_DIR`)
+ - **Azure native**: Queue (ingress) + Blob (files)
+ - **Live progress** published to Redis (`info`, `progress`, optional `data`)
+
+ ### Architecture
+ - Shared core in `task_processor.py` used by both the long-running worker (`main.py`) and the Azure Functions container (`function_app.py`).
+ - Queue messages are Base64-encoded JSON; task text is read from `message` or `content`.
+
+ ## Quick Start
+
+ ### Prerequisites
+ - Python 3.11+
+ - Redis instance
+ - Azure Storage account (Queue + Blob)
+ - Docker (optional, for containerized Azure Functions local run)
+
+ ### 1) Set environment variables
+ Create a `.env` in the project root:
+
+ ```dotenv
+ # Core
+ AZURE_STORAGE_CONNECTION_STRING=...
+ AZURE_QUEUE_NAME=autogen-test-message-queue # used by worker (main.py)
+ AZURE_BLOB_CONTAINER=autogentempfiles
+ REDIS_CONNECTION_STRING=redis://localhost:6379
+ REDIS_CHANNEL=requestProgress
+
+ # Models API
+ CORTEX_API_KEY=...
+ CORTEX_API_BASE_URL=http://host.docker.internal:4000/v1
+
+ # Working directory for code execution (must be writable)
+ CORTEX_WORK_DIR=/tmp/coding
+
+ # Azure Functions variant uses QUEUE_NAME (not AZURE_QUEUE_NAME)
+ QUEUE_NAME=autogen-test-message-queue
+ ```
+
+ Keep secrets out of version control. You can also configure `local.settings.json` for local Functions.
+
+ ### 2) Install dependencies
+ - Using Poetry:
+ ```bash
+ poetry install
+ ```
+ - Or with pip:
+ ```bash
+ python -m venv .venv && source .venv/bin/activate # project uses .venv
+ pip install -r requirements.txt
+ ```
+
+ ### 3) Run the worker locally
+ - Activate your virtualenv (`source .venv/bin/activate`) and ensure a clean worker state.
+ - Recommended workflow (non-continuous, exits when queue is empty):
+ ```bash
+ # Kill any previously running worker (module or script form)
+ pkill -f "python -m src.cortex_autogen2.main" || true
+ pkill -f "python main.py" || true
+ CONTINUOUS_MODE=false python -m src.cortex_autogen2.main &
+ # Alternative (direct script):
+ # CONTINUOUS_MODE=false python main.py &
+ ```
+ Tip: Use the module path variant if your repository layout exposes `src/cortex_autogen2` on `PYTHONPATH` (e.g., in a monorepo). Otherwise, run `python main.py` directly.
+
+ Then send a task:
+ ```bash
+ python send_task.py "create a simple PDF about cats"
+ ```
+
+ Notes:
+ - `CONTINUOUS_MODE=false` runs once and exits after the queue is empty.
+ - Use background run `&` to keep logs visible in the current terminal.
+
+ ### 4) Run the worker in Docker (optional)
+ Build and run the worker image using `Dockerfile.worker`:
+ ```bash
+ docker build -f Dockerfile.worker -t cortex-autogen-worker .
+ docker run --rm --env-file .env -e CORTEX_WORK_DIR=/app/coding --network host cortex-autogen-worker
+ ```
+
+ ### 5) Run the Azure Functions container locally (optional)
+ Use Docker Compose and pass your `.env` so the container gets your variables:
+ ```bash
+ docker compose --env-file .env up --build
+ ```
+ This builds `Dockerfile` (Functions) and starts on port `7071` (mapped to container `80`).
+
+ ## Usage Details
+
+ ### Sending tasks
+ `send_task.py` publishes a Base64-encoded JSON message with `content` to the queue defined by `AZURE_STORAGE_CONNECTION_STRING` and `AZURE_QUEUE_NAME` (or `QUEUE_NAME` for Functions).
+
+ ```bash
+ python send_task.py "list the files in the current directory"
+ # Override queue/connection if needed:
+ python send_task.py "create a simple PDF about cats" --queue autogen-test-message-queue --connection "<AZURE_STORAGE_CONNECTION_STRING>"
+ ```
+
+ Message format published to the queue (before Base64 encoding):
+ ```json
+ {
+ "request_id": "<uuid>",
+ "message_id": "<uuid>",
+ "content": "<task text>"
+ }
+ ```
+
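For a caller on the Node side, a hedged sketch of producing the same Base64-encoded message with the `@azure/storage-queue` SDK is shown below; the file name and task text are placeholders, and the queue name and connection string come from the environment variables described above.

```javascript
// enqueue-task.mjs: hypothetical Node-side equivalent of send_task.py.
import { randomUUID } from "node:crypto";
import { QueueServiceClient } from "@azure/storage-queue";

const queueService = QueueServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING,
);
const queue = queueService.getQueueClient(process.env.AZURE_QUEUE_NAME);

// Same shape as the JSON above, Base64-encoded as the worker expects.
const task = {
  request_id: randomUUID(),
  message_id: randomUUID(),
  content: "create a simple PDF about cats",
};

await queue.createIfNotExists();
await queue.sendMessage(Buffer.from(JSON.stringify(task)).toString("base64"));
```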
+ ### Progress updates
+ - Channel: set via `REDIS_CHANNEL` (recommend `requestProgress`)
+ - Payload fields: `requestId`, `progress` (0-1), `info` (short status), optional `data` (final Markdown)
+ - Final result publishes `progress=1.0` with `data` containing the Markdown for UI
+
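On the consuming side, a minimal sketch of listening for these progress payloads with the `redis` npm client follows; the file name is hypothetical, the channel falls back to `requestProgress` as recommended above, and the parsed fields are the ones just listed.

```javascript
// progress-listener.mjs: hypothetical consumer for the progress messages above.
import { createClient } from "redis";

const subscriber = createClient({ url: process.env.REDIS_CONNECTION_STRING });
await subscriber.connect();

await subscriber.subscribe(process.env.REDIS_CHANNEL ?? "requestProgress", (raw) => {
  const { requestId, progress, info, data } = JSON.parse(raw);
  console.log(`[${requestId}] ${Math.round(progress * 100)}% ${info ?? ""}`);
  if (progress === 1 && data) {
    console.log(data); // final Markdown result
  }
});
```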
+ ### Working directory
+ - Code execution uses `CORTEX_WORK_DIR`. Defaults: `/home/site/wwwroot/coding` in Functions container; set to `/app/coding` in worker container; recommend `/tmp/coding` locally. Always use absolute paths within this directory.
+
+ ## Project Structure
+ ```
+ cortex-autogen2/
+ ├── Dockerfile # Azure Functions container
+ ├── Dockerfile.worker # Traditional worker container
+ ├── docker-compose.yml # Local Functions container orchestrator
+ ├── main.py # Long-running worker
+ ├── function_app.py # Azure Functions entry
+ ├── task_processor.py # Shared processing logic
+ ├── host.json # Azure Functions host config
+ ├── local.settings.json # Local Functions settings (do not commit secrets)
+ ├── requirements.txt # Functions deps (pip)
+ ├── pyproject.toml, poetry.lock # Poetry project config
+ ├── send_task.py # Queue task sender
+ ├── agents.py # Agent definitions
+ ├── services/
+ │ ├── azure_queue.py
+ │ └── redis_publisher.py
+ └── tools/
+ ├── azure_blob_tools.py
+ ├── coding_tools.py
+ ├── download_tools.py
+ ├── file_tools.py
+ └── search_tools.py
+ ```
+
+ ## Environment variables reference
+ | Name | Required | Default | Used by | Description |
+ |--------------------------------|----------|---------------------------------|-------------------|-------------|
+ | `AZURE_STORAGE_CONNECTION_STRING` | Yes | — | Worker/Functions | Storage account connection string |
+ | `AZURE_QUEUE_NAME` | Yes (worker) | — | Worker | Queue name for worker (`main.py`) |
+ | `QUEUE_NAME` | Yes (Functions) | `autogen-message-queue` | Functions | Queue name for Functions (`function_app.py`) |
+ | `AZURE_BLOB_CONTAINER` | Yes | — | Uploader tool | Blob container for uploaded files |
+ | `REDIS_CONNECTION_STRING` | Yes | — | Progress | Redis connection string |
+ | `REDIS_CHANNEL` | Yes | `requestProgress` | Progress | Redis pub/sub channel for progress |
+ | `CORTEX_API_KEY` | Yes | — | Models | API key for Cortex/OpenAI-style API |
+ | `CORTEX_API_BASE_URL` | No | `http://host.docker.internal:4000/v1` | Models | API base URL |
+ | `CORTEX_WORK_DIR` | No | `/tmp/coding` or container path | Code executor | Writable work dir for code execution |
+
+ ## Notes
+ - Health endpoint referenced in `docker-compose.yml` is optional; if you add one, expose it under `/api/health` in the Functions app.
+ - Do not commit `.env` or `local.settings.json` with secrets.
+ - On macOS, Docker's `network_mode: host` is not supported; remove it from `docker-compose.yml` if needed and rely on published ports and `host.docker.internal` for host access.
+
+ ## Troubleshooting
+ - No tasks processed: verify `AZURE_QUEUE_NAME`/`QUEUE_NAME` and that messages are Base64-encoded JSON with `content` or `message`.
+ - No progress visible: ensure `REDIS_CONNECTION_STRING` and `REDIS_CHANNEL` (e.g., `requestProgress`) are set, and network access to Redis.
+ - Container cannot reach host services: use `--network host` on macOS/Linux and `host.docker.internal` URLs inside containers.
+
+ ## Contributing
+ - Open a PR with clear description and include documentation updates when applicable.
+
+ ## Examples
+ - Send a research/report task:
+ ```bash
+ python send_task.py "Summarize the latest trends in AI agent frameworks with references"
+ ```
+ - Generate and upload a file:
+ ```bash
+ python send_task.py "Create a simple PDF about cats with 3 bullet points and upload it"
+ ```