@aj-archipelago/cortex 1.0.0 → 1.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE CHANGED
@@ -1,10 +1,6 @@
  MIT License

- <<<<<<< HEAD
- Copyright (c) 2022 Al Jazeera Media Network
- =======
- Copyright (c) 2023 aj-archipelago
- >>>>>>> b78a6667176ebfd09d2c7ed1549d8dc18691c344
+ Copyright (c) 2023 Al Jazeera Media Network

  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files (the "Software"), to deal
package/README.md CHANGED
@@ -305,7 +305,7 @@ export default {

  ### Building and Loading Pathways

- Pathways are loaded from modules in the `pathways` directory. The pathways are built and loaded to the `config` object using the `buildPathways` function. The `buildPathways` function loads the base pathway, the core pathways, and any custom pathways. It then creates a new object that contains all the pathways and adds it to the pathways property of the config object. The order of loading means that custom pathways will always override any core pathways that Cortext provides. While pathways are designed to be self-contained, you can override some pathway properties - including whether they're even available at all - in the `pathways` section of the config file.
+ Pathways are loaded from modules in the `pathways` directory. The pathways are built and loaded to the `config` object using the `buildPathways` function. The `buildPathways` function loads the base pathway, the core pathways, and any custom pathways. It then creates a new object that contains all the pathways and adds it to the pathways property of the config object. The order of loading means that custom pathways will always override any core pathways that Cortex provides. While pathways are designed to be self-contained, you can override some pathway properties - including whether they're even available at all - in the `pathways` section of the config file.

  ## Core (Default) Pathways

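The pathways paragraph in the hunk above explains the loading order (base pathway, then core pathways, then custom pathways, with custom ones overriding core ones) and notes that the config's `pathways` section can override pathway properties. As a purely illustrative sketch, a custom pathway module might look like the following; the `prompt` property and the `{{text}}` placeholder are assumptions about the pathway schema, not something this diff confirms.

```js
// pathways/summary.js: a hypothetical custom pathway module.
// Because buildPathways loads custom pathways after the core ones, a module
// named like a core pathway (e.g. "summary") replaces that core pathway.
export default {
    // Assumed property name for illustration; check the Cortex docs for the
    // actual pathway schema.
    prompt: `Summarize the following text in two sentences:\n\n{{text}}`,
};
```

Per the same paragraph, the `pathways` section of the config file could then adjust or disable this pathway without editing the module itself.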
@@ -435,7 +435,7 @@ If you encounter any issues while using Cortex, there are a few things you can d
  If you would like to contribute to Cortex, there are two ways to do so. You can submit issues to the Cortex GitHub repository or submit pull requests with your proposed changes.

  ## License
- Cortex is released under the MIT License. See [LICENSE](https://github.com/ALJAZEERAPLUS/cortex/blob/main/LICENSE) for more details.
+ Cortex is released under the MIT License. See [LICENSE](https://github.com/aj-archipelago/cortex/blob/main/LICENSE) for more details.

  ## API Reference
  Detailed documentation on Cortex's API can be found in the /graphql endpoint of your project. Examples of queries and responses can also be found in the Cortex documentation, along with tips for getting the most out of Cortex.
package/SECURITY.md ADDED
@@ -0,0 +1,22 @@
+ # Security Policy
+
+ ## Supported Versions
+
+ We take the security of our project seriously. The table below shows the versions of Cortex currently being supported with security updates.
+
+ | Version | Supported          |
+ | ------- | ------------------ |
+ | 1.x.x   | :white_check_mark: |
+
+ ## Reporting a Vulnerability
+
+ If you have discovered a security vulnerability in Cortex, please follow these steps to report it:
+
+ 1. **Do not** create a public GitHub issue, as this might expose the vulnerability to others.
+ 2. Please follow the GitHub process for [Privately Reporting a Security Vulnerability](https://docs.github.com/en/code-security/security-advisories/guidance-on-reporting-and-writing/privately-reporting-a-security-vulnerability)
+
+ ## Disclosure Policy
+
+ Cortex follows responsible disclosure practices. Once a vulnerability is confirmed and a fix is developed, we will release a security update and publicly disclose the vulnerability. We will credit the reporter of the vulnerability in the disclosure, unless the reporter wishes to remain anonymous.
+
+ We appreciate your help in keeping Cortex secure and your responsible disclosure of any security vulnerabilities you discover.
@@ -151,7 +151,7 @@ class PathwayResolver {
  }
  const encoded = encode(text);
  if (!this.useInputChunking || encoded.length <= chunkTokenLength) { // no chunking, return as is
- if (encoded.length >= chunkTokenLength) {
+ if (encoded.length > 0 && encoded.length >= chunkTokenLength) {
  const warnText = `Truncating long input text. Text length: ${text.length}`;
  this.warnings.push(warnText);
  console.warn(warnText);
@@ -189,7 +189,7 @@ class PathwayResolver {
  // the token ratio is the ratio of the total prompt to the result text - both have to be included
  // in computing the max token length
  const promptRatio = this.pathwayPrompter.plugin.getPromptTokenRatio();
- let chunkMaxTokenLength = promptRatio * this.pathwayPrompter.plugin.getModelMaxTokenLength() - maxPromptTokenLength;
+ let chunkMaxTokenLength = promptRatio * this.pathwayPrompter.plugin.getModelMaxTokenLength() - maxPromptTokenLength - 1;

  // if we have to deal with prompts that have both text input
  // and previous result, we need to split the maxChunkToken in half
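As a worked example of the changed chunkMaxTokenLength line above, with made-up numbers (the real values come from the model plugin and the pathway's prompts at runtime):

```js
// Illustrative values only, not taken from the package.
const modelMaxTokenLength = 4096; // plugin.getModelMaxTokenLength()
const promptRatio = 0.75;         // plugin.getPromptTokenRatio()
const maxPromptTokenLength = 500; // tokens used by the longest prompt

// Same arithmetic as the new line; the trailing "- 1" keeps each chunk one
// token under the budget.
const chunkMaxTokenLength = promptRatio * modelMaxTokenLength - maxPromptTokenLength - 1;
console.log(chunkMaxTokenLength); // 2571 tokens available per input chunk
```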
@@ -0,0 +1,23 @@
+ // localModelPlugin.js
+ import ModelPlugin from './modelPlugin.js';
+ import { execFileSync } from 'child_process';
+
+ class LocalModelPlugin extends ModelPlugin {
+     constructor(config, pathway) {
+         super(config, pathway);
+     }
+
+     async execute(text, parameters, prompt, pathwayResolver) {
+         const { modelPromptText } = this.getCompiledPrompt(text, parameters, prompt);
+
+         try {
+             const result = execFileSync(executablePath, [text], { encoding: 'utf8' });
+             return result;
+         } catch (error) {
+             console.error('Error running local model:', error);
+             throw error;
+         }
+     }
+ }
+
+ export default LocalModelPlugin;
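Note that the new plugin above uses `executablePath` without defining or importing it, so `execute` would throw a ReferenceError as written. A hypothetical way around this, assuming the executable path can be carried on the plugin's model configuration (both the `this.model` reference and the `executablePath` key are assumed names for illustration, not confirmed by this diff):

```js
// Hypothetical sketch: a variant that reads the executable path from its
// model configuration instead of an undefined identifier.
import { execFileSync } from 'child_process';
import LocalModelPlugin from './localModelPlugin.js';

class ConfiguredLocalModelPlugin extends LocalModelPlugin {
    async execute(text, parameters, prompt, pathwayResolver) {
        const { modelPromptText } = this.getCompiledPrompt(text, parameters, prompt);
        const executablePath = this.model?.executablePath; // assumed config property
        if (!executablePath) {
            throw new Error('No executablePath configured for the local model.');
        }
        // Pass the compiled prompt to the local binary rather than the raw text.
        return execFileSync(executablePath, [modelPromptText], { encoding: 'utf8' });
    }
}

export default ConfiguredLocalModelPlugin;
```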
@@ -12,14 +12,14 @@ class OpenAIChatPlugin extends ModelPlugin {
  const { stream } = parameters;

  // Define the model's max token length
- const modelMaxTokenLength = this.getModelMaxTokenLength() * this.getPromptTokenRatio();
+ const modelTargetTokenLength = this.getModelMaxTokenLength() * this.getPromptTokenRatio();

  let requestMessages = modelPromptMessages || [{ "role": "user", "content": modelPromptText }];

  // Check if the token length exceeds the model's max token length
- if (tokenLength > modelMaxTokenLength) {
+ if (tokenLength > modelTargetTokenLength) {
  // Remove older messages until the token length is within the model's limit
- requestMessages = this.truncateMessagesToTargetLength(requestMessages, modelMaxTokenLength);
+ requestMessages = this.truncateMessagesToTargetLength(requestMessages, modelTargetTokenLength);
  }

  const requestParameters = {
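To put concrete numbers on the renamed modelTargetTokenLength above (illustrative values only, not from the package): with a 4096-token model and a 0.75 prompt token ratio, the chat history is budgeted 3072 tokens and older messages are dropped until it fits, leaving roughly 1024 tokens for the reply.

```js
// Illustrative values only, not taken from the package.
const modelMaxTokenLength = 4096;  // getModelMaxTokenLength()
const promptTokenRatio = 0.75;     // getPromptTokenRatio()
const modelTargetTokenLength = modelMaxTokenLength * promptTokenRatio; // 3072

const tokenLength = 3500; // encoded length of the chat history
// 3500 > 3072, so the plugin would call truncateMessagesToTargetLength() and
// drop the oldest messages until the history fits under 3072 tokens.
const needsTruncation = tokenLength > modelTargetTokenLength; // true
```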
@@ -13,19 +13,22 @@ class OpenAICompletionPlugin extends ModelPlugin {
  let { modelPromptMessages, modelPromptText, tokenLength } = this.getCompiledPrompt(text, parameters, prompt);
  const { stream } = parameters;
  let modelPromptMessagesML = '';
- const modelMaxTokenLength = this.getModelMaxTokenLength();
+ // Define the model's max token length
+ const modelTargetTokenLength = this.getModelMaxTokenLength() * this.getPromptTokenRatio();
  let requestParameters = {};

  if (modelPromptMessages) {
- const requestMessages = this.truncateMessagesToTargetLength(modelPromptMessages, modelMaxTokenLength - 1);
+ const minMsg = [{ role: "system", content: "" }];
+ const addAssistantTokens = encode(this.messagesToChatML(minMsg, true).replace(this.messagesToChatML(minMsg, false), '')).length;
+ const requestMessages = this.truncateMessagesToTargetLength(modelPromptMessages, (modelTargetTokenLength - addAssistantTokens));
  modelPromptMessagesML = this.messagesToChatML(requestMessages);
  tokenLength = encode(modelPromptMessagesML).length;

- if (tokenLength >= modelMaxTokenLength) {
- throw new Error(`The maximum number of tokens for this model is ${modelMaxTokenLength}. Please reduce the number of messages in the prompt.`);
+ if (tokenLength > modelTargetTokenLength) {
+ throw new Error(`Input is too long at ${tokenLength} tokens (this target token length for this pathway is ${modelTargetTokenLength} tokens because the response is expected to take up the rest of the model's max tokens (${this.getModelMaxTokenLength()}). You must reduce the size of the prompt to continue.`);
  }

- const max_tokens = modelMaxTokenLength - tokenLength - 1;
+ const max_tokens = this.getModelMaxTokenLength() - tokenLength;

  requestParameters = {
  prompt: modelPromptMessagesML,
@@ -38,11 +41,11 @@ class OpenAICompletionPlugin extends ModelPlugin {
  stream
  };
  } else {
- if (tokenLength >= modelMaxTokenLength) {
- throw new Error(`The maximum number of tokens for this model is ${modelMaxTokenLength}. Please reduce the length of the prompt.`);
+ if (tokenLength > modelTargetTokenLength) {
+ throw new Error(`Input is too long at ${tokenLength} tokens. The target token length for this pathway is ${modelTargetTokenLength} tokens because the response is expected to take up the rest of the ${this.getModelMaxTokenLength()} tokens that the model can handle. You must reduce the size of the prompt to continue.`);
  }

- const max_tokens = modelMaxTokenLength - tokenLength - 1;
+ const max_tokens = this.getModelMaxTokenLength() - tokenLength;

  requestParameters = {
  prompt: modelPromptText,
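The revised error messages and max_tokens lines above describe a single budget: the input may use up to getModelMaxTokenLength() * getPromptTokenRatio() tokens, and the completion gets whatever the input leaves unused. A worked example with made-up numbers:

```js
// Illustrative values only, not taken from the package.
const modelMaxTokenLength = 4096;                          // getModelMaxTokenLength()
const modelTargetTokenLength = modelMaxTokenLength * 0.75; // 3072-token input budget
const tokenLength = 2500;                                  // encoded prompt length

// 2500 <= 3072, so no error is thrown; the completion can then use everything
// the prompt did not: 4096 - 2500 = 1596 tokens.
const max_tokens = modelMaxTokenLength - tokenLength;
```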
@@ -1,4 +1,4 @@
- // OpenAICompletionPlugin.js
+ // openAiWhisperPlugin.js
  import ModelPlugin from './modelPlugin.js';

  import FormData from 'form-data';
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@aj-archipelago/cortex",
- "version": "1.0.0",
+ "version": "1.0.1",
  "description": "Cortex is a GraphQL API for AI. It provides a simple, extensible interface for using AI services from OpenAI, Azure and others.",
  "repository": {
  "type": "git",