@aj-archipelago/cortex 0.0.11 → 1.0.0

Files changed (2)
  1. package/README.md +192 -28
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -5,7 +5,7 @@ Modern AI models are transformational, but a number of complexities emerge when
  ## Features

  * Simple architecture to build custom functional endpoints (called `pathways`) that implement common NL AI tasks. Default pathways include chat, summarization, translation, paraphrasing, completion, spelling and grammar correction, entity extraction, sentiment analysis, and bias analysis.
- * Allows for building multi-model, multi-vendor, and model-agnostic pathways (choose the right model or combination of models for the job, implement redundancy) with built-in support for OpenAI GPT-3, GPT-3.5 (ChatGPT), and GPT-4 models - both from OpenAI directly and through Azure OpenAI, OpenAI Whisper, Azure Translator, and more.
+ * Allows for building multi-model, multi-tool, multi-vendor, and model-agnostic pathways (choose the right model or combination of models and tools for the job, implement redundancy) with built-in support for OpenAI GPT-3, GPT-3.5 (ChatGPT), and GPT-4 models - both from OpenAI directly and through Azure OpenAI, OpenAI Whisper, Azure Translator, LangChain.js, and more.
  * Easy, templatized prompt definition with flexible support for most prompt engineering techniques and strategies, ranging from simple single prompts to complex custom prompt chains with context continuity.
  * Built-in support for long-running, asynchronous operations with progress updates or streaming responses
  * Integrated context persistence: have your pathways "remember" whatever you want and use it on the next request to the model
@@ -15,7 +15,7 @@ Modern AI models are transformational, but a number of complexities emerge when
  * Caching of repeated queries to provide instant results and avoid excess requests to the underlying model in repetitive use cases (chat bots, unit tests, etc.)

  ## Installation
- In order to use Cortex, you must first have a working Node.js environment, version 14 or higher. After verifying that you have the correct version of Node.js installed, you can get the simplest form up and running with a couple of commands.
+ In order to use Cortex, you must first have a working Node.js environment, version 18 or higher (lower versions are supported with some reduction in features). After verifying that you have the correct version of Node.js installed, you can get the simplest form up and running with a couple of commands.
  ## Quick Start
  ```sh
  git clone git@github.com:aj-archipelago/cortex.git
@@ -55,18 +55,20 @@ apolloClient.query({
  ## Cortex Pathways: Supercharged Prompts
  Pathways are a core concept in Cortex. Each pathway is a single JavaScript file that encapsulates the data and logic needed to define a functional API endpoint. When the client makes a request via the API, one or more pathways are executed and the result is sent back to the client. Pathways can be very simple:
  ```js
- module.exports = {
-     prompt: `{{text}}\n\nRewrite the above using British English spelling:`
+ export default {
+     prompt: `{{text}}\n\nRewrite the above using British English spelling:`
  }
  ```
  The real power of Cortex starts to show as the pathways get more complex. This pathway, for example, uses a three-part sequential prompt to ensure that specific people and place names are correctly translated:
  ```js
- prompt:
-     [
-         `{{{text}}}\nCopy the names of all people and places exactly from this document in the language above:\n`,
-         `Original Language:\n{{{previousResult}}}\n\n{{to}}:\n`,
-         `Entities in the document:\n\n{{{previousResult}}}\n\nDocument:\n{{{text}}}\nRewrite the document in {{to}}. If the document is already in {{to}}, copy it exactly below:\n`
-     ]
+ export default {
+     prompt:
+         [
+             `{{{text}}}\nCopy the names of all people and places exactly from this document in the language above:\n`,
+             `Original Language:\n{{{previousResult}}}\n\n{{to}}:\n`,
+             `Entities in the document:\n\n{{{previousResult}}}\n\nDocument:\n{{{text}}}\nRewrite the document in {{to}}. If the document is already in {{to}}, copy it exactly below:\n`
+         ]
+ }
  ```
  Cortex pathway prompt enhancements include:
  * **Templatized prompt definition**: Pathways allow for easy and flexible prompt definition using Handlebars templating. This makes it simple to create and modify prompts using variables and context from the application as well as extensible internal functions provided by Cortex.
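The substitution mechanic behind templatized prompts can be sketched with a few lines of plain JavaScript. This is a simplified stand-in for the real Handlebars library that Cortex uses, and `fillTemplate` is a hypothetical helper, not part of Cortex:

```javascript
// Simplified sketch of Handlebars-style substitution (not the real Handlebars
// library); fillTemplate is a hypothetical helper invented for illustration.
function fillTemplate(template, vars) {
  // Matches {{name}} and {{{name}}} placeholders and swaps in values.
  return template.replace(/\{\{\{?(\w+)\}\}\}?/g, (_, name) => vars[name] ?? "");
}

const prompt = fillTemplate(
  "{{text}}\n\nRewrite the above using British English spelling:",
  { text: "The color of the theater was gray." }
);
console.log(prompt);
```

In real Handlebars, `{{name}}` HTML-escapes its value while `{{{name}}}` does not, which is why the pathway examples use triple braces for raw document text.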
@@ -109,7 +111,7 @@ If you look closely at the examples above, you'll notice embedded parameters lik
  ### Parameters
  Pathways support an arbitrary number of input parameters. These are defined in the pathway like this:
  ```js
- module.exports = {
+ export default {
      prompt:
      [
          `{{{chatContext}}}\n\n{{{text}}}\n\nGiven the information above, create a short summary of the conversation to date making sure to include all of the personal details about the user that you encounter:\n\n`,
@@ -161,7 +163,7 @@ A core function of Cortex is dealing with token limited interfaces. To this end,

  Cortex provides built-in functions to turn loosely formatted text output from the model API calls into structured objects for return to the application. Specifically, Cortex provides parsers for numbered lists of strings and numbered lists of objects. These are used in pathways like this:
  ```js
- module.exports = {
+ export default {
      temperature: 0,
      prompt: `{{text}}\n\nList the top {{count}} entities and their definitions for the above in the format {{format}}:`,
      format: `(name: definition)`,
@@ -179,44 +181,128 @@ The resolver property defines the function that processes the input and returns

  The core pathway `summary.js` below is implemented using custom pathway logic and a custom resolver to effectively target a specific summary length:
  ```js
- const { semanticTruncate } = require('../graphql/chunker');
- const { PathwayResolver } = require('../graphql/pathwayResolver');
- module.exports = {
-     prompt: `{{{text}}}\n\nWrite a summary of the above text:\n\n`,
+ // summary.js
+ // Text summarization module with custom resolver
+ // This module exports a prompt that takes an input text and generates a summary using a custom resolver.
+
+ // Import required modules
+ import { semanticTruncate } from '../graphql/chunker.js';
+ import { PathwayResolver } from '../graphql/pathwayResolver.js';
+
+ export default {
+     // The main prompt that takes the input text and asks for a summary.
+     prompt: `{{{text}}}\n\nWrite a summary of the above text. If the text is in a language other than english, make sure the summary is written in the same language:\n\n`,
+
+     // Define input parameters for the prompt, such as the target length of the summary.
      inputParameters: {
-         targetLength: 500,
+         targetLength: 0,
      },
+
+     // Custom resolver to generate summaries by re-prompting if they are too long or too short.
      resolver: async (parent, args, contextValue, info) => {
          const { config, pathway, requestState } = contextValue;
          const originalTargetLength = args.targetLength;
-         const errorMargin = 0.2;
+
+         // If targetLength is not provided, execute the prompt once and return the result.
+         if (originalTargetLength === 0) {
+             let pathwayResolver = new PathwayResolver({ config, pathway, args, requestState });
+             return await pathwayResolver.resolve(args);
+         }
+
+         const errorMargin = 0.1;
          const lowTargetLength = originalTargetLength * (1 - errorMargin);
          const targetWords = Math.round(originalTargetLength / 6.6);
-         // if the text is shorter than the summary length, just return the text
+
+         // If the text is shorter than the summary length, just return the text.
          if (args.text.length <= originalTargetLength) {
              return args.text;
          }
+
          const MAX_ITERATIONS = 5;
          let summary = '';
-         let bestSummary = '';
-         let pathwayResolver = new PathwayResolver({ config, pathway, requestState });
-         // modify the prompt to be words-based instead of characters-based
-         pathwayResolver.pathwayPrompt = `{{{text}}}\n\nWrite a summary of the above text in exactly ${targetWords} words:\n\n`
+         let pathwayResolver = new PathwayResolver({ config, pathway, args, requestState });
+
+         // Modify the prompt to be words-based instead of characters-based.
+         pathwayResolver.pathwayPrompt = `Write a summary of all of the text below. If the text is in a language other than english, make sure the summary is written in the same language. Your summary should be ${targetWords} words in length.\n\nText:\n\n{{{text}}}\n\nSummary:\n\n`
+
          let i = 0;
-         // reprompt if summary is too long or too short
-         while (((summary.length > originalTargetLength) || (summary.length < lowTargetLength)) && i < MAX_ITERATIONS) {
+         // Make sure it's long enough to start
+         while ((summary.length < lowTargetLength) && i < MAX_ITERATIONS) {
              summary = await pathwayResolver.resolve(args);
              i++;
          }
-         // if the summary is still too long, truncate it
+
+         // If it's too long, it could be because the input text was chunked
+         // and now we have all the chunks together. We can summarize that
+         // to get a comprehensive summary.
+         if (summary.length > originalTargetLength) {
+             pathwayResolver.pathwayPrompt = `Write a summary of all of the text below. If the text is in a language other than english, make sure the summary is written in the same language. Your summary should be ${targetWords} words in length.\n\nText:\n\n${summary}\n\nSummary:\n\n`
+             summary = await pathwayResolver.resolve(args);
+             i++;
+
+             // Now make sure it's not too long
+             while ((summary.length > originalTargetLength) && i < MAX_ITERATIONS) {
+                 pathwayResolver.pathwayPrompt = `${summary}\n\nIs that less than ${targetWords} words long? If not, try again using a length of no more than ${targetWords} words.\n\n`;
+                 summary = await pathwayResolver.resolve(args);
+                 i++;
+             }
+         }
+
+         // If the summary is still too long, truncate it.
          if (summary.length > originalTargetLength) {
              return semanticTruncate(summary, originalTargetLength);
          } else {
              return summary;
          }
      }
- }
+ };
  ```
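The length-targeting strategy in this resolver - convert a character budget to a word budget at roughly 6.6 characters per word, then re-prompt until the result lands under the ceiling - can be sketched in isolation. The mock resolver and its shrinking output lengths below are invented for illustration; this is not Cortex's actual code:

```javascript
// Sketch of the re-prompting loop above, with a mock resolver whose output
// shrinks on each call (mock and lengths invented; not Cortex's actual code).
const targetLength = 200;                                  // character budget
const errorMargin = 0.1;
const lowTargetLength = targetLength * (1 - errorMargin);  // 180
const targetWords = Math.round(targetLength / 6.6);        // ~30 words
const MAX_ITERATIONS = 5;

let attempt = 0;
const mockResolve = () => "x".repeat(320 - attempt++ * 70); // 320, 250, 180, ...

let summary = "";
let i = 0;
// First, make sure the summary is long enough...
while (summary.length < lowTargetLength && i < MAX_ITERATIONS) {
  summary = mockResolve();
  i++;
}
// ...then re-prompt while it overshoots the ceiling.
while (summary.length > targetLength && i < MAX_ITERATIONS) {
  summary = mockResolve();
  i++;
}
console.log(summary.length); // lands within the [180, 200] band
```

The `MAX_ITERATIONS` cap matters: a model that never converges on the requested length would otherwise loop forever, which is why the real resolver falls back to `semanticTruncate` as a last resort.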
+ ### LangChain.js Support
+ The ability to define a custom resolver function in Cortex pathways gives Cortex the flexibility to cleanly incorporate alternate pipelines and technology stacks into the execution of a pathway. LangChain.js (https://github.com/hwchase17/langchainjs) is a very popular and well-supported mechanism for wiring together models, tools, and logic to achieve some amazing results. We have developed specific functionality to support LangChain in the Cortex prompt execution framework and will continue to build features to fully integrate it with Cortex prompt execution contexts.
+
+ Below is an example pathway integrating with one of the example agents from the LangChain docs. You can see the seamless integration of Cortex's configuration and GraphQL / REST interface logic.
+ ```js
+ // lc_test.js
+ // LangChain Cortex integration test
+
+ // Import required modules
+ import { OpenAI } from "langchain/llms";
+ import { initializeAgentExecutor } from "langchain/agents";
+ import { SerpAPI, Calculator } from "langchain/tools";
+
+ export default {
+
+     // Implement custom logic and interaction with Cortex
+     // in a custom resolver.
+     resolver: async (parent, args, contextValue, info) => {
+
+         const { config } = contextValue;
+         const openAIApiKey = config.get('openaiApiKey');
+         const serpApiKey = config.get('serpApiKey');
+
+         const model = new OpenAI({ openAIApiKey: openAIApiKey, temperature: 0 });
+         const tools = [new SerpAPI(serpApiKey), new Calculator()];
+
+         const executor = await initializeAgentExecutor(
+             tools,
+             model,
+             "zero-shot-react-description"
+         );
+
+         console.log(`====================`);
+         console.log("Loaded langchain agent.");
+         const input = args.text;
+         console.log(`Executing with input "${input}"...`);
+         const result = await executor.call({ input });
+         console.log(`Got output ${result.output}`);
+         console.log(`====================`);
+
+         return result?.output;
+     },
+ };
+ ```
+
  ### Building and Loading Pathways

  Pathways are loaded from modules in the `pathways` directory. The pathways are built and loaded to the `config` object using the `buildPathways` function. The `buildPathways` function loads the base pathway, the core pathways, and any custom pathways. It then creates a new object that contains all the pathways and adds it to the `pathways` property of the config object. The order of loading means that custom pathways will always override any core pathways that Cortex provides. While pathways are designed to be self-contained, you can override some pathway properties - including whether they're even available at all - in the `pathways` section of the config file.
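The override behavior described above - custom pathways loaded after, and therefore shadowing, core pathways - amounts to an ordered object merge. A minimal sketch with invented pathway names (not Cortex's actual `buildPathways` code):

```javascript
// Minimal sketch of the load order: core pathways first, then custom ones,
// so custom definitions shadow core ones (invented names; not Cortex's code).
const corePathways = {
  summary: { prompt: "Summarize: {{text}}" },
  translate: { prompt: "Translate: {{text}}" },
};
const customPathways = {
  summary: { prompt: "Summarize in one sentence: {{text}}" }, // overrides core
};

// Later spreads win, so customPathways.summary replaces corePathways.summary
// while untouched core pathways (translate) survive.
const pathways = { ...corePathways, ...customPathways };
console.log(pathways.summary.prompt);
```

This is why dropping a file named after a core pathway into your own `pathways` directory is enough to replace its behavior.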
@@ -236,7 +322,84 @@ Below are the default pathways provided with Cortex. These can be used as is, ov
  - `translate`: Translates text from one language to another
  ## Extensibility

- Cortex is designed to be highly extensible. This allows you to customize the API to fit your needs. You can add new features, modify existing features, and even add integrations with other APIs and models.
+ Cortex is designed to be highly extensible. This allows you to customize the API to fit your needs. You can add new features, modify existing features, and even add integrations with other APIs and models. Here's an example of what an extended project might look like:
+
+ ### Cortex Internal Implementation
+
+ - **config**
+   - default.json
+ - package-lock.json
+ - package.json
+ - **pathways**
+   - chat_code.js
+   - chat_context.js
+   - chat_persist.js
+   - expand_story.js
+   - ...whole bunch of custom pathways
+   - translate_gpt4.js
+   - translate_turbo.js
+ - start.js
+
+ Where `default.json` holds all of your specific configuration:
+ ```js
+ {
+     "defaultModelName": "oai-gpturbo",
+     "models": {
+         "oai-td3": {
+             "type": "OPENAI-COMPLETION",
+             "url": "https://api.openai.com/v1/completions",
+             "headers": {
+                 "Authorization": "Bearer {{OPENAI_API_KEY}}",
+                 "Content-Type": "application/json"
+             },
+             "params": {
+                 "model": "text-davinci-003"
+             },
+             "requestsPerSecond": 10,
+             "maxTokenLength": 4096
+         },
+         "oai-gpturbo": {
+             "type": "OPENAI-CHAT",
+             "url": "https://api.openai.com/v1/chat/completions",
+             "headers": {
+                 "Authorization": "Bearer {{OPENAI_API_KEY}}",
+                 "Content-Type": "application/json"
+             },
+             "params": {
+                 "model": "gpt-3.5-turbo"
+             },
+             "requestsPerSecond": 10,
+             "maxTokenLength": 8192
+         },
+         "oai-gpt4": {
+             "type": "OPENAI-CHAT",
+             "url": "https://api.openai.com/v1/chat/completions",
+             "headers": {
+                 "Authorization": "Bearer {{OPENAI_API_KEY}}",
+                 "Content-Type": "application/json"
+             },
+             "params": {
+                 "model": "gpt-4"
+             },
+             "requestsPerSecond": 10,
+             "maxTokenLength": 8192
+         }
+     },
+     "enableCache": false,
+     "enableRestEndpoints": false
+ }
+ ```
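Note the `{{OPENAI_API_KEY}}` placeholders in the headers above: double-braced names are filled in from the environment at runtime. A minimal sketch of that substitution, where `resolvePlaceholders` is a hypothetical helper, not Cortex's actual config-resolution code:

```javascript
// Hypothetical sketch of {{VAR}}-style substitution from environment
// variables (illustrative only; not Cortex's actual implementation).
function resolvePlaceholders(value, env) {
  return value.replace(/\{\{(\w+)\}\}/g, (_, name) => env[name] ?? "");
}

const header = resolvePlaceholders("Bearer {{OPENAI_API_KEY}}", {
  OPENAI_API_KEY: "sk-example",
});
console.log(header); // "Bearer sk-example"
```

Keeping the key out of the config file and in the environment means `default.json` can be committed to source control safely.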
+
+ ...and `start.js` is really simple:
+ ```js
+ import cortex from '@aj-archipelago/cortex';
+
+ (async () => {
+     const { startServer } = await cortex();
+     startServer && startServer();
+ })();
+ ```
+
  ## Configuration
  Configuration of Cortex is done via a [convict](https://github.com/mozilla/node-convict/tree/master) object called `config`. The `config` object is built by combining the default values and any values specified in a configuration file or environment variables. The environment variables take precedence over the values in the configuration file. Below are the configurable properties and their defaults:
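The precedence rule just described - environment variables over the configuration file, which in turn overrides defaults - can be pictured as an ordered merge (illustrative only; convict handles this internally in Cortex):

```javascript
// Sketch of config precedence: defaults < config file < environment
// (illustrative; Cortex uses convict to do this for real).
const defaults = { defaultModelName: "oai-gpturbo", enableCache: false };
const fileConfig = { enableCache: true };
const envConfig = { defaultModelName: "oai-gpt4" };

// Later spreads win, so environment values beat file values beat defaults.
const config = { ...defaults, ...fileConfig, ...envConfig };
console.log(config);
```
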
@@ -280,5 +443,6 @@ Detailed documentation on Cortex's API can be found in the /graphql endpoint of
  ## Roadmap
  Cortex is a constantly evolving project, and the following features are coming soon:

+ * Prompt execution context preservation between calls (to enable interactive, multi-call integrations with LangChain and other technologies)
  * Model-specific cache key optimizations to increase hit rate and reduce cache size
  * Structured analytics and reporting on AI API call frequency, cost, cache hit rate, etc.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
      "name": "@aj-archipelago/cortex",
-     "version": "0.0.11",
+     "version": "1.0.0",
      "description": "Cortex is a GraphQL API for AI. It provides a simple, extensible interface for using AI services from OpenAI, Azure and others.",
      "repository": {
          "type": "git",