@llumiverse/core 0.8.1 → 0.8.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +12 -12
- package/package.json +8 -2
package/README.md
CHANGED
@@ -41,7 +41,7 @@ npm install @llumiverse/core @llumiverse/drivers
 npm install @llumiverse/core
 ```
 
-3. If you want to develop a new llumiverse driver for an
+3. If you want to develop a new llumiverse driver for an unsupported LLM provider, you only need to install `@llumiverse/core`:
 
 ```
 npm install @llumiverse/core
@@ -49,7 +49,7 @@ npm install @llumiverse/core
 
 ## Usage
 
-First, you need to instantiate a driver instance for the target LLM platform you want to interact too. Each
+First, you need to instantiate a driver instance for the target LLM platform you want to interact with. Each driver accepts its own set of parameters when instantiating.
 
 ### OpenAI driver
 
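The hunk above leads into the OpenAI driver snippet (`new OpenAIDriver({`, visible in the next hunk header). A minimal sketch of such an instantiation, assuming the drivers are published under `@llumiverse/drivers` (as in the install instructions) and that the OpenAI driver takes an `apiKey` option:

```javascript
// sketch only: the `apiKey` option name is an assumption, not confirmed by this diff
import { OpenAIDriver } from "@llumiverse/drivers";

const driver = new OpenAIDriver({
    apiKey: process.env.OPENAI_API_KEY
});
```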
@@ -64,7 +64,7 @@ const driver = new OpenAIDriver({
 
 ### Bedrock driver
 
-In this example we will instantiate the Bedrock driver using credentials from the Shared Credentials File (i.e. ~/.aws/credentials).
+In this example, we will instantiate the Bedrock driver using credentials from the Shared Credentials File (i.e. ~/.aws/credentials).
 Learn more on how to [setup AWS credentials in node](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/setting-credentials-node.html).
 
 ```javascript
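The linked AWS guide covers several credential sources; for the Shared Credentials File case mentioned in the hunk, AWS SDK v3 provides `fromIni`. The `BedrockDriver` option names below (`region`, `credentials`) are assumptions:

```javascript
import { BedrockDriver } from "@llumiverse/drivers";
import { fromIni } from "@aws-sdk/credential-providers";

// sketch: reads ~/.aws/credentials via the AWS SDK; the driver option names are assumed
const driver = new BedrockDriver({
    region: "us-east-1",
    credentials: fromIni({ profile: "default" })
});
```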
@@ -126,7 +126,7 @@ const driver = new HuggingFaceIEDriver({
 
 ### Listing available models
 
-Once you instantiated a driver you can list the available models. Some drivers
+Once you have instantiated a driver you can list the available models. Some drivers accept an argument for the `listModels` method to search for matching models. Some drivers, such as `replicate`, list a preselected set of models; to list other models you need to perform a search by giving a text query as an argument.
 
 In the following example, we are assuming that we have already instantiated a driver, which is available as the `driver` variable.
 
@@ -137,7 +137,7 @@ import { AIModel } from "@llumiverse/core";
 // instantiate the desired driver
 const driver = createDriverInstance();
 
-// list available models on the target LLM. (
+// list available models on the target LLM. (some drivers may require a search parameter to discover more models)
 const models: AIModel[] = await driver.listModels();
 
 console.log('# Available Models:');
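The updated comment points at the search parameter described in the `### Listing available models` hunk. Since the README only says the search is a text query given as an argument, the call shape below is an assumption, as is the presence of `id` and `name` on `AIModel`:

```javascript
// sketch: the argument shape and the AIModel fields are assumptions
const matches: AIModel[] = await driver.listModels("llama");
for (const model of matches) {
    console.log(model.id, model.name);
}
```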
@@ -148,17 +148,17 @@ for (const model of models) {
 
 ### Execute a prompt
 
-To execute a prompt we need to create prompt in the LLumiverse format and pass it to the driver `execute` method.
+To execute a prompt we need to create a prompt in the LLumiverse format and pass it to the driver `execute` method.
 
-The prompt format is very similar to the OpenAI prompt format.
+The prompt format is very similar to the OpenAI prompt format. It is an array of messages with a `content` and a `role` property. The roles can be any of `"user" | "system" | "assistant" | "safety"`.
 
-The `safety` role is similar to `system` but has a greater precedence over the other messages. Thus, it will override any `user` or `system` message
+The `safety` role is similar to `system` but takes precedence over the other messages. Thus, it will override any `user` or `system` message that contradicts the `safety` message.
 
 In order to execute a prompt we also need to specify a target model, given a model ID which is known by the target LLM. We may also specify execution options like `temperature`, `max_tokens` etc.
 
 In the following example, we are again assuming that we have already instantiated a driver, which is available as the `driver` variable.
 
-Also we are assuming the model ID we want to target is available as the `model` variable. To get a list of the existing models (and their IDs) you can list the model as we shown in the previous example
+Also, we are assuming the model ID we want to target is available as the `model` variable. To get a list of the existing models (and their IDs) you can list the models as shown in the previous example.
 
 Here is an example of model IDs depending on the driver type:
 * OpenAI: `gpt-3.5-turbo`
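The roles listed in this hunk map onto the `PromptSegment[]` array that the README builds later (see the `@@ -187` hunk header below). A sketch of such a prompt, using only the `content` and `role` properties the README names:

```javascript
import { PromptSegment } from "@llumiverse/core";

const prompt: PromptSegment[] = [
    { role: "system", content: "You are a helpful travel assistant." },
    { role: "user", content: "Suggest a three day itinerary for Rome." },
    // per the README, a safety message overrides contradicting user/system messages
    { role: "safety", content: "Never quote unverified prices." }
];
```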
@@ -187,7 +187,7 @@ const prompt: PromptSegment[] = [
 
 // execute a model and wait for the response
 console.log(`\n# Executing prompt on ${model} model: ${prompt}`);
-const response = await
+const response = await driver.execute(prompt, {
     model,
     temperature: 0.6,
     max_tokens: 1024
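Together with the next hunk's context line (`response.token_usage`), the completed call reads roughly as below; the `result` property holding the generated text is an assumption:

```javascript
const response = await driver.execute(prompt, {
    model,
    temperature: 0.6,
    max_tokens: 1024
});
// `token_usage` appears in the README itself; `result` is assumed to hold the completion
console.log('# Result:', response.result);
console.log('# Token usage:', response.token_usage);
```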
@@ -201,9 +201,9 @@ console.log('# Token usage:', response.token_usage);
 
 ### Execute a prompt in streaming mode
 
-In this example we will execute a prompt and will stream the result to display it on the console as it is returned by the target LLM platform.
+In this example, we will execute a prompt and stream the result to display it on the console as it is returned by the target LLM platform.
 
-**Note** that some models
+**Note** that some models don't support streaming. In that case, the driver will simulate streaming using a single chunk of text corresponding to the entire response.
 
 
 ```javascript
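The README's streaming snippet starts right after this hunk and is not included in the diff. A sketch of how such a stream might be consumed, assuming the driver exposes a `stream` method returning an async iterable of text chunks (the method name and return type are assumptions):

```javascript
// sketch: `driver.stream` and its async-iterable result are assumptions
const stream = await driver.stream(prompt, {
    model,
    temperature: 0.6,
    max_tokens: 1024
});
for await (const chunk of stream) {
    process.stdout.write(chunk); // print each chunk as it arrives
}
```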
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@llumiverse/core",
-  "version": "0.8.1",
+  "version": "0.8.2",
   "type": "module",
   "description": "Provide an universal API to LLMs. Support for existing LLMs can be added by writing a driver.",
   "files": [
@@ -19,7 +19,13 @@
     "model",
     "universal",
     "api",
-    "chatgpt"
+    "chatgpt",
+    "openai",
+    "vertexai",
+    "bedrock",
+    "replicate",
+    "huggingface",
+    "togetherai"
   ],
   "types": "./lib/types/index.d.ts",
   "typesVersions": {