llmasaservice-client 0.0.1 → 0.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md ADDED
@@ -0,0 +1,7 @@
1
+ # llmasaservice-client
2
+
3
+ ## 0.0.2
4
+
5
+ ### Patch Changes
6
+
7
+ - Added documentation and made the customer optional for calls
package/README.md CHANGED
@@ -1 +1,180 @@
1
- # LLMAsAService.io for React Clients
1
+ # LLMAsAService.io for Client Side Code
2
+ A product by CASEy, Inc ([heycasey.io](https://heycasey.io))
3
+
4
+ ## What is LLMAsAService.io?
5
+ Implementing AI LLM features into application is relatively easy using a vendors API. When we were building our product ([heycasey.io](https://heycasey.io)) we had a number of reliability and financial risk issues. We solved these, and decided to make our platform for managing LLM's available as a service.
6
+
7
+ The main features are -
8
+ - Streaming responses to give near immediate customer results and feedback
9
+ - Abort and Cancellation of streaming responses
10
+ - Multiple vendors with load-sharing and failover when (not if) they have outages
11
+ - Response caching for identical requests within a given timeframe
12
+ - Secure storage of API keys from one portal
13
+ - Customer token budgeting and management (give trial users a starting allowance of tokens; set it to 0 to inhibit new customers)
14
+ - Customer data tenancy - customer requests can be routed to certain LLM providers and regions based on their settings (e.g. EU customers)
15
+ - PII redaction. Clever tokenization of common PII data so no LLM vendor gets private information. Those tokens are replaced on return to the customer, so the whole redaction process is transparent.
16
+ - Call analytics - request success or errors, count, and token usage
17
+ - Call analytics - country and region for security analysis (why are we getting a heap of requests from North Korea?)
18
+ - Load shedding - if traffic increases beyond allocated vendor capacity limits, some customers can be rejected based on their allocated "Tier"
19
+
20
+ There are two parts to using this service: the control panel at ([app.llmasaservice.io](https://app.llmasaservice.io)), and this library for connecting our service to your client side application.
21
+
22
+ Using all of your LLM services with the above features becomes as simple as -
23
+
24
+ ```typescript
25
+ import { useLLM } from 'llmasaservice-client';
26
+
27
+ ...
28
+ const {send, response, idle} = useLLM({project_id: "[your LLMAsAService project id]"}); // get the project_id from the Embed page in the control panel
29
+
30
+ ...
31
+ const handleChatClick = () => {
32
+ send("What is 1+1="); // calls the LLMs for a streaming response
33
+ };
34
+
35
+ // the response is streamed back and can be shown where needed
36
+ return (
37
+ <div>
38
+ <button onClick={handleChatClick} disabled={!idle}>
39
+ Call Chat
40
+ </button>
41
+
42
+ <div>{response}</div>
43
+ </div>);
44
+ ```
45
+
46
+ ## Step 1 - Register for a LLMAsAService account
47
+ 1. Register at ([app.llmasaservice.io](https://app.llmasaservice.io)) and confirm your email address
48
+ 2. You will be prompted after first login to create a project (and accept privacy and terms)
49
+
50
+ ## Step 2 - Create your LLM Service Providers
51
+ You can create any number of LLM Vendor service endpoints. The active services will be load balanced and used for failover if any vendor has an outage or is usage limited. You will ideally have at least two active at all times for reliable failover. You can also create a Chaos Monkey service that will randomly fail calls to prove that failover is working effectively.
52
+
53
+ The settings you need are the same settings you would pass to the vendor using their API. We make those calls on your behalf from our servers after confirming those requests are allowed.
54
+
55
+ After logging into the control panel, choose LLMServices from the right panel menu
56
+
57
+ ![LLM Service Page](images/LLMServicesPage.png)
58
+
59
+ 1. Click the Add Service box to create your first or another LLM Service Endpoint
60
+
61
+ ![Add LLM Service Page](images/AddLLMService.png)
62
+
63
+ 2. Name and choose your Vendor. We will boilerplate that vendor's endpoint URL, header, and body as a starting point. These will vary based on your needs and the vendor chosen (we are documenting each vendor now; until then, email us if you have difficulty). Tip: we use each vendor's streaming features, so make sure the streaming options are enabled. For example, for OpenAI add these at the end of the rest of the body JSON:
64
+
65
+ ```javascript
66
+ {
67
+ .... ,
68
+ "stream": true,
69
+ "stream_options": {"include_usage": true}
70
+ }
71
+ ```
72
+
73
+ 3. The inputs are validated. The main requirement is that the header and body are valid JSON: parameter names and values must be quoted, with no trailing comma before the closing brace, e.g.
74
+
75
+ Good:
76
+ ```javascript
77
+ {
78
+ "Content-Type": "application/json",
79
+ "Authorization": "Bearer {{API-KEY}}"
80
+ }
81
+ ```
82
+
83
+ Bad (unquoted names, and a trailing comma before the closing brace):
84
+ ```javascript
85
+ {
86
+ Content-Type: "application/json",
87
+ Authorization: "Bearer {{API-KEY}}",
88
+ }
89
+ ```
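Either form can be sanity-checked locally before pasting it into the form; a strict JSON parser catches both problems. A quick, self-contained illustration:

```typescript
// Strict JSON: names and string values must be quoted, with no trailing commas.
const goodHeader =
  '{"Content-Type": "application/json", "Authorization": "Bearer {{API-KEY}}"}';
const badHeader = '{Content-Type: "application/json",}';

console.log(JSON.parse(goodHeader)["Content-Type"]); // "application/json"

try {
  JSON.parse(badHeader);
} catch {
  console.log("rejected: unquoted names / trailing comma"); // parser throws here
}
```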
90
+ 4. Use the {{API-KEY}} placeholder wherever the API key should go. DO NOT hardcode your API keys into the LLM Service form. We will check that the placeholder is used.
91
+ 5. Click Save and Close. The next step is mandatory before these services will work.
92
+ 6. Click on the Edit button for the new service. The Add or Update API Key and Test buttons will now be enabled. Click **Add or Update API Key**
93
+ 7. We encrypt and save your API key securely (we cannot retrieve it for you; if lost, create a new key from your vendor). Get the API key from your LLM vendor's developer control panel, paste it into the dialog box, and click Save. (One of the advantages of using LLMAsAService is that safe key management is in one place; we found this more convenient and safer than using command-line tools and config files.)
94
+
95
+ ![API Key dialog](images/APIKey.png)
96
+
97
+ 8. Make a test call. Click the **Test Call** button and confirm you get a response
98
+
99
+ ![Test Call](images/TestCall.png)
100
+
101
+ Repeat those steps for your other providers and configurations (one in North America, one in the EU, one for Azure or Anthropic, etc.)
102
+
103
+ ## Step 3 - Add the useLLM to your project code
104
+
105
+ To enable the LLM features in your application, we provide a React HOC and a hook. These client-side components connect your app to the LLMService backend.
106
+
107
+ 1. Import our client-side library (UI components are coming soon in a different package; this package has no dependencies, and we want it that way)
108
+
109
+ ```command
110
+ npm i llmasaservice-client
111
+ ```
112
+ (or the yarn equivalent)
113
+
114
+ 2. Instantiate the hook and the HOC (optional)
115
+
116
+ ```typescript
117
+
118
+ /** used without HOC **/
119
+ import { useLLM } from 'llmasaservice-client';
120
+ const {send, response, idle} = useLLM({project_id: "[your LLMAsAService project id]"}); // get the project_id from the Embed page in the control panel
121
+
122
+
123
+ /** using the HOC **/
124
+ //Index.tsx or App.tsx or similar, enclose the App inside the LLMServiceProvider
125
+ import { LLMServiceProvider } from "llmasaservice-client";
126
+
127
+ <LLMServiceProvider project_id="[your LLMAsAService project id]">
128
+ <App />
129
+ </LLMServiceProvider>
130
+
131
+ // inside your component pages
132
+ import { useLLM } from 'llmasaservice-client';
133
+
134
+ const {send, response, idle} = useLLM();
135
+
136
+ ```
137
+
138
+ 3. Pass the customer making the call. If you want to track and grant tokens to certain customers, you can pass them by a unique key (your choice, but it must be JSON-encodable; we use a UUID) and a customer-identifying name. Tip: pass the customer at the level you want to track. A company id will allow all users for that company to be controlled as a group. We also don't want any PII. So, don't use an email address: we don't need it, and it's another source of PII data leakage neither of us want.
139
+
140
+ ```typescript
141
+
142
+ /** used without HOC **/
143
+ import { useLLM } from 'llmasaservice-client';
144
+ const {send, response, idle} = useLLM(
145
+ {
146
+ project_id: "[your LLMAsAService project id]", // get this from the Embed page in the control panel
147
+ customer: {
148
+ customer_id: "[your unique customer identifier]", // don't use email please.
149
+ customer_name: "[a way of humans identifying this customer in the control panel]"
150
+ }
151
+ });
152
+
153
+
154
+ /** using the HOC **/
155
+ //Index.tsx or App.tsx or similar
156
+ import { LLMServiceProvider } from "llmasaservice-client";
157
+
158
+ <LLMServiceProvider
159
+ project_id ="[your LLMAsAService project id]", // get this from the Embed page in the control panel
160
+ customer = {
161
+ customer_id: "[your unique customer identifier]", // don't use email please.
162
+ customer_name: "[a way of humans identifying this customer in the control panel]"
163
+ }>
164
+ <App />
165
+ </LLMServiceProvider>
166
+
167
+ // inside your component pages
168
+ import { useLLM } from 'llmasaservice-client';
169
+
170
+ const {send, response, idle} = useLLM();
171
+
172
+ ```
173
+
174
+ ## Step 4 - Adding Chat features to your application
175
+
176
+ Calling **send** makes a secure call to LLMAsAService where a response is marshalled back from the providers. That response is in the **response** property.
177
+
178
+ We have pre-built UIs in the works, but for now, you can call send and display the response wherever needed. An additional property **idle** can be used to disable send buttons while a response is in progress. It is true when idle, false when busy.
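Under the hood, the streamed body is read chunk by chunk with a TextDecoder and each decoded piece is appended to the accumulated response. A self-contained sketch of that accumulation pattern (an in-memory stream stands in for the network response; function names here are illustrative, not part of the library):

```typescript
// Build an in-memory byte stream standing in for a streamed HTTP response body.
function makeStream(parts: string[]): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream<Uint8Array>({
    start(controller) {
      for (const p of parts) controller.enqueue(encoder.encode(p));
      controller.close();
    },
  });
}

// Accumulate decoded chunks into one string, the way the hook builds `response`.
async function readAll(
  reader: ReadableStreamDefaultReader<Uint8Array>
): Promise<string> {
  const decoder = new TextDecoder("utf-8");
  let result = "";
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    result += decoder.decode(value, { stream: true }); // append each chunk
  }
  return result;
}

readAll(makeStream(["Hello", ", ", "world"]).getReader()).then(console.log); // "Hello, world"
```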
179
+
180
+ We also support Abort functionality, and are in the process of documenting it now. If you need it, email help@heycasey.io and we'll sort you out.
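Until the docs land, the cancellation flow follows the standard AbortController pattern: each request gets its own controller, and aborting it stops the stream. A self-contained sketch (fakeStreamingCall is a made-up stand-in for a real streamed request, not a library function):

```typescript
// Standard AbortController pattern: check the signal between chunks and
// stop reading as soon as it has been aborted.
async function fakeStreamingCall(controller: AbortController): Promise<string> {
  let result = "";
  for (const chunk of ["one ", "two ", "three"]) {
    if (controller.signal.aborted) return result + "[aborted]"; // stop reading
    result += chunk;
    if (chunk === "two ") controller.abort(); // simulate the user clicking Stop
  }
  return result;
}

fakeStreamingCall(new AbortController()).then(console.log); // "one two [aborted]"
```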
package/dist/index.d.mts CHANGED
@@ -1,4 +1,4 @@
1
- import React from 'react';
1
+ import React, { ReactNode } from 'react';
2
2
 
3
3
  type LLMAsAServiceCustomer = {
4
4
  customer_id: string;
@@ -8,19 +8,14 @@ type LLMAsAServiceCustomer = {
8
8
  };
9
9
  interface LLMServiceType {
10
10
  project_id: string | undefined;
11
- customer: LLMAsAServiceCustomer;
11
+ customer?: LLMAsAServiceCustomer;
12
12
  }
13
13
  declare const LLMService: React.Context<LLMServiceType | undefined>;
14
+ interface UserProviderProps {
15
+ children: ReactNode;
16
+ project_id: string | undefined;
17
+ customer?: LLMAsAServiceCustomer;
18
+ }
19
+ declare const LLMServiceProvider: React.FC<UserProviderProps>;
14
20
 
15
- declare const useLLM: (options?: LLMServiceType) => {
16
- response: string;
17
- send: (prompt: string, messages?: {
18
- role: string;
19
- content: string;
20
- }[], stream?: boolean, abortController?: AbortController, service?: string | null) => Promise<ReadableStreamDefaultReader<any> | string | undefined>;
21
- stop: (controller: AbortController | null) => void;
22
- idle: boolean;
23
- error: string;
24
- };
25
-
26
- export { LLMService as LLMAsAService, useLLM };
21
+ export { LLMService, LLMServiceProvider, type LLMServiceType };
package/dist/index.d.ts CHANGED
@@ -1,4 +1,4 @@
1
- import React from 'react';
1
+ import React, { ReactNode } from 'react';
2
2
 
3
3
  type LLMAsAServiceCustomer = {
4
4
  customer_id: string;
@@ -8,19 +8,14 @@ type LLMAsAServiceCustomer = {
8
8
  };
9
9
  interface LLMServiceType {
10
10
  project_id: string | undefined;
11
- customer: LLMAsAServiceCustomer;
11
+ customer?: LLMAsAServiceCustomer;
12
12
  }
13
13
  declare const LLMService: React.Context<LLMServiceType | undefined>;
14
+ interface UserProviderProps {
15
+ children: ReactNode;
16
+ project_id: string | undefined;
17
+ customer?: LLMAsAServiceCustomer;
18
+ }
19
+ declare const LLMServiceProvider: React.FC<UserProviderProps>;
14
20
 
15
- declare const useLLM: (options?: LLMServiceType) => {
16
- response: string;
17
- send: (prompt: string, messages?: {
18
- role: string;
19
- content: string;
20
- }[], stream?: boolean, abortController?: AbortController, service?: string | null) => Promise<ReadableStreamDefaultReader<any> | string | undefined>;
21
- stop: (controller: AbortController | null) => void;
22
- idle: boolean;
23
- error: string;
24
- };
25
-
26
- export { LLMService as LLMAsAService, useLLM };
21
+ export { LLMService, LLMServiceProvider, type LLMServiceType };
package/dist/index.js CHANGED
@@ -26,32 +26,12 @@ var __toESM = (mod, isNodeMode, target) => (target = mod != null ? __create(__ge
26
26
  mod
27
27
  ));
28
28
  var __toCommonJS = (mod) => __copyProps(__defProp({}, "__esModule", { value: true }), mod);
29
- var __async = (__this, __arguments, generator) => {
30
- return new Promise((resolve, reject) => {
31
- var fulfilled = (value) => {
32
- try {
33
- step(generator.next(value));
34
- } catch (e) {
35
- reject(e);
36
- }
37
- };
38
- var rejected = (value) => {
39
- try {
40
- step(generator.throw(value));
41
- } catch (e) {
42
- reject(e);
43
- }
44
- };
45
- var step = (x) => x.done ? resolve(x.value) : Promise.resolve(x.value).then(fulfilled, rejected);
46
- step((generator = generator.apply(__this, __arguments)).next());
47
- });
48
- };
49
29
 
50
30
  // index.ts
51
31
  var useLLM_exports = {};
52
32
  __export(useLLM_exports, {
53
- LLMAsAService: () => LLMAsAService_default,
54
- useLLM: () => useLLM_default
33
+ LLMService: () => LLMService,
34
+ LLMServiceProvider: () => LLMServiceProvider
55
35
  });
56
36
  module.exports = __toCommonJS(useLLM_exports);
57
37
 
@@ -61,126 +41,15 @@ var import_react2 = require("react");
61
41
  // src/LLMAsAService.tsx
62
42
  var import_react = __toESM(require("react"));
63
43
  var LLMService = (0, import_react.createContext)(void 0);
64
- var LLMAsAService_default = LLMService;
65
-
66
- // src/useLLM.ts
67
- var useLLM = (options) => {
68
- var _a;
69
- const [response, setResponse] = (0, import_react2.useState)("");
70
- const [idle, setIdle] = (0, import_react2.useState)(true);
71
- const [error, setError] = (0, import_react2.useState)("");
72
- const context = (_a = (0, import_react2.useContext)(LLMService)) != null ? _a : options;
73
- if (!context) {
74
- throw new Error(
75
- "useLLM must be used within a LLMServiceProvider or constructed with options in your useLLM() call."
76
- );
77
- }
78
- const stop = (controller) => {
79
- if (controller) controller.abort();
80
- setIdle(true);
81
- };
82
- function send(_0) {
83
- return __async(this, arguments, function* (prompt, messages = [{ role: "system", content: "You are a useful assistant." }], stream = true, abortController = new AbortController(), service = null) {
84
- var _a2, _b, _c;
85
- setResponse("");
86
- setIdle(false);
87
- let errorInFetch = "";
88
- const url = "https://chat.llmasaservice.io/";
89
- const responseBody = JSON.stringify({
90
- projectId: (_a2 = context == null ? void 0 : context.project_id) != null ? _a2 : "",
91
- serviceId: service,
92
- prompt,
93
- messages,
94
- customer: (_b = context == null ? void 0 : context.customer) != null ? _b : {}
95
- });
96
- const options2 = {
97
- method: "POST",
98
- signal: abortController.signal,
99
- mode: "cors",
100
- headers: {
101
- "Content-Type": "text/plain"
102
- //"x-Amz-Content-Sha256": sha256.create().update(responseBody).hex(),
103
- //"x-Amz-Content-Sha256": "UNSIGNED-PAYLOAD",
104
- },
105
- body: responseBody
106
- };
107
- try {
108
- const response2 = yield fetch(url, options2);
109
- if (!response2.ok) {
110
- errorInFetch = `Error: Network error for service. (${response2.status} ${response2.statusText})`;
111
- } else {
112
- const reader = (_c = response2 == null ? void 0 : response2.body) == null ? void 0 : _c.getReader();
113
- const decoder = new TextDecoder("utf-8");
114
- setIdle(false);
115
- if (!stream) {
116
- setResponse(
117
- yield readStream(reader, decoder, stream, {
118
- signal: options2.signal
119
- })
120
- );
121
- } else {
122
- readStream(reader, decoder, stream, {
123
- signal: options2.signal
124
- });
125
- return reader;
126
- }
127
- }
128
- } catch (errorObject) {
129
- errorInFetch = `Error: Having trouble connecting to chat service. (${errorObject.message})`;
130
- }
131
- if (errorInFetch !== "") {
132
- setError(errorInFetch);
133
- console.error(`Error: Error in fetch. (${errorInFetch})`);
134
- }
135
- });
136
- }
137
- function readStream(_0, _1) {
138
- return __async(this, arguments, function* (reader, decoder, stream = true, { signal }) {
139
- let errorInRead = "";
140
- let result = "";
141
- while (true) {
142
- try {
143
- if (signal.aborted) {
144
- reader.cancel();
145
- setIdle(true);
146
- break;
147
- }
148
- const { value, done } = yield reader.read();
149
- if (decoder.decode(value).startsWith("Error:")) {
150
- errorInRead = decoder.decode(value).substring(6);
151
- break;
152
- }
153
- if (done) {
154
- setIdle(true);
155
- break;
156
- }
157
- result += decoder.decode(value);
158
- if (stream) setResponse((prevState) => result);
159
- } catch (error2) {
160
- if (error2.name === "AbortError") {
161
- break;
162
- }
163
- errorInRead = `Reading error ${error2.message}`;
164
- break;
165
- } finally {
166
- if (signal.aborted) {
167
- reader.releaseLock();
168
- }
169
- }
170
- }
171
- if (errorInRead !== "") {
172
- setError(errorInRead);
173
- reader.cancel();
174
- setIdle(true);
175
- }
176
- return result;
177
- });
178
- }
179
- return { response, send, stop, idle, error };
44
+ var LLMServiceProvider = ({
45
+ children,
46
+ project_id,
47
+ customer
48
+ }) => {
49
+ return /* @__PURE__ */ import_react.default.createElement(LLMService.Provider, { value: { project_id, customer } }, children);
180
50
  };
181
- var useLLM_default = useLLM;
182
51
  // Annotate the CommonJS export names for ESM import in node:
183
52
  0 && (module.exports = {
184
- LLMAsAService,
185
- useLLM
53
+ LLMService,
54
+ LLMServiceProvider
186
55
  });
package/dist/index.mjs CHANGED
@@ -1,149 +1,17 @@
1
- var __async = (__this, __arguments, generator) => {
2
- return new Promise((resolve, reject) => {
3
- var fulfilled = (value) => {
4
- try {
5
- step(generator.next(value));
6
- } catch (e) {
7
- reject(e);
8
- }
9
- };
10
- var rejected = (value) => {
11
- try {
12
- step(generator.throw(value));
13
- } catch (e) {
14
- reject(e);
15
- }
16
- };
17
- var step = (x) => x.done ? resolve(x.value) : Promise.resolve(x.value).then(fulfilled, rejected);
18
- step((generator = generator.apply(__this, __arguments)).next());
19
- });
20
- };
21
-
22
1
  // src/useLLM.ts
23
2
  import { useContext, useState } from "react";
24
3
 
25
4
  // src/LLMAsAService.tsx
26
5
  import React, { createContext } from "react";
27
6
  var LLMService = createContext(void 0);
28
- var LLMAsAService_default = LLMService;
29
-
30
- // src/useLLM.ts
31
- var useLLM = (options) => {
32
- var _a;
33
- const [response, setResponse] = useState("");
34
- const [idle, setIdle] = useState(true);
35
- const [error, setError] = useState("");
36
- const context = (_a = useContext(LLMService)) != null ? _a : options;
37
- if (!context) {
38
- throw new Error(
39
- "useLLM must be used within a LLMServiceProvider or constructed with options in your useLLM() call."
40
- );
41
- }
42
- const stop = (controller) => {
43
- if (controller) controller.abort();
44
- setIdle(true);
45
- };
46
- function send(_0) {
47
- return __async(this, arguments, function* (prompt, messages = [{ role: "system", content: "You are a useful assistant." }], stream = true, abortController = new AbortController(), service = null) {
48
- var _a2, _b, _c;
49
- setResponse("");
50
- setIdle(false);
51
- let errorInFetch = "";
52
- const url = "https://chat.llmasaservice.io/";
53
- const responseBody = JSON.stringify({
54
- projectId: (_a2 = context == null ? void 0 : context.project_id) != null ? _a2 : "",
55
- serviceId: service,
56
- prompt,
57
- messages,
58
- customer: (_b = context == null ? void 0 : context.customer) != null ? _b : {}
59
- });
60
- const options2 = {
61
- method: "POST",
62
- signal: abortController.signal,
63
- mode: "cors",
64
- headers: {
65
- "Content-Type": "text/plain"
66
- //"x-Amz-Content-Sha256": sha256.create().update(responseBody).hex(),
67
- //"x-Amz-Content-Sha256": "UNSIGNED-PAYLOAD",
68
- },
69
- body: responseBody
70
- };
71
- try {
72
- const response2 = yield fetch(url, options2);
73
- if (!response2.ok) {
74
- errorInFetch = `Error: Network error for service. (${response2.status} ${response2.statusText})`;
75
- } else {
76
- const reader = (_c = response2 == null ? void 0 : response2.body) == null ? void 0 : _c.getReader();
77
- const decoder = new TextDecoder("utf-8");
78
- setIdle(false);
79
- if (!stream) {
80
- setResponse(
81
- yield readStream(reader, decoder, stream, {
82
- signal: options2.signal
83
- })
84
- );
85
- } else {
86
- readStream(reader, decoder, stream, {
87
- signal: options2.signal
88
- });
89
- return reader;
90
- }
91
- }
92
- } catch (errorObject) {
93
- errorInFetch = `Error: Having trouble connecting to chat service. (${errorObject.message})`;
94
- }
95
- if (errorInFetch !== "") {
96
- setError(errorInFetch);
97
- console.error(`Error: Error in fetch. (${errorInFetch})`);
98
- }
99
- });
100
- }
101
- function readStream(_0, _1) {
102
- return __async(this, arguments, function* (reader, decoder, stream = true, { signal }) {
103
- let errorInRead = "";
104
- let result = "";
105
- while (true) {
106
- try {
107
- if (signal.aborted) {
108
- reader.cancel();
109
- setIdle(true);
110
- break;
111
- }
112
- const { value, done } = yield reader.read();
113
- if (decoder.decode(value).startsWith("Error:")) {
114
- errorInRead = decoder.decode(value).substring(6);
115
- break;
116
- }
117
- if (done) {
118
- setIdle(true);
119
- break;
120
- }
121
- result += decoder.decode(value);
122
- if (stream) setResponse((prevState) => result);
123
- } catch (error2) {
124
- if (error2.name === "AbortError") {
125
- break;
126
- }
127
- errorInRead = `Reading error ${error2.message}`;
128
- break;
129
- } finally {
130
- if (signal.aborted) {
131
- reader.releaseLock();
132
- }
133
- }
134
- }
135
- if (errorInRead !== "") {
136
- setError(errorInRead);
137
- reader.cancel();
138
- setIdle(true);
139
- }
140
- return result;
141
- });
142
- }
143
- return { response, send, stop, idle, error };
7
+ var LLMServiceProvider = ({
8
+ children,
9
+ project_id,
10
+ customer
11
+ }) => {
12
+ return /* @__PURE__ */ React.createElement(LLMService.Provider, { value: { project_id, customer } }, children);
144
13
  };
145
- var useLLM_default = useLLM;
146
14
  export {
147
- LLMAsAService_default as LLMAsAService,
148
- useLLM_default as useLLM
15
+ LLMService,
16
+ LLMServiceProvider
149
17
  };
Binary file
Binary file
Binary file
Binary file
package/index.ts CHANGED
@@ -1,2 +1,2 @@
1
- export { default as useLLM } from './src/useLLM';
2
- export { default as LLMAsAService } from './src/LLMAsAService';
1
+ export * from './src/useLLM';
2
+ export * from './src/LLMAsAService';
package/package.json CHANGED
@@ -1,7 +1,7 @@
1
1
  {
2
2
  "name": "llmasaservice-client",
3
3
  "license": "MIT",
4
- "version": "0.0.1",
4
+ "version": "0.0.2",
5
5
  "main": "dist/index.js",
6
6
  "module": "dist/index.mjs",
7
7
  "types": "dist/index.d.ts",
@@ -22,5 +22,13 @@
22
22
  "typescript": "^5.5.3"
23
23
  },
24
24
  "description": "HOC and hook to use the LLMAsAService.io LLM load balancer and firewall",
25
- "author": "CASEY, Inc. <support@heycasey.io>"
25
+ "author": "CASEY, Inc. <help@heycasey.io>",
26
+ "keywords": [
27
+ "react",
28
+ "llmasaservice",
29
+ "llm",
30
+ "openAI",
31
+ "chat"
32
+ ],
33
+ "homepage": "https://llmasaservice.io"
26
34
  }
@@ -1,6 +1,6 @@
1
1
  import React, { createContext, ReactNode } from "react";
2
2
 
3
- export type LLMAsAServiceCustomer = {
3
+ type LLMAsAServiceCustomer = {
4
4
  customer_id: string;
5
5
  customer_name?: string;
6
6
  customer_user_id?: string;
@@ -9,7 +9,7 @@ export type LLMAsAServiceCustomer = {
9
9
 
10
10
  export interface LLMServiceType {
11
11
  project_id: string | undefined;
12
- customer: LLMAsAServiceCustomer;
12
+ customer?: LLMAsAServiceCustomer;
13
13
  }
14
14
 
15
15
  export const LLMService = createContext<LLMServiceType | undefined>(undefined);
@@ -17,7 +17,7 @@ export const LLMService = createContext<LLMServiceType | undefined>(undefined);
17
17
  interface UserProviderProps {
18
18
  children: ReactNode;
19
19
  project_id: string | undefined;
20
- customer: LLMAsAServiceCustomer;
20
+ customer?: LLMAsAServiceCustomer;
21
21
  }
22
22
 
23
23
  export const LLMServiceProvider: React.FC<UserProviderProps> = ({
@@ -31,5 +31,3 @@ export const LLMServiceProvider: React.FC<UserProviderProps> = ({
31
31
  </LLMService.Provider>
32
32
  );
33
33
  };
34
-
35
- export default LLMService;
package/src/useLLM.ts CHANGED
@@ -6,7 +6,11 @@ const useLLM = (options?: LLMServiceType) => {
6
6
  const [idle, setIdle] = useState<boolean>(true);
7
7
  const [error, setError] = useState<string>("");
8
8
 
9
- const context = useContext(LLMService) ?? options;
9
+ let context = useContext(LLMService);
10
+ if (!context) {
11
+ context = options;
12
+ }
13
+
10
14
  if (!context) {
11
15
  throw new Error(
12
16
  "useLLM must be used within a LLMServiceProvider or constructed with options in your useLLM() call."
@@ -51,7 +55,7 @@ const useLLM = (options?: LLMServiceType) => {
51
55
  serviceId: service,
52
56
  prompt: prompt,
53
57
  messages: messages,
54
- customer: context?.customer ?? {},
58
+ customer: context?.customer ?? {}, // if no customer, use the projectId as the customer_id
55
59
  });
56
60
 
57
61
  // trying to get cloudfront oac going. posts need to be signed, but when i add this the call fails...