nuxt-chatgpt 0.1.9 → 0.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -14,13 +14,13 @@
 
  ## About the module
 
- This user-friendly module boasts of an easy integration process that enables seamless implementation into any [Nuxt 3](https://nuxt.com) project. With type-safe integration, you can integrate [ChatGPT](https://openai.com/) into your [Nuxt 3](https://nuxt.com) project without breaking a <b>sweat</b>. Enjoy easy access to the `send` method through the `useChatgpt()` composable. Additionally, the module guarantees <b><i>security</i></b> as requests are routed through a [Nitro Server](https://nuxt.com/docs/guide/concepts/server-engine), thus preventing the exposure of your <b>API Key</b>.
+ This user-friendly module offers an easy integration process, enabling seamless implementation into any [Nuxt 3](https://nuxt.com) project. With type-safe integration, you can integrate [ChatGPT](https://openai.com/) into your [Nuxt 3](https://nuxt.com) project without breaking a <b>sweat</b>. Enjoy easy access to the `chat` and `chatCompletion` methods through the `useChatgpt()` composable. Additionally, the module guarantees <b><i>security</i></b> as requests are routed through a [Nitro Server](https://nuxt.com/docs/guide/concepts/server-engine), thus preventing the exposure of your <b>API Key</b>. The module uses the [openai](https://github.com/openai/openai-node) library, version 4.0.0, behind the scenes.
 
  ## Features
 
  - đŸ’Ē &nbsp; Easy implementation into any [Nuxt 3](https://nuxt.com) project.
  - 👉 &nbsp; Type-safe integration of Chatgpt into your [Nuxt 3](https://nuxt.com) project.
- - đŸ•šī¸ &nbsp; Provides a `useChatgpt()` composable that grants easy access to the `send` method.
+ - đŸ•šī¸ &nbsp; Provides a `useChatgpt()` composable that grants easy access to the `chat` and `chatCompletion` methods.
  - đŸ”Ĩ &nbsp; Ensures security by routing requests through a [Nitro Server](https://nuxt.com/docs/guide/concepts/server-engine), preventing the <b>API Key</b> from being exposed.
  - 🧱 &nbsp; It is lightweight and performs well.
 
@@ -53,21 +53,45 @@ export default defineNuxtConfig({
  ```
  That's it! You can now use Nuxt Chatgpt in your Nuxt app đŸ”Ĩ
 
- ## Example & Usage
+ ## Usage & Examples
 
- To access the `send` method in the nuxt-chatgpt module, you can use the `useChatgpt()` composable, which provides easy access to the method. The send method requires two parameters:
+ To access the `chat` and `chatCompletion` methods in the nuxt-chatgpt module, you can use the `useChatgpt()` composable, which provides easy access to them. Both methods take three parameters:
 
- - `message`: a string representing the text message that you want to send to the GPT-3 model for processing.
- - `options`: an optional object that specifies any additional options you want to pass to the API request, such as the GPT-3 model ID, the number of responses to generate, and the maximum length of each response.
+
+ | Name | Type | Default | Description |
+ |--|--|--|--|
+ |**message**|`String`||A string representing the text message that you want to send to the GPT model for processing.|
+ |**model**|`String`|`text-davinci-003` for `chat()` and `gpt-3.5-turbo` for `chatCompletion()`|The model to use for the request; different models suit different natural language processing tasks.|
+ |**options**|`Object`|`{ temperature: 0.5, max_tokens: 2048, top_p: 1, frequency_penalty: 0, presence_penalty: 0 }`|An optional object that specifies any additional options you want to pass to the API request, such as the number of responses to generate and the maximum length of each response.|
+
+ Available models for `chat`:
+
+ - text-davinci-003
+ - text-davinci-002
+
+ Available models for `chatCompletion`:
+
+ - gpt-3.5-turbo
+ - gpt-3.5-turbo-0301
+
+ You need to join the waitlist to use the GPT-4 models with the `chatCompletion` method:
+
+ - gpt-4
+ - gpt-4-0314
+ - gpt-4-32k
+ - gpt-4-32k-0314
+
+ ### Simple `chat` usage
+ In the following example, the model is unspecified, so the text-davinci-003 model is used by default.
 
  ```js
- const { send } = useChatgpt()
+ const { chat } = useChatgpt()
 
  const data = ref('')
  const message = ref('')
 
  async function sendMessage() {
-   const response = await send(message.value)
+   const response = await chat(message.value)
    data.value = response
  }
 
@@ -86,6 +110,98 @@ async function sendMessage() {
  </template>
  ```
 
+ ### Usage of `chat` with a different model
+
+ ```js
+ const { chat } = useChatgpt()
+
+ const data = ref('')
+ const message = ref('')
+
+ async function sendMessage() {
+   const response = await chat(message.value, 'text-davinci-002')
+   data.value = response
+ }
+
+ ```
+
+ ```html
+ <template>
+   <div>
+     <input v-model="message">
+     <button
+       @click="sendMessage"
+       v-text="'Send'"
+     />
+     <div>{{ data }}</div>
+   </div>
+ </template>
+ ```
+
+ ### Simple `chatCompletion` usage
+ In the following example, the model is unspecified, so the gpt-3.5-turbo model is used by default.
+
+ ```js
+ const { chatCompletion } = useChatgpt()
+
+ const data = ref('')
+ const message = ref('')
+
+ async function sendMessage() {
+   const response = await chatCompletion(message.value)
+   data.value = response
+ }
+
+ ```
+
+ ```html
+ <template>
+   <div>
+     <input v-model="message">
+     <button
+       @click="sendMessage"
+       v-text="'Send'"
+     />
+     <div>{{ data }}</div>
+   </div>
+ </template>
+ ```
+
+ ### Usage of `chatCompletion` with a different model
+
+ ```js
+ const { chatCompletion } = useChatgpt()
+
+ const data = ref('')
+ const message = ref('')
+
+ async function sendMessage() {
+   const response = await chatCompletion(message.value, 'gpt-3.5-turbo-0301')
+   data.value = response
+ }
+
+ ```
+
+ ```html
+ <template>
+   <div>
+     <input v-model="message">
+     <button
+       @click="sendMessage"
+       v-text="'Send'"
+     />
+     <div>{{ data }}</div>
+   </div>
+ </template>
+ ```
+
+ ## chat vs chatCompletion
+
+ The `chat` method allows the user to send a prompt to the OpenAI API and receive a response. You can use this method to build conversational interfaces that can interact with users in a natural way. For example, you could use the `chat` method to build a chatbot that answers customer service questions or provides information about a product or service.
+
+ The `chatCompletion` method is similar to the `chat` method, but it provides additional functionality for generating longer, more complex responses. Specifically, the `chatCompletion` method allows you to provide a conversation history as input, which the API can use to generate a response that is consistent with the context of the conversation. This makes it possible to build chatbots that can engage in longer, more natural conversations with users.
+
+
  ## Module Options
 
  | Name | Type | Default | Description |
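
In practice the two methods differ mainly in which OpenAI endpoint family they hit: `chat` posts the prompt to the completions-style `/api/chat` route, while `chatCompletion` wraps the message into the chat-messages format for `/api/chat-completion`. A minimal side-by-side sketch (assuming the module is installed and `useChatgpt()` is auto-imported):

```js
const { chat, chatCompletion } = useChatgpt()

// Completions-style request; defaults to text-davinci-003 on the server.
const haiku = await chat('Write a haiku about Nuxt.')

// Chat-completions request; defaults to gpt-3.5-turbo on the server.
const answer = await chatCompletion('What is Nitro?', 'gpt-3.5-turbo')
```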
package/dist/module.json CHANGED
@@ -4,5 +4,5 @@
    "compatibility": {
      "nuxt": "^3.0.0"
    },
-   "version": "0.1.9"
+   "version": "0.2.1"
  }
package/dist/module.mjs CHANGED
@@ -31,11 +31,20 @@ const module = defineNuxtModule({
      apiKey: options.apiKey
    });
    addImportsDir(resolve("./runtime/composables"));
-   addServerHandler({
-     route: "/api/openai",
-     method: "post",
-     handler: resolve(runtimeDir, "server/api/openai")
-   });
+   addServerHandler(
+     {
+       route: "/api/chat",
+       method: "post",
+       handler: resolve(runtimeDir, "server/api/chat")
+     }
+   );
+   addServerHandler(
+     {
+       route: "/api/chat-completion",
+       method: "post",
+       handler: resolve(runtimeDir, "server/api/chat-completion")
+     }
+   );
    nuxt.options.build.transpile.push(runtimeDir);
  }
 });
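
With this change the module registers two Nitro routes instead of one. They are meant to be consumed through the `useChatgpt()` composable shown next, but a direct `$fetch` illustrates the payload shape the handlers expect; a minimal sketch, where every field besides `message` is optional and falls back to the server-side defaults:

```js
// POST straight to the module's Nitro route from any client-side code.
const reply = await $fetch('/api/chat-completion', {
  method: 'POST',
  body: {
    message: 'Hello from the client!',
    model: 'gpt-3.5-turbo',
    options: { temperature: 0.5, max_tokens: 2048, top_p: 1, frequency_penalty: 0, presence_penalty: 0 }
  }
})
```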
@@ -1,10 +1,14 @@
  import { createError } from "h3";
  export const useChatgpt = () => {
-   const send = async (message) => {
+   const chat = async (message, model, options) => {
      try {
-       return await $fetch("/api/openai", {
+       return await $fetch("/api/chat", {
          method: "POST",
-         body: message
+         body: {
+           message,
+           model,
+           options
+         }
        });
      } catch (error) {
        throw createError({
@@ -13,5 +17,22 @@ export const useChatgpt = () => {
        });
      }
    };
-   return { send };
+   const chatCompletion = async (message, model, options) => {
+     try {
+       return await $fetch("/api/chat-completion", {
+         method: "POST",
+         body: {
+           message,
+           model,
+           options
+         }
+       });
+     } catch (error) {
+       throw createError({
+         statusCode: 500,
+         message: "Failed to forward request to server"
+       });
+     }
+   };
+   return { chat, chatCompletion };
  };
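
Both wrappers swallow the underlying failure and rethrow a generic h3 error, so callers only ever see a 500 with a fixed message; a sketch of defensive usage:

```js
const { chat } = useChatgpt()

try {
  const reply = await chat('Hello!')
  console.log(reply)
} catch (err) {
  // The composable rethrows everything as createError({ statusCode: 500, ... }),
  // so the original OpenAI error detail is not available here.
  console.error(err.statusCode, err.message)
}
```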
@@ -0,0 +1,8 @@
+ export declare const MODEL_TEXT_DAVINCI_003: string;
+ export declare const MODEL_TEXT_DAVINCI_002: string;
+ export declare const MODEL_GPT_TURBO_3_5: string;
+ export declare const MODEL_GPT_TURBO_3_5_0301: string;
+ export declare const MODEL_GPT_4: string;
+ export declare const MODEL_GPT_4_0314: string;
+ export declare const MODEL_GPT_4_32k: string;
+ export declare const MODEL_GPT_4_32k_0314: string;
@@ -0,0 +1,8 @@
+ export const MODEL_TEXT_DAVINCI_003 = "text-davinci-003";
+ export const MODEL_TEXT_DAVINCI_002 = "text-davinci-002";
+ export const MODEL_GPT_TURBO_3_5 = "gpt-3.5-turbo";
+ export const MODEL_GPT_TURBO_3_5_0301 = "gpt-3.5-turbo-0301";
+ export const MODEL_GPT_4 = "gpt-4";
+ export const MODEL_GPT_4_0314 = "gpt-4-0314";
+ export const MODEL_GPT_4_32k = "gpt-4-32k";
+ export const MODEL_GPT_4_32k_0314 = "gpt-4-32k-0314";
@@ -0,0 +1,7 @@
+ export declare const defaultOptions: {
+   temperature: number;
+   max_tokens: number;
+   top_p: number;
+   frequency_penalty: number;
+   presence_penalty: number;
+ };
@@ -0,0 +1,7 @@
+ export const defaultOptions = {
+   temperature: 0.5,
+   max_tokens: 2048,
+   top_p: 1,
+   frequency_penalty: 0,
+   presence_penalty: 0
+ };
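
These defaults only apply when a request carries no `options` at all: the server handlers spread `...options || defaultOptions`, so a partial object replaces the defaults wholesale rather than merging field by field. A sketch using a hypothetical `withDefaults` helper (not part of the module) to override a single field safely:

```js
// Hypothetical helper mirroring the module's defaultOptions above.
const withDefaults = (overrides) => ({
  temperature: 0.5,
  max_tokens: 2048,
  top_p: 1,
  frequency_penalty: 0,
  presence_penalty: 0,
  ...overrides
})

const { chatCompletion } = useChatgpt()
const reply = await chatCompletion('Hi!', 'gpt-3.5-turbo', withDefaults({ temperature: 0 }))
```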
@@ -0,0 +1,2 @@
+ declare const _default: import("h3").EventHandler<string>;
+ export default _default;
@@ -0,0 +1,32 @@
+ import OpenAI from "openai";
+ import { createError, defineEventHandler, readBody } from "h3";
+ import { defaultOptions } from "../../constants/options.mjs";
+ import { MODEL_GPT_TURBO_3_5 } from "../../constants/models.mjs";
+ import { modelMap } from "../../utils/model-map.mjs";
+ import { useRuntimeConfig } from "#imports";
+ export default defineEventHandler(async (event) => {
+   const { message, model, options } = await readBody(event);
+   if (!useRuntimeConfig().chatgpt.apiKey) {
+     throw createError({
+       statusCode: 403,
+       message: "Missing OpenAI API Key"
+     });
+   }
+   const openai = new OpenAI({
+     apiKey: useRuntimeConfig().chatgpt.apiKey
+   });
+   const requestOptions = {
+     messages: [{ role: "user", content: message }],
+     model: !model ? modelMap[MODEL_GPT_TURBO_3_5] : modelMap[model],
+     ...options || defaultOptions
+   };
+   try {
+     const chatCompletion = await openai.chat.completions.create(requestOptions);
+     return chatCompletion.choices[0].message?.content;
+   } catch (error) {
+     throw createError({
+       statusCode: 500,
+       message: "Failed to forward request to OpenAI API"
+     });
+   }
+ });
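
Note the return value: the handler unwraps `choices[0].message?.content`, so clients receive the assistant's reply as a bare string rather than the full OpenAI response object. A sketch of what a caller sees:

```js
// What a caller of /api/chat-completion actually receives: just the text.
const reply = await $fetch('/api/chat-completion', {
  method: 'POST',
  body: { message: 'Say hi' }
})
console.log(reply) // e.g. 'Hi! How can I help?' — no ids, usage stats, or finish_reason
```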
@@ -0,0 +1,2 @@
+ declare const _default: import("h3").EventHandler<string>;
+ export default _default;
@@ -0,0 +1,31 @@
+ import OpenAI from "openai";
+ import { createError, defineEventHandler, readBody } from "h3";
+ import { defaultOptions } from "../../constants/options.mjs";
+ import { modelMap } from "../../utils/model-map.mjs";
+ import { useRuntimeConfig } from "#imports";
+ export default defineEventHandler(async (event) => {
+   const { message, model, options } = await readBody(event);
+   if (!useRuntimeConfig().chatgpt.apiKey) {
+     throw createError({
+       statusCode: 403,
+       message: "Missing OpenAI API Key"
+     });
+   }
+   const openai = new OpenAI({
+     apiKey: useRuntimeConfig().chatgpt.apiKey
+   });
+   const requestOptions = {
+     prompt: message,
+     model: !model ? modelMap.default : modelMap[model],
+     ...options || defaultOptions
+   };
+   try {
+     const completion = await openai.completions.create(requestOptions);
+     return completion.choices[0].text?.slice(2);
+   } catch (error) {
+     throw createError({
+       statusCode: 500,
+       message: "Failed to forward request to OpenAI API"
+     });
+   }
+ });
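
The trailing `.slice(2)` is presumably there because `text-davinci` completions typically open with a leading `\n\n`; the same trick appears in the removed v3 handler at the bottom of this diff. A quick illustration:

```js
// Trims the customary leading blank lines from a davinci completion:
'\n\nHello there'.slice(2) // 'Hello there'
// Caveat: if a completion ever starts differently, two characters are still dropped.
```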
@@ -1,8 +1,20 @@
  export interface IChatgptClient {
-   send ( IMessage ) : Promise
+   chat(IMessage): Promise,
+   chatCompletion(IMessage): Promise
  }
 
  export interface IMessage {
    message: string
  }
 
+ export interface IModel {
+   model: string
+ }
+
+ export interface IOptions {
+   temperature: number,
+   max_tokens: number,
+   top_p: number,
+   frequency_penalty: number,
+   presence_penalty: number,
+ }
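
As declared, `chat(IMessage): Promise` uses `IMessage` as a parameter name (so it is implicitly `any`) and a bare `Promise`, which TypeScript rejects without a type argument; the runtime functions also take three arguments. A hypothetical, more faithful typing (not what the package ships):

```ts
export interface IChatgptClient {
  chat(message: string, model?: string, options?: IOptions): Promise<string | undefined>
  chatCompletion(message: string, model?: string, options?: IOptions): Promise<string | undefined>
}
```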
@@ -0,0 +1,4 @@
+ export declare const modelMap: {
+   [x: string]: string;
+   default: string;
+ };
@@ -0,0 +1,21 @@
+ import {
+   MODEL_TEXT_DAVINCI_003,
+   MODEL_TEXT_DAVINCI_002,
+   MODEL_GPT_TURBO_3_5,
+   MODEL_GPT_TURBO_3_5_0301,
+   MODEL_GPT_4,
+   MODEL_GPT_4_0314,
+   MODEL_GPT_4_32k,
+   MODEL_GPT_4_32k_0314
+ } from "../constants/models.mjs";
+ export const modelMap = {
+   [MODEL_TEXT_DAVINCI_003]: MODEL_TEXT_DAVINCI_003,
+   [MODEL_TEXT_DAVINCI_002]: MODEL_TEXT_DAVINCI_002,
+   [MODEL_GPT_TURBO_3_5]: MODEL_GPT_TURBO_3_5,
+   [MODEL_GPT_TURBO_3_5_0301]: MODEL_GPT_TURBO_3_5_0301,
+   [MODEL_GPT_4]: MODEL_GPT_4,
+   [MODEL_GPT_4_0314]: MODEL_GPT_4_0314,
+   [MODEL_GPT_4_32k]: MODEL_GPT_4_32k,
+   [MODEL_GPT_4_32k_0314]: MODEL_GPT_4_32k_0314,
+   default: MODEL_TEXT_DAVINCI_003
+ };
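
The map is effectively an identity whitelist: known model names map to themselves, anything else yields `undefined` (which the handlers then forward to OpenAI, where the request fails), and only the legacy completions handler uses the `default` fallback. A quick sketch of the lookup behavior:

```js
modelMap['gpt-4']           // 'gpt-4'
modelMap['my-custom-model'] // undefined — forwarded to OpenAI, which rejects it
modelMap.default            // 'text-davinci-003'
```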
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "nuxt-chatgpt",
-   "version": "0.1.9",
+   "version": "0.2.1",
    "description": "ChatGPT integration for Nuxt 3",
    "license": "MIT",
    "type": "module",
@@ -54,8 +54,8 @@
    },
    "dependencies": {
      "@nuxt/kit": "^3.1.1",
-     "openai": "^3.2.1",
-     "defu": "^6.1.2"
+     "defu": "^6.1.2",
+     "openai": "^4.0.0"
    },
    "devDependencies": {
      "@nuxt/eslint-config": "^0.1.1",
@@ -1,2 +0,0 @@
- declare const _default: import("h3").EventHandler<string | undefined>;
- export default _default;
@@ -1,27 +0,0 @@
- import { createError, defineEventHandler, readBody } from "h3";
- import { Configuration, OpenAIApi } from "openai";
- import { useRuntimeConfig } from "#imports";
- export default defineEventHandler(async (event) => {
-   const prompt = await readBody(event);
-   const configuration = new Configuration({
-     apiKey: useRuntimeConfig().chatgpt.apiKey
-   });
-   const openai = new OpenAIApi(configuration);
-   try {
-     const response = await openai.createCompletion({
-       model: "text-davinci-003",
-       prompt,
-       temperature: 0.5,
-       max_tokens: 2048,
-       top_p: 1,
-       frequency_penalty: 0,
-       presence_penalty: 0
-     });
-     return response.data.choices[0].text?.slice(2);
-   } catch (error) {
-     throw createError({
-       statusCode: 500,
-       message: "Failed to forward request to OpenAI API"
-     });
-   }
- });