nuxt-chatgpt 0.2.3 → 0.2.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,10 +1,15 @@
  <!-- PROJECT LOGO -->
  <br />
  <div>
- <h1>Nuxt Chatgpt</h3>
- <img src="images/chatgpt-logo.png" alt="Logo">
+ <div>
+ <h1>Nuxt Chatgpt + Image Generator<a href="https://nuxtchatgpt.com" target="_blank">đŸ”Ĩ(VIEW DEMO)đŸ”Ĩ</a></h3>
+
+ </div>
+ <div style="display:flex; width:100%; justify-content:center">
+ <img src="images/logo.png" alt="Logo">
+ </div>

- > [ChatGPT](https://openai.com/) integration for [Nuxt 3](https://nuxt.com).
+ > [ChatGPT](https://nuxtchatgpt.com) integration for [Nuxt 3](https://nuxt.com).

  [![npm version][npm-version-src]][npm-version-href]
  [![npm downloads][npm-downloads-src]][npm-downloads-href]
@@ -12,6 +17,10 @@
  </div>
  <br />

+ ## About the project
+
+ [Nuxt ChatGPT](https://nuxtchatgpt.com) is a project built to showcase the capabilities of the Nuxt3 ChatGPT module. It functions as a ChatGPT clone with enhanced features, including the ability to organize and sort created documents into folders, offering an improved user experience for managing conversations and outputs.
+
  ## About the module

  This user-friendly module boasts of an easy integration process that enables seamless implementation into any [Nuxt 3](https://nuxt.com) project. With type-safe integration, you can integrate [ChatGPT](https://openai.com/) into your [Nuxt 3](https://nuxt.com) project without breaking a <b>sweat</b>. Enjoy easy access to the `chat`, and `chatCompletion` methods through the `useChatgpt()` composable. Additionally, the module guarantees <b><i>security</i></b> as requests are routed through a [Nitro Server](https://nuxt.com/docs/guide/concepts/server-engine), thus preventing the exposure of your <b>API Key</b>. The module use [openai](https://github.com/openai/openai-node) library version 4.0.0 behind the scene.
@@ -20,7 +29,7 @@ This user-friendly module boasts of an easy integration process that enables sea

  - đŸ’Ē &nbsp; Easy implementation into any [Nuxt 3](https://nuxt.com) project.
  - 👉 &nbsp; Type-safe integration of Chatgpt into your [Nuxt 3](https://nuxt.com) project.
- - đŸ•šī¸ &nbsp; Provides a `useChatgpt()` composable that grants easy access to the `chat`, and `chatCompletion` methods.
+ - đŸ•šī¸ &nbsp; Provides a `useChatgpt()` composable that grants easy access to the `chat`, and `chatCompletion`, and `generateImage` methods.
  - đŸ”Ĩ &nbsp; Ensures security by routing requests through a [Nitro Server](https://nuxt.com/docs/guide/concepts/server-engine), preventing the <b>API Key</b> from being exposed.
  - 🧱 &nbsp; It is lightweight and performs well.
@@ -55,15 +64,24 @@ That's it! You can now use Nuxt Chatgpt in your Nuxt app đŸ”Ĩ
  ## Usage & Examples

- To access the `chat`, and `chatCompletion` methods in the nuxt-chatgpt module, you can use the `useChatgpt()` composable, which provides easy access to them. The `chat`, and `chatCompletion` methods requires three parameters:
+ To access the `chat`, `chatCompletion`, and `generateImage` methods in the nuxt-chatgpt module, you can use the `useChatgpt()` composable, which provides easy access to them.

+ The `chat`, and `chatCompletion` methods requires three parameters:

  | Name | Type | Default | Description |
  |--|--|--|--|
  |**message**|`String`|available only for `chat()`|A string representing the text message that you want to send to the GPT model for processing.
  |**messages**|`Array`|available only for `chatCompletion()`|An array of objects that contains `role` and `content`
- |**model**|`String`|`text-davinci-003` for `chat()` and `gpt-3.5-turbo` for `chatCompletion()`|Represent certain model for different types of natural language processing tasks.
- |**options**|`Object`|`{ temperature: 0.5, max_tokens: 2048, top_p: 1 frequency_penalty: 0, presence_penalty: 0 }`|An optional object that specifies any additional options you want to pass to the API request, such as the number of responses to generate, and the maximum length of each response.
+ |**model**|`String`|`gpt-4o-mini` for `chat()` and `gpt-4o-mini` for `chatCompletion()`|Represent certain model for different types of natural language processing tasks.
+ |**options**|`Object`|`{ temperature: 0.5, max_tokens: 2048, top_p: 1 frequency_penalty: 0, presence_penalty: 0 }`|An optional object that specifies any additional options you want to pass to the API request, such as, the number of responses to generate, and the maximum length of each response.
+
+ The `generateImage` method requires one parameters:
+
+ | Name | Type | Default | Description |
+ |--|--|--|--|
+ |**message**|`String`| A text description of the desired image(s). The maximum length is 1000 characters.
+ |**model**|`String`|`dall-e-2` or `dall-e-3`| The model to use for image generation. Only dall-e-2 is supported at this time.
+ |**options**|`Object`|`{ n: 1, quality: 'standard', response_format: 'url', size: '1024x1024', style: 'natural' }`|An optional object that specifies any additional options you want to pass to the API request, such as, the number of images to generate, quality, size and style of the generated images.

  Available models:
@@ -73,17 +91,20 @@ Available models:
  - gpt-3.5-turbo-0301
  - gpt-3.5-turbo-1106
  - gpt-4
+ - gpt-4o
+ - gpt-4o-mini
+ - gpt-4-turbo
  - gpt-4-1106-preview
  - gpt-4-0314
  - gpt-4-0613
  - gpt-4-32k
  - gpt-4-32k-0314
  - gpt-4-32k-0613
-
- You need to join waitlist to use gpt-4 models within `chatCompletion` method
+ - dall-e-2
+ - dall-e-3

  ### Simple `chat` usage
- In the following example, the model is unspecified, and the text-davinci-003 model will be used by default.
+ In the following example, the model is unspecified, and the gpt-4o-mini model will be used by default.

  ```js
  const { chat } = useChatgpt()
@@ -125,7 +146,7 @@ const inputData = ref('')

  async function sendMessage() {
  try {
- const response = await chat(inputData.value, 'gpt-3.5-turbo')
+ const response = await chat(inputData.value, 'gpt-4o-mini')
  data.value = response
  } catch(error) {
  alert(`Join the waiting list if you want to use GPT-4 models: ${error}`)
@@ -148,7 +169,7 @@ async function sendMessage() {
  ```

  ### Simple `chatCompletion` usage
- In the following example, the model is unspecified, and the gpt-3.5-turbo model will be used by default.
+ In the following example, the model is unspecified, and the gpt-4o-mini model will be used by default.

  ```js
  const { chatCompletion } = useChatgpt()
@@ -218,7 +239,7 @@ async function sendMessage() {

  chatTree.value.push(message)

- const response = await chatCompletion(chatTree.value, 'gpt-3.5-turbo-0301')
+ const response = await chatCompletion(chatTree.value, 'gpt-4o-mini')

  const responseMessage = {
  role: response[0].message.role,
@@ -254,6 +275,90 @@ async function sendMessage() {
  </template>
  ```

+ ### Simple `generateImage` usage
+ In the following example, the model is unspecified, and the dall-e-2 model will be used by default.
+
+ ```js
+ const { generateImage } = useChatgpt()
+
+ const images = ref([])
+ const inputData = ref('')
+ const loading = ref(false)
+
+ async function sendPrompt() {
+ loading.value = true
+ try {
+ images.value = await generateImage(inputData.value)
+ } catch (error) {
+ alert(`Error: ${error}`)
+ }
+ loading.value = false
+ }
+
+ ```
+
+ ```html
+ <template>
+ <div>
+ <div v-if="!loading && !images.length">
+ <input v-model="inputData">
+ <button
+ @click="sendPrompt"
+ v-text="'Send Prompt'"
+ />
+ </div>
+ <div v-else-if="loading">Generating, please wait ...</div>
+ <div v-if="images && !loading" >
+ <img v-for="image in images" :key="image.url" :src="image.url" alt="generated-image"/>
+ </div>
+ </div>
+ </template>
+ ```
+
+ ### Usage of `generateImage` with different model, and options
+
+ ```js
+ const { generateImage } = useChatgpt()
+
+ const images = ref([])
+ const inputData = ref('')
+ const loading = ref(false)
+
+ async function sendPrompt() {
+ loading.value = true
+ try {
+ images.value = await generateImage(inputData.value, 'dall-e-2', {
+ n: 1,
+ quality: 'standard',
+ response_format: 'url',
+ size: '1024x1024',
+ style: 'natural'
+ })
+ } catch (error) {
+ alert(`Error: ${error}`)
+ }
+ loading.value = false
+ }
+ ```
+
+ ```html
+ <template>
+ <div>
+ <div v-if="!loading && !images.length">
+ <input v-model="inputData">
+ <button
+ @click="sendPrompt"
+ v-text="'Send Prompt'"
+ />
+ </div>
+ <div v-else-if="loading">Generating, please wait ...</div>
+ <div v-if="images && !loading" >
+ <img v-for="image in images" :key="image.url" :src="image.url" alt="generated-image"/>
+ </div>
+ </div>
+ </template>
+ ```
+
  ## chat vs chatCompletion

  The `chat` method allows the user to send a prompt to the OpenAI API and receive a response. You can use this endpoint to build conversational interfaces that can interact with users in a natural way. For example, you could use the chat method to build a chatbot that can answer customer service questions or provide information about a product or service.
@@ -293,7 +398,7 @@ Distributed under the MIT License. See `LICENSE.txt` for more information.

  Oliver Trajceski - [LinkedIn](https://mk.linkedin.com/in/oliver-trajceski-8a28b070) - oliver@akrinum.com

- Project Link: [https://vuemadness.com/vuehub/nuxt-chatgpt/](https://vuemadness.com/vuehub/nuxt-chatgpt/)
+ Project Link: [https://nuxtchatgpt.com](https://nuxtchatgpt.com)

  ## Development

package/dist/module.json CHANGED
@@ -4,5 +4,5 @@
  "compatibility": {
  "nuxt": "^3.0.0"
  },
- "version": "0.2.3"
+ "version": "0.2.5"
  }
package/dist/module.mjs CHANGED
@@ -45,6 +45,13 @@ const module = defineNuxtModule({
  handler: resolve(runtimeDir, "server/api/chat-completion")
  }
  );
+ addServerHandler(
+ {
+ route: "/api/image-generate",
+ method: "post",
+ handler: resolve(runtimeDir, "server/api/image-generate")
+ }
+ );
  nuxt.options.build.transpile.push(runtimeDir);
  }
  });
@@ -34,5 +34,22 @@ export const useChatgpt = () => {
  });
  }
  };
- return { chat, chatCompletion };
+ const generateImage = async (message, model, options) => {
+ try {
+ return await $fetch("/api/image-generate", {
+ method: "POST",
+ body: {
+ message,
+ model,
+ options
+ }
+ });
+ } catch (error) {
+ throw createError({
+ statusCode: 500,
+ message: "Failed to forward request to server"
+ });
+ }
+ };
+ return { chat, chatCompletion, generateImage };
  };
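
The `generateImage` method added in this hunk follows the same forward-to-Nitro pattern as `chat` and `chatCompletion`: the browser never sees the OpenAI key, it only POSTs the prompt to the module's own server route. A minimal sketch of that pattern, runnable outside Nuxt — `makeGenerateImage` and `postJson` are illustrative names, with `postJson` standing in for Nuxt's `$fetch`:

```javascript
// Sketch of the forward-to-server pattern used by the composable.
// `postJson` is an injected async transport (a stand-in for $fetch).
function makeGenerateImage(postJson) {
  return async function generateImage(message, model, options) {
    try {
      // The API key stays on the server; the client only ships the prompt.
      return await postJson("/api/image-generate", { message, model, options });
    } catch (error) {
      // Mirror the module's behavior: collapse transport failures
      // into a single generic error.
      throw new Error("Failed to forward request to server");
    }
  };
}

// Usage with a fake transport that echoes the request back:
const fakePost = async (route, body) => ({ route, body });
const generateImage = makeGenerateImage(fakePost);
```

Injecting the transport keeps the pattern testable; in the real module the route is handled by the Nitro server registered in `module.mjs` above.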
@@ -4,9 +4,14 @@ export declare const MODEL_GPT_TURBO_3_5: string;
  export declare const MODEL_GPT_TURBO_3_5_0301: string;
  export declare const MODEL_GPT_TURBO_3_5_1106 = "gpt-3.5-turbo-1106";
  export declare const MODEL_GPT_4: string;
+ export declare const MODEL_GPT_4_O: string;
+ export declare const MODEL_GPT_4_MINI: string;
+ export declare const MODEL_GPT_4_TURBO: string;
  export declare const MODEL_GPT_4_1106_PREVIEW = "gpt-4-1106-preview";
  export declare const MODEL_GPT_4_0314: string;
  export declare const MODEL_GPT_4_0613 = "gpt-4-0613";
  export declare const MODEL_GPT_4_32k: string;
  export declare const MODEL_GPT_4_32k_0314: string;
  export declare const MODEL_GPT_4_32k_0613 = "gpt-4-32k-0613";
+ export declare const MODEL_GPT_DALL_E_2 = "dall-e-2";
+ export declare const MODEL_GPT_DALL_E_3 = "dall-e-3";
@@ -4,9 +4,14 @@ export const MODEL_GPT_TURBO_3_5 = "gpt-3.5-turbo";
  export const MODEL_GPT_TURBO_3_5_0301 = "gpt-3.5-turbo-0301";
  export const MODEL_GPT_TURBO_3_5_1106 = "gpt-3.5-turbo-1106";
  export const MODEL_GPT_4 = "gpt-4";
+ export const MODEL_GPT_4_O = "gpt-4o";
+ export const MODEL_GPT_4_MINI = "gpt-4o-mini";
+ export const MODEL_GPT_4_TURBO = "gpt-4-turbo";
  export const MODEL_GPT_4_1106_PREVIEW = "gpt-4-1106-preview";
  export const MODEL_GPT_4_0314 = "gpt-4-0314";
  export const MODEL_GPT_4_0613 = "gpt-4-0613";
  export const MODEL_GPT_4_32k = "gpt-4-32k";
  export const MODEL_GPT_4_32k_0314 = "gpt-4-32k-0314";
  export const MODEL_GPT_4_32k_0613 = "gpt-4-32k-0613";
+ export const MODEL_GPT_DALL_E_2 = "dall-e-2";
+ export const MODEL_GPT_DALL_E_3 = "dall-e-3";
@@ -5,3 +5,10 @@ export declare const defaultOptions: {
  frequency_penalty: number;
  presence_penalty: number;
  };
+ export declare const defaultDaleOptions: {
+ n: number;
+ quality: string;
+ response_format: string;
+ size: string;
+ style: string;
+ };
@@ -5,3 +5,10 @@ export const defaultOptions = {
  frequency_penalty: 0,
  presence_penalty: 0
  };
+ export const defaultDaleOptions = {
+ n: 1,
+ quality: "standard",
+ response_format: "url",
+ size: "1024x1024",
+ style: "natural"
+ };
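
Note how these defaults interact with caller options in the handlers below: the spread is `...options || defaultDaleOptions`, and `||` binds before the spread, so any options object from the caller replaces the defaults wholesale rather than being merged key-by-key. A small sketch of that semantics — `buildRequestOptions` is an illustrative helper, not part of the module:

```javascript
// Subset of the shipped defaults, for illustration.
const defaultDaleOptions = { n: 1, size: "1024x1024", style: "natural" };

// The handlers build request options as `{ ..., ...options || defaults }`:
// if the caller passes ANY options object, the defaults are dropped entirely.
function buildRequestOptions(prompt, options) {
  return { prompt, ...(options || defaultDaleOptions) };
}

const withDefaults = buildRequestOptions("a cat");          // falls back to defaults
const withPartial = buildRequestOptions("a cat", { n: 2 }); // no size/style applied
```

So a caller who overrides only `n` loses `size` and `style`; pass a complete options object when overriding anything.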
@@ -1,7 +1,7 @@
  import OpenAI from "openai";
  import { createError, defineEventHandler, readBody } from "h3";
  import { defaultOptions } from "../../constants/options.mjs";
- import { MODEL_GPT_TURBO_3_5 } from "../../constants/models.mjs";
+ import { MODEL_GPT_4_MINI } from "../../constants/models.mjs";
  import { modelMap } from "../../utils/model-map.mjs";
  import { useRuntimeConfig } from "#imports";
  export default defineEventHandler(async (event) => {
@@ -17,7 +17,7 @@ export default defineEventHandler(async (event) => {
  });
  const requestOptions = {
  messages,
- model: !model ? modelMap[MODEL_GPT_TURBO_3_5] : modelMap[model],
+ model: !model ? modelMap[MODEL_GPT_4_MINI] : modelMap[model],
  ...options || defaultOptions
  };
  try {
@@ -1,7 +1,7 @@
  import OpenAI from "openai";
  import { createError, defineEventHandler, readBody } from "h3";
  import { defaultOptions } from "../../constants/options.mjs";
- import { MODEL_GPT_TURBO_3_5 } from "../../constants/models.mjs";
+ import { MODEL_GPT_4_MINI } from "../../constants/models.mjs";
  import { modelMap } from "../../utils/model-map.mjs";
  import { useRuntimeConfig } from "#imports";
  export default defineEventHandler(async (event) => {
@@ -17,7 +17,7 @@ export default defineEventHandler(async (event) => {
  });
  const requestOptions = {
  messages: [{ role: "user", content: message }],
- model: !model ? modelMap[MODEL_GPT_TURBO_3_5] : modelMap[model],
+ model: !model ? modelMap[MODEL_GPT_4_MINI] : modelMap[model],
  ...options || defaultOptions
  };
  try {
@@ -0,0 +1,3 @@
+ import OpenAI from 'openai';
+ declare const _default: import("h3").EventHandler<import("h3").EventHandlerRequest, Promise<OpenAI.Images.Image[]>>;
+ export default _default;
@@ -0,0 +1,32 @@
+ import OpenAI from "openai";
+ import { createError, defineEventHandler, readBody } from "h3";
+ import { defaultDaleOptions } from "../../constants/options.mjs";
+ import { MODEL_GPT_DALL_E_2 } from "../../constants/models.mjs";
+ import { modelMap } from "../../utils/model-map.mjs";
+ import { useRuntimeConfig } from "#imports";
+ export default defineEventHandler(async (event) => {
+ const { message, model, options } = await readBody(event);
+ if (!useRuntimeConfig().chatgpt.apiKey) {
+ throw createError({
+ statusCode: 403,
+ message: "Missing OpenAI API Key"
+ });
+ }
+ const openai = new OpenAI({
+ apiKey: useRuntimeConfig().chatgpt.apiKey
+ });
+ const requestOptions = {
+ prompt: message,
+ model: !model ? modelMap[MODEL_GPT_DALL_E_2] : modelMap[model],
+ ...options || defaultDaleOptions
+ };
+ try {
+ const response = await openai.images.generate(requestOptions);
+ return response.data;
+ } catch (error) {
+ throw createError({
+ statusCode: 500,
+ message: "Failed to forward request to OpenAI API"
+ });
+ }
+ });
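
The request-shaping in this new handler can be isolated as a plain function: the body's `message` becomes the `prompt`, and a missing model falls back to `dall-e-2` via the model map. A sketch under those assumptions — `shapeImageRequest` is an illustrative name, and the map below is a subset of the module's real `modelMap`:

```javascript
// Subset of the module's model map, for illustration.
const MODEL_GPT_DALL_E_2 = "dall-e-2";
const modelMap = { "dall-e-2": "dall-e-2", "dall-e-3": "dall-e-3" };

// Mirrors the handler: prompt from `message`, model defaulted through the
// map, and caller options replacing the defaults wholesale (see the `||`).
function shapeImageRequest(body, defaults) {
  const { message, model, options } = body;
  return {
    prompt: message,
    model: !model ? modelMap[MODEL_GPT_DALL_E_2] : modelMap[model],
    ...(options || defaults),
  };
}

const req = shapeImageRequest({ message: "a lighthouse at dusk" }, { n: 1 });
```

The shaped object is what the handler passes to `openai.images.generate`.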
@@ -1,6 +1,7 @@
  export interface IChatgptClient {
  chat(IMessage): Promise,
  chatCompletion(IMessage): Promise
+ generateImage(IMessage): Promise
  }

  export interface IMessage {
@@ -4,5 +4,7 @@ export declare const modelMap: {
  "gpt-4-1106-preview": string;
  "gpt-4-0613": string;
  "gpt-4-32k-0613": string;
+ "dall-e-2": string;
+ "dall-e-3": string;
  default: string;
  };
@@ -5,12 +5,17 @@ import {
  MODEL_GPT_TURBO_3_5_0301,
  MODEL_GPT_TURBO_3_5_1106,
  MODEL_GPT_4,
+ MODEL_GPT_4_O,
+ MODEL_GPT_4_MINI,
+ MODEL_GPT_4_TURBO,
  MODEL_GPT_4_1106_PREVIEW,
  MODEL_GPT_4_0314,
  MODEL_GPT_4_0613,
  MODEL_GPT_4_32k,
  MODEL_GPT_4_32k_0314,
- MODEL_GPT_4_32k_0613
+ MODEL_GPT_4_32k_0613,
+ MODEL_GPT_DALL_E_2,
+ MODEL_GPT_DALL_E_3
  } from "../constants/models.mjs";
  export const modelMap = {
  [MODEL_TEXT_DAVINCI_003]: MODEL_TEXT_DAVINCI_003,
@@ -19,11 +24,16 @@ export const modelMap = {
  [MODEL_GPT_TURBO_3_5_0301]: MODEL_GPT_TURBO_3_5_0301,
  [MODEL_GPT_TURBO_3_5_1106]: MODEL_GPT_TURBO_3_5_1106,
  [MODEL_GPT_4]: MODEL_GPT_4,
+ [MODEL_GPT_4_O]: MODEL_GPT_4_O,
+ [MODEL_GPT_4_MINI]: MODEL_GPT_4_MINI,
+ [MODEL_GPT_4_TURBO]: MODEL_GPT_4_TURBO,
  [MODEL_GPT_4_1106_PREVIEW]: MODEL_GPT_4_1106_PREVIEW,
  [MODEL_GPT_4_0314]: MODEL_GPT_4_0314,
  [MODEL_GPT_4_0613]: MODEL_GPT_4_0613,
  [MODEL_GPT_4_32k]: MODEL_GPT_4_32k,
  [MODEL_GPT_4_32k_0314]: MODEL_GPT_4_32k_0314,
  [MODEL_GPT_4_32k_0613]: MODEL_GPT_4_32k_0613,
- default: MODEL_GPT_TURBO_3_5
+ [MODEL_GPT_DALL_E_2]: MODEL_GPT_DALL_E_2,
+ [MODEL_GPT_DALL_E_3]: MODEL_GPT_DALL_E_3,
+ default: MODEL_GPT_4_MINI
  };
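
The resolution behavior behind this map change can be sketched as follows. The handlers select `!model ? modelMap[MODEL_GPT_4_MINI] : modelMap[model]`, so a falsy model yields `gpt-4o-mini`, while an unknown model string yields `undefined` (the `default` key is never consulted by that expression). `resolveModel` is an illustrative helper, and the map below is a subset of the real one:

```javascript
// Subset of the module's modelMap, for illustration.
const MODEL_GPT_4_MINI = "gpt-4o-mini";
const modelMap = {
  "gpt-4o": "gpt-4o",
  "gpt-4o-mini": "gpt-4o-mini",
  "dall-e-2": "dall-e-2",
  default: MODEL_GPT_4_MINI,
};

// Mirrors the handlers' selection expression: falsy model → gpt-4o-mini;
// unknown model → undefined, since there is no catch-all lookup.
function resolveModel(model) {
  return !model ? modelMap[MODEL_GPT_4_MINI] : modelMap[model];
}
```

Callers passing a model string not listed in the map therefore get no model in the request options, rather than a silent fallback.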
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "nuxt-chatgpt",
- "version": "0.2.3",
+ "version": "0.2.5",
  "description": "ChatGPT integration for Nuxt 3",
  "license": "MIT",
  "type": "module",
@@ -26,7 +26,9 @@
  "nuxt3",
  "nuxt",
  "nuxt.js",
- "nuxt-chatgpt"
+ "nuxt-chatgpt",
+ "image",
+ "image-generator"
  ],
  "exports": {
  ".": {
@@ -47,7 +49,7 @@
  "dev:generate": "nuxi generate playground",
  "dev:prepare": "nuxt-module-build --stub && nuxi prepare playground",
  "dev:preview": "nuxi preview playground",
- "release": "npm run lint && npm run test && npm run prepack && changelogen --release && npm publish && git push --follow-tags",
+ "release": "npm run lint && npm run test && npm run prepack && changelogen --release --minor && npm publish && git push --follow-tags",
  "lint": "eslint .",
  "test": "vitest run",
  "test:watch": "vitest watch"