nuxt-chatgpt 0.3.0 → 0.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,8 +2,9 @@
  <br />
  <div>
  <div>
- <h1>Hausly + Image Generator<a href="https://hausly.io" target="_blank">đŸ”Ĩ(IMAGE DEMO)đŸ”Ĩ</a></h2></h1>
- <h2>Nuxt Chatgpt + Image Generator<a href="https://nuxtchatgpt.com" target="_blank">đŸ”Ĩ(CHATGPT DEMO)đŸ”Ĩ</a></h2>
+ <h1>Hausly + Image Generator<a href="https://hausly.io" target="_blank">đŸ”Ĩ(Hausly)đŸ”Ĩ</a></h1>
+ <h2>ReplyGuard + Generate Email Reply<a href="https://replyguard.ai" target="_blank">đŸ”Ĩ(ReplyGuard)đŸ”Ĩ</a></h2>
+ <h2>Nuxt Chatgpt + Image Generator<a href="https://nuxtchatgpt.com" target="_blank">đŸ”Ĩ(Nuxt ChatGPT)đŸ”Ĩ</a></h2>

  </div>
  <div style="display:flex; width:100%; justify-content:center">
@@ -31,10 +32,10 @@ This user-friendly module boasts of an easy integration process that enables sea
  - đŸ’Ē &nbsp; Easy implementation into any [Nuxt 3](https://nuxt.com) project.
  - 👉 &nbsp; Type-safe integration of Chatgpt into your [Nuxt 3](https://nuxt.com) project.
  - đŸ•šī¸ &nbsp; Provides a `useChatgpt()` composable that grants easy access to the `chat`, `chatCompletion`, and `generateImage` methods.
+ - đŸ•šī¸ &nbsp; Provides `chatCompletionStream` for real-time streamed responses (SSE).
  - đŸ”Ĩ &nbsp; Ensures security by routing requests through a [Nitro Server](https://nuxt.com/docs/guide/concepts/server-engine), preventing the <b>API Key</b> from being exposed.
  - 🧱 &nbsp; It is lightweight and performs well.

-
  ## Recommended Node Version

  ### min `v18.20.5 or higher`
@@ -71,54 +72,56 @@ That's it! You can now use Nuxt Chatgpt in your Nuxt app đŸ”Ĩ

  ## Usage & Examples

- To access the `chat`, `chatCompletion`, and `generateImage` methods in the nuxt-chatgpt module, you can use the `useChatgpt()` composable, which provides easy access to them.
+ To access the `chat`, `chatCompletion`, `chatCompletionStream`, and `generateImage` methods in the nuxt-chatgpt module, you can use the `useChatgpt()` composable, which provides easy access to them.

  The `chat` and `chatCompletion` methods require three parameters:

- | Name | Type | Default | Description |
- |--|--|--|--|
- |**message**|`String`|available only for `chat()`|A string representing the text message that you want to send to the GPT model for processing.
- |**messages**|`Array`|available only for `chatCompletion()`|An array of objects that contains `role` and `content`
- |**model**|`String`|`gpt-5-mini` for `chat()` and `gpt-5-mini` for `chatCompletion()`|Represent certain model for different types of natural language processing tasks.
- |**options**|`Object`|`{ temperature: 0.5, max_tokens: 2048, top_p: 1 frequency_penalty: 0, presence_penalty: 0 }`|An optional object that specifies any additional options you want to pass to the API request, such as, the number of responses to generate, and the maximum length of each response.
+ | Name | Type | Default | Description |
+ | --- | --- | --- | --- |
+ | **message** | `String` | available only for `chat()` | A string representing the text message that you want to send to the GPT model for processing. |
+ | **messages** | `Array` | available only for `chatCompletion()` and `chatCompletionStream()` | An array of objects, each containing `role` and `content`. |
+ | **model** | `String` | `gpt-5-mini` for `chat()` and `gpt-5-mini` for `chatCompletion()` | Specifies the model to use for different natural language processing tasks. |
+ | **options** | `Object` | `{ temperature: 0.5, max_tokens: 2048, top_p: 1, frequency_penalty: 0, presence_penalty: 0 }` | An optional object with any additional options you want to pass to the API request, such as the number of responses to generate and the maximum length of each response. |

  The `generateImage` method requires one parameter:

- | Name | Type | Default | Description |
- |--|--|--|--|
- |**message**|`String`| A text description of the desired image(s). The maximum length is 1000 characters.
- |**model**|`String`|`gpt-image-1-mini`| The model to use for image generation.
- |**options**|`Object`|`{ n: 1, quality: 'standard', response_format: 'url', size: '1024x1024', style: 'natural' }`|An optional object that specifies any additional options you want to pass to the API request, such as, the number of images to generate, quality, size and style of the generated images.
+ | Name | Type | Default | Description |
+ | --- | --- | --- | --- |
+ | **message** | `String` | | A text description of the desired image(s). The maximum length is 1000 characters. |
+ | **model** | `String` | `gpt-image-1-mini` | The model to use for image generation. |
+ | **options** | `Object` | `{ n: 1, quality: 'standard', response_format: 'url', size: '1024x1024', style: 'natural' }` | An optional object with any additional options you want to pass to the API request, such as the number of images to generate and their quality, size, and style. |

  Available models:

- - text-davinci-002
- - text-davinci-003
- - gpt-3.5-turbo
- - gpt-3.5-turbo-0301
- - gpt-3.5-turbo-1106
- - gpt-4
- - gpt-4o
- - gpt-4o-mini
- - gpt-4-turbo
- - gpt-4-1106-preview
- - gpt-4-0314
- - gpt-4-0613
- - gpt-4-32k
- - gpt-4-32k-0314
- - gpt-4-32k-0613
- - gpt-5-nano
- - gpt-5-mini
- - gpt-5-pro
- - gpt-5.1
- - gpt-5.2-pro
- - gpt-5.2
- - dall-e-3
- - gpt-image-1
- - gpt-image-1-mini
- - gpt-image-1.5
-
- ### Simple `chat` usage
+ * text-davinci-002
+ * text-davinci-003
+ * gpt-3.5-turbo
+ * gpt-3.5-turbo-0301
+ * gpt-3.5-turbo-1106
+ * gpt-4
+ * gpt-4o
+ * gpt-4o-mini
+ * gpt-4-turbo
+ * gpt-4-1106-preview
+ * gpt-4-0314
+ * gpt-4-0613
+ * gpt-4-32k
+ * gpt-4-32k-0314
+ * gpt-4-32k-0613
+ * gpt-5-nano
+ * gpt-5-mini
+ * gpt-5-pro
+ * gpt-5.1
+ * gpt-5.2-pro
+ * gpt-5.2
+ * dall-e-3
+ * gpt-image-1
+ * gpt-image-1-mini
+ * gpt-image-1.5
+
+ ### Simple `chat` usage
+
  In the following example, the model is unspecified, and the `gpt-5-mini` model will be used by default.

  ```js
@@ -183,7 +186,8 @@ async function sendMessage() {
  </template>
  ```

  ### Simple `chatCompletion` usage
+
  In the following example, the model is unspecified, and the `gpt-5-mini` model will be used by default.

  ```js
@@ -290,7 +294,76 @@ async function sendMessage() {
  </template>
  ```

- ### Simple `generateImage` usage
+ ### Simple `chatCompletionStream` usage (streaming)
+
+ In the following example, the model is unspecified, and the `gpt-5-mini` model will be used by default.
+
+ ```js
+ const { chatCompletionStream } = useChatgpt()
+
+ const chatTree = ref([])
+ const inputData = ref('')
+
+ async function sendStreamedMessage() {
+   try {
+     const message = {
+       role: 'user',
+       content: `${inputData.value}`,
+     }
+
+     chatTree.value.push(message)
+
+     chatTree.value.push({
+       role: 'assistant',
+       content: ''
+     })
+
+     // Read the placeholder back through the reactive array so token
+     // updates trigger re-renders
+     const assistantMessage = chatTree.value[chatTree.value.length - 1]
+
+     // IMPORTANT: do not send the placeholder assistant message to the server
+     const payloadMessages = chatTree.value.slice(0, -1)
+
+     await chatCompletionStream(payloadMessages, undefined, undefined, {
+       onToken(token) {
+         assistantMessage.content += token
+       },
+       onDone() {
+         // streaming finished
+       },
+       onError(err) {
+         alert(`Stream error: ${typeof err === 'string' ? err : err?.message || 'Unknown'}`)
+       }
+     })
+   } catch (error) {
+     alert(`Verify your organization if you want to use GPT-5 models: ${error}`)
+   }
+ }
+ ```
+
+ ```html
+ <template>
+   <div>
+     <input v-model="inputData">
+     <button
+       @click="sendStreamedMessage"
+       v-text="'Send Streamed'"
+     />
+     <div>
+       <div
+         v-for="(chat, index) in chatTree"
+         :key="index"
+       >
+         <strong>{{ chat.role }} :</strong>
+         <div>{{ chat.content }}</div>
+       </div>
+     </div>
+   </div>
+ </template>
+ ```
+
+ ### Simple `generateImage` usage
+
  In the following example, the model is unspecified, and the `gpt-image-1-mini` model will be used by default.

  ```js
@@ -389,15 +462,19 @@ The `chat` method allows the user to send a prompt to the OpenAI API and receive

  The `chatCompletion` method is similar to the `chat` method, but it provides additional functionality for generating longer, more complex responses. Specifically, the `chatCompletion` method allows you to provide a conversation history as input, which the API can use to generate a response that is consistent with the context of the conversation. This makes it possible to build chatbots that can engage in longer, more natural conversations with users.

+ ## chatCompletionStream vs chatCompletion
+
+ The `chatCompletionStream` method returns the assistant response **as a stream** (token by token). This is useful when you want to build a ChatGPT-like UI where the answer is displayed while it is being generated, instead of waiting for the full message.

  ## Module Options

- | Name | Type | Default | Description |
- |--|--|--|--|
- |**apiKey**|`String`|`xxxxxx`|Your apiKey here goes here
- |**isEnabled**|`Boolean`|`true`| Enable or disable the module. `True` by default.
+ | Name | Type | Default | Description |
+ | --- | --- | --- | --- |
+ | **apiKey** | `String` | `xxxxxx` | Your OpenAI API key goes here. |
+ | **isEnabled** | `Boolean` | `true` | Enable or disable the module. `true` by default. |

  <!-- CONTRIBUTING -->
+
  ## Contributing

  Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
@@ -411,16 +488,17 @@ Don't forget to give the project a star! Thanks again!
  4. Push to the Branch (`git push origin feature/AmazingFeature`)
  5. Open a Pull Request

-
  <!-- LICENSE -->
+
  ## License

  Distributed under the MIT License. See `LICENSE.txt` for more information.

  <!-- CONTACT -->
+
  ## Contact

- Oliver Trajceski - [LinkedIn](https://mk.linkedin.com/in/oliver-trajceski-8a28b070) - oliver@akrinum.com
+ Oliver Trajceski - [LinkedIn](https://mk.linkedin.com/in/oliver-trajceski-8a28b070) - [oliver@akrinum.com](mailto:oliver@akrinum.com)

  Project Link: [https://nuxtchatgpt.com](https://nuxtchatgpt.com)

@@ -482,4 +560,4 @@ Use this space to list resources you find helpful and would like to give credit
  [npm-downloads-src]: https://img.shields.io/npm/dm/nuxt-chatgpt.svg?style=flat&colorA=18181B&colorB=28CF8D
  [npm-downloads-href]: https://npmjs.com/package/nuxt-chatgpt
  [license-src]: https://img.shields.io/npm/l/nuxt-chatgpt.svg?style=flat&colorA=18181B&colorB=28CF8D
- [license-href]: https://npmjs.com/package/nuxt-chatgpt
+ [license-href]: https://npmjs.com/package/nuxt-chatgpt
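The streaming client added in this release parses server-sent events by buffering the response body and splitting it on blank lines. As a standalone sketch of that wire format (the `parseSseFrame` and `parseSseBuffer` helper names are illustrative, not exported by the package), the frame handling in the composable works like this:

```javascript
// Each SSE frame is an "event:" line naming the event plus one or more
// "data:" lines; frames are separated by a blank line. This mirrors the
// parsing loop in the updated useChatgpt composable.
function parseSseFrame(frame) {
  let eventName = 'message'
  const dataLines = []
  for (const line of frame.split('\n')) {
    if (line.startsWith('event:')) eventName = line.slice(6).trim()
    if (line.startsWith('data:')) dataLines.push(line.slice(5).trimStart())
  }
  // Multi-line payloads arrive as several data: lines and are re-joined here
  return { eventName, data: dataLines.join('\n') }
}

// A buffer may contain several complete frames; split on the separator.
function parseSseBuffer(buffer) {
  return buffer.split('\n\n').filter(Boolean).map(parseSseFrame)
}

const frames = parseSseBuffer('event: token\ndata: Hello\n\nevent: done\ndata: [DONE]\n\n')
// frames[0] → { eventName: 'token', data: 'Hello' }
// frames[1] → { eventName: 'done', data: '[DONE]' }
```

The real composable additionally keeps any trailing partial frame in the buffer until the next chunk arrives, since a network read can end mid-frame.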
package/dist/module.json CHANGED
@@ -4,7 +4,7 @@
    "compatibility": {
      "nuxt": ">=3.0.0 <5.0.0"
    },
-   "version": "0.3.0",
+   "version": "0.4.0",
    "builder": {
      "@nuxt/module-builder": "1.0.2",
      "unbuild": "3.6.1"
package/dist/module.mjs CHANGED
@@ -45,6 +45,13 @@ const module$1 = defineNuxtModule({
        handler: resolve(runtimeDir, "server/api/chat-completion")
      }
    );
+   addServerHandler(
+     {
+       route: "/api/chat-completion-stream",
+       method: "post",
+       handler: resolve(runtimeDir, "server/api/chat-completion-stream")
+     }
+   );
    addServerHandler(
      {
        route: "/api/image-generate",
@@ -1,2 +1,10 @@
- import type { IChatgptClient } from "../types/index.js";
- export declare const useChatgpt: () => IChatgptClient;
+ import type { IChatgptClient, IModel, IOptions } from "../types/index.js";
+ type StreamHandlers = {
+   onToken?: (token: string) => void;
+   onDone?: () => void;
+   onError?: (err: any) => void;
+ };
+ export declare const useChatgpt: () => IChatgptClient & {
+   chatCompletionStream: (messages?: [], model?: IModel, options?: IOptions, handlers?: StreamHandlers, abortSignal?: AbortSignal) => Promise<void>;
+ };
+ export {};
@@ -4,11 +4,7 @@ export const useChatgpt = () => {
      try {
        return await $fetch("/api/chat", {
          method: "POST",
-         body: {
-           message,
-           model,
-           options
-         }
+         body: { message, model, options }
        });
      } catch (error) {
        throw createError({
@@ -21,11 +17,7 @@ export const useChatgpt = () => {
      try {
        return await $fetch("/api/chat-completion", {
          method: "POST",
-         body: {
-           messages,
-           model,
-           options
-         }
+         body: { messages, model, options }
        });
      } catch (error) {
        throw createError({
@@ -38,12 +30,68 @@ export const useChatgpt = () => {
      try {
        return await $fetch("/api/image-generate", {
          method: "POST",
-         body: {
-           message,
-           model,
-           options
-         }
+         body: { message, model, options }
+       });
+     } catch (error) {
+       throw createError({
+         statusCode: 500,
+         message: "Failed to forward request to server"
        });
+     }
+   };
+   const chatCompletionStream = async (messages, model, options, handlers = {}, abortSignal) => {
+     try {
+       const res = await fetch("/api/chat-completion-stream", {
+         method: "POST",
+         headers: { "Content-Type": "application/json" },
+         body: JSON.stringify({ messages, model, options }),
+         signal: abortSignal
+       });
+       if (!res.ok || !res.body) {
+         const text = await res.text().catch(() => "");
+         throw new Error(
+           `Stream failed (${res.status}): ${text || res.statusText}`
+         );
+       }
+       const reader = res.body.getReader();
+       const decoder = new TextDecoder("utf-8");
+       let buffer = "";
+       const emit = (eventName, data) => {
+         if (eventName === "token") handlers.onToken?.(data);
+         else if (eventName === "done") handlers.onDone?.();
+         else if (eventName === "error") {
+           try {
+             handlers.onError?.(JSON.parse(data));
+           } catch {
+             handlers.onError?.(data);
+           }
+         }
+       };
+       while (true) {
+         const { value, done } = await reader.read();
+         if (done) break;
+         buffer += decoder.decode(value, { stream: true });
+         let idx;
+         while ((idx = buffer.indexOf("\n\n")) !== -1) {
+           const frame = buffer.slice(0, idx);
+           buffer = buffer.slice(idx + 2);
+           let eventName = "message";
+           const dataLines = [];
+           for (const line of frame.split("\n")) {
+             if (line.startsWith("event:")) eventName = line.slice(6).trim();
+             if (line.startsWith("data:"))
+               dataLines.push(line.slice(5).trimStart());
+           }
+           const data = dataLines.join("\n");
+           if (eventName === "done") {
+             emit("done", data);
+             return;
+           }
+           if (eventName === "token" || eventName === "error") {
+             emit(eventName, data);
+           }
+         }
+       }
      } catch (error) {
        throw createError({
          statusCode: 500,
@@ -51,5 +99,5 @@ export const useChatgpt = () => {
        });
      }
    };
-   return { chat, chatCompletion, generateImage };
+   return { chat, chatCompletion, generateImage, chatCompletionStream };
  };
@@ -0,0 +1,2 @@
+ declare const _default: import("h3").EventHandler<import("h3").EventHandlerRequest, Promise<void>>;
+ export default _default;
@@ -0,0 +1,62 @@
+ import OpenAI from "openai";
+ import { createError, defineEventHandler, readBody, setHeader } from "h3";
+ import { defaultOptions } from "../../constants/options.js";
+ import { MODEL_GPT_5_MINI } from "../../constants/models.js";
+ import { modelMap } from "../../utils/model-map.js";
+ import { useRuntimeConfig } from "#imports";
+ export default defineEventHandler(async (event) => {
+   const { messages, model, options } = await readBody(event);
+   if (!useRuntimeConfig().chatgpt.apiKey) {
+     throw createError({
+       statusCode: 403,
+       message: "Missing OpenAI API Key"
+     });
+   }
+   const openai = new OpenAI({
+     apiKey: useRuntimeConfig().chatgpt.apiKey
+   });
+   const requestOptions = {
+     messages,
+     model: !model ? modelMap[MODEL_GPT_5_MINI] : modelMap[model],
+     ...(options || defaultOptions),
+     stream: true
+   };
+   setHeader(event, "Content-Type", "text/event-stream; charset=utf-8");
+   setHeader(event, "Cache-Control", "no-cache, no-transform");
+   setHeader(event, "Connection", "keep-alive");
+   event.node.res.flushHeaders?.();
+   const res = event.node.res;
+   const abort = new AbortController();
+   event.node.req.on("close", () => abort.abort());
+   const writeEvent = (eventName, data) => {
+     res.write(`event: ${eventName}\n`);
+     const lines = String(data).split("\n");
+     for (const line of lines) res.write(`data: ${line}\n`);
+     res.write("\n");
+   };
+   try {
+     writeEvent("meta", JSON.stringify({ ok: true }));
+     const stream = await openai.chat.completions.create(
+       requestOptions,
+       { signal: abort.signal }
+     );
+     for await (const chunk of stream) {
+       const delta = chunk?.choices?.[0]?.delta?.content;
+       if (typeof delta === "string" && delta.length) {
+         writeEvent("token", delta);
+       }
+     }
+     writeEvent("done", "[DONE]");
+     res.end();
+   } catch (error) {
+     writeEvent(
+       "error",
+       JSON.stringify({
+         message: error?.message ?? String(error)
+       })
+     );
+     res.end();
+   }
+ });
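The handler's `writeEvent` emits one SSE frame per call, splitting a multi-line payload into several `data:` lines so the client can re-join them with newlines. A pure-function sketch of that framing (the `formatSseEvent` name is illustrative, not part of the package):

```javascript
// Build a single SSE frame as a string: one "event:" line, one "data:"
// line per payload line, and a blank line terminating the frame.
function formatSseEvent(eventName, data) {
  const lines = String(data).split('\n')
  return `event: ${eventName}\n` +
    lines.map((line) => `data: ${line}`).join('\n') +
    '\n\n'
}

// A token frame, and an error payload spanning two lines:
formatSseEvent('token', 'Hello')  // → 'event: token\ndata: Hello\n\n'
formatSseEvent('error', 'a\nb')   // → 'event: error\ndata: a\ndata: b\n\n'
```

Splitting on newlines matters because a raw `\n` inside a `data:` value would otherwise be read as the start of a new field by any SSE parser.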
package/package.json CHANGED
@@ -1,72 +1,72 @@
  {
    "name": "nuxt-chatgpt",
-   "version": "0.3.0",
+   "version": "0.4.0",
    "description": "ChatGPT integration for Nuxt 3",
    "license": "MIT",
    "type": "module",
    "homepage": "https://vuemadness.com/nuxt-chatgpt",
    "bugs": {
      "url": "https://github.com/schnapsterdog/nuxt-chatgpt/issues"
    },
    "repository": {
      "type": "git",
      "url": "git+https://github.com/schnapsterdog/nuxt-chatgpt"
    },
    "contributors": [
      {
        "name": "Oliver Trajceski (@schnapsterdog)"
      }
    ],
    "author": {
      "name": "Oliver Trajceski",
      "email": "oliver@akrinum.com"
    },
    "keywords": [
      "vue3",
      "nuxt3",
      "nuxt",
      "nuxt.js",
      "nuxt-chatgpt",
      "image",
      "image-generator"
    ],
    "exports": {
      ".": {
        "types": "./dist/types.d.mts",
        "import": "./dist/module.mjs",
        "default": "./dist/module.mjs"
      }
    },
    "main": "./dist/module.mjs",
    "types": "./dist/types.d.mts",
    "files": [
      "dist"
    ],
    "scripts": {
      "prepack": "nuxt-module-build",
      "dev": "nuxi dev playground",
      "dev:build": "nuxi build playground",
      "dev:generate": "nuxi generate playground",
      "dev:prepare": "nuxt-module-build --stub && nuxi prepare playground",
      "dev:preview": "nuxi preview playground",
      "release": "npm run lint && npm run test && npm run prepack && changelogen --release --minor && npm publish && git push --follow-tags",
      "lint": "eslint .",
      "test": "vitest run",
      "test:watch": "vitest watch"
    },
    "dependencies": {
      "@nuxt/kit": "latest",
      "defu": "latest",
      "openai": "^4.96.2"
    },
    "devDependencies": {
      "@nuxt/eslint-config": "latest",
      "@nuxt/module-builder": "^1.0.1",
      "@nuxt/schema": "latest",
      "@nuxt/test-utils": "^3.18.0",
      "changelogen": "latest",
      "eslint": "latest",
      "nuxt": "^3.16.2",
      "vitest": "latest"
    }
  }