nuxt-local-model 0.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,332 @@
+ # Nuxt Local Model
+
+ [![npm version][npm-version-src]][npm-version-href]
+ [![npm downloads][npm-downloads-src]][npm-downloads-href]
+ [![License][license-src]][license-href]
+
+ ## Scalable local inference for Nuxt
+
+ <img width="1280" height="640" alt="Nuxt Local Model banner" src="./assets/module-banner.svg" />
+
+ Note: This package is under active development. Please open an issue if you run into anything unclear.
+
+ - [✨ &nbsp;Release Notes](/CHANGELOG.md)
+ - [📖 &nbsp;Documentation](https://github.com/Aft1n/nuxt-local-model)
+
+ A Nuxt module for easily integrating local Hugging Face transformer models into your Nuxt 4 application.
+
+ ## Features
+
+ - Use local models in your Nuxt app with minimal setup
+ - Supports any Hugging Face task and model you want to configure
+ - Auto-imported `useLocalModel()` composable for frontend Vue code
+ - Server-safe `getLocalModel()` helper for `server/api` routes and utilities
+ - Fully configurable via `nuxt.config.ts`
+ - Model names, tasks, and settings can be changed per call site
+ - Optional worker-backed execution on the server or in the browser
+ - Server runtime support for Node, Bun, and Deno
+ - Works across macOS, Linux, Windows, and Docker
+ - Supports persistent model cache directories so models are not re-downloaded on every deploy
+
+ ## Quick Setup
+
+ Install the module into your Nuxt application with one command:
+
+ ```bash
+ npx nuxi module add nuxt-local-model
+ ```
+
+ ## Manual Installation
+
+ If you prefer to install manually, run:
+
+ ```bash
+ # Using npm
+ npm install nuxt-local-model
+
+ # Using yarn
+ yarn add nuxt-local-model
+
+ # Using pnpm
+ pnpm add nuxt-local-model
+
+ # Using bun
+ bun add nuxt-local-model
+ ```
+
+ Then, add it to your Nuxt config:
+
+ ```ts
+ export default defineNuxtConfig({
+   modules: ["nuxt-local-model"],
+ })
+ ```
+
+ ## Usage
+
+ Once installed, you can use `useLocalModel()` in your Vue app code.
+
+ For server routes and utilities, use `getLocalModel()`.
+
+ ### Basic Example
+
+ ```vue
+ <script setup lang="ts">
+ const embedder = await useLocalModel("embedding")
+ const output = await embedder("Nuxt local model example")
+ </script>
+ ```
+
+ ### Server Example
+
+ ```ts
+ // server/api/demo/search.get.ts
+ import { getLocalModel } from "nuxt-local-model/server"
+
+ export default defineEventHandler(async () => {
+   const embedder = await getLocalModel("embedding")
+   return await embedder("hello world")
+ })
+ ```
+
+ ### Defining Models in `nuxt.config.ts`
+
+ ```ts
+ import { defineLocalModelConfig } from "nuxt-local-model"
+
+ export default defineNuxtConfig({
+   modules: ["nuxt-local-model"],
+   localModel: defineLocalModelConfig({
+     runtime: "auto", // auto-detect Node, Bun, or Deno on the server
+     cacheDir: "./.ai-models", // one cache folder for downloads and reuse
+     allowRemoteModels: true, // allow fetching missing models from Hugging Face
+     allowLocalModels: true, // allow reusing cached / mounted model files
+     defaultTask: "feature-extraction", // default pipeline type when a model entry does not override it
+     serverWorker: false, // run inference in a server worker thread on Node, Bun, or Deno
+     browserWorker: false, // run inference in a browser Web Worker; avoid this for very large models
+     models: {
+       embedding: {
+         task: "feature-extraction", // the pipeline type for this alias
+         model: "Xenova/all-MiniLM-L6-v2", // the Hugging Face model id
+         options: {
+           dtype: "q8", // model loading option passed through to Transformers.js
+         },
+       },
+     },
+   }),
+ })
+ ```
+
+ Tip: `defineLocalModelConfig()` keeps your alias keys as literal types, so you can reuse them
+ with `LocalModelAliases<typeof localModel>` if you want exact autocomplete elsewhere in your app.
+ If you are writing server routes, import `getLocalModel()` from `nuxt-local-model/server`.
+ In Vue app code, `useLocalModel()` is auto-imported once the module is installed.
+
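+ As a sketch, additional aliases follow the same shape. The `sentiment` alias below is a hypothetical example, not something the module ships; any task/model pair supported by Transformers.js works, and an entry without a `task` falls back to `defaultTask`:
+
+ ```ts
+ import { defineLocalModelConfig } from "nuxt-local-model"
+
+ export default defineNuxtConfig({
+   modules: ["nuxt-local-model"],
+   localModel: defineLocalModelConfig({
+     defaultTask: "feature-extraction",
+     models: {
+       // no task given, so this alias uses defaultTask ("feature-extraction")
+       embedding: { model: "Xenova/all-MiniLM-L6-v2" },
+       // hypothetical second alias with an explicit task override
+       sentiment: {
+         task: "text-classification",
+         model: "Xenova/distilbert-base-uncased-finetuned-sst-2-english",
+       },
+     },
+   }),
+ })
+ ```
+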
+ ### Overriding Settings at the Call Site
+
+ You can also provide options for the model call where it is used:
+
+ ```vue
+ <script setup lang="ts">
+ const model = await useLocalModel("embedding", {
+   pooling: "mean",
+   normalize: true,
+ })
+ </script>
+ ```
+
+ ## Configuration Options
+
+ You can configure the module in your `nuxt.config.ts`:
+
+ ```ts
+ import { defineLocalModelConfig } from "nuxt-local-model"
+
+ export default defineNuxtConfig({
+   modules: ["nuxt-local-model"],
+   localModel: defineLocalModelConfig({
+     runtime: "auto", // or "node", "bun", or "deno"
+     cacheDir: "./.ai-models", // persistent cache folder for downloaded model assets
+     allowRemoteModels: true, // download from Hugging Face if not yet cached
+     allowLocalModels: true, // reuse local cache or mounted volume contents
+     defaultTask: "feature-extraction", // default for aliases that do not override task
+     serverWorker: true, // use a server worker thread so inference does not block the main server thread
+     browserWorker: false, // enable only if you intentionally want browser-side inference
+     models: {
+       embedding: {
+         task: "feature-extraction", // embeddings usually use feature-extraction
+         model: "Xenova/all-MiniLM-L6-v2", // any Hugging Face model id you choose
+         options: {
+           dtype: "q8", // loading/config option forwarded to Transformers.js
+         },
+       },
+     },
+   }),
+ })
+ ```
+
+ If `onnxruntime-node` is not available in your server runtime, the module falls back to the default Transformers.js backend instead of crashing during startup.
+
+ ### Cache Directory
+
+ The cache directory controls where downloaded model files are stored and reused.
+
+ Recommended defaults:
+
+ - local development: `./.ai-models`
+ - Docker: mount a persistent volume to the same path
+
+ Important:
+
+ - the cache path in `nuxt.config.ts` must match the path inside the Docker container
+ - the folder name on your laptop does not have to match the Docker folder name
+ - what matters in production is the path the app reads inside the container
+
+ Example Docker runtime setup:
+
+ ```bash
+ docker run \
+   -e NUXT_LOCAL_MODEL_CACHE_DIR=/data/local-models \
+   -v local-models:/data/local-models \
+   your-image:latest
+ ```
+
+ This ensures the model files stay available across redeploys and container restarts.
+
+ What this does:
+
+ - `NUXT_LOCAL_MODEL_CACHE_DIR=/data/local-models` tells the app which folder to use for model caching
+ - `-v local-models:/data/local-models` mounts a persistent Docker volume at that same folder
+ - the first container start downloads missing models into the mounted cache folder
+ - later starts reuse the models already stored there
+
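+ The same runtime setup can be sketched as a Docker Compose fragment; the service name, image, and volume name here are placeholders for your own:
+
+ ```yaml
+ services:
+   app:
+     image: your-image:latest
+     environment:
+       # must match the cache path the module is configured to read
+       NUXT_LOCAL_MODEL_CACHE_DIR: /data/local-models
+     volumes:
+       # named volume keeps downloaded models across restarts and redeploys
+       - local-models:/data/local-models
+
+ volumes:
+   local-models:
+ ```
+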
+ You can rename the host-facing volume however you want. What matters is that the path inside
+ the container matches the cache path used by the module.
+
+ In Docker, the environment variable and volume path point the app to the mounted folder:
+
+ ```dockerfile
+ ENV NUXT_LOCAL_MODEL_CACHE_DIR=/models-cache
+ VOLUME ["/models-cache"]
+ ```
+
+ That means the Nuxt app will use `/models-cache` inside the container, and Docker will
+ attach a persistent volume there when you run the container with `-v`.
+
+ ### Docker Volume Cache Example
+
+ If you want Docker to download model files on first launch and reuse them on later redeploys,
+ mount a persistent volume at the same cache path the app uses.
+
+ The build does not need to copy model files manually. The first container start writes them
+ into the mounted volume, and subsequent starts reuse whatever is already there.
+
+ ```dockerfile
+ FROM node:22-alpine AS deps
+ WORKDIR /app
+
+ COPY package.json pnpm-lock.yaml ./
+ RUN corepack enable && pnpm install --frozen-lockfile
+
+ FROM deps AS build
+ WORKDIR /app
+
+ COPY . .
+
+ ENV NUXT_LOCAL_MODEL_CACHE_DIR=/models-cache
+ RUN pnpm run build
+
+ FROM node:22-alpine
+ WORKDIR /app
+
+ ENV NUXT_LOCAL_MODEL_CACHE_DIR=/models-cache
+ VOLUME ["/models-cache"]
+
+ COPY --from=build /app/.output ./.output
+ COPY --from=deps /app/node_modules ./node_modules
+
+ CMD ["node", ".output/server/index.mjs"]
+ ```
+
+ Use this as a template in your Nuxt Docker build if you want a persistent cache path.
+ At runtime, the mounted volume should be attached to `/models-cache`, and the app will
+ download missing models into that volume the first time it runs.
+
+ In other words:
+
+ - your local dev cache can be `./.ai-models`
+ - your Docker cache can be `/models-cache`
+ - both are fine as long as the app config matches the environment it runs in
+
+ ### Naming Rule
+
+ - `useLocalModel()` is for frontend Vue components, pages, and composables
+ - `getLocalModel()` is for `server/api` routes and Nitro utilities
+
+ Both use the same underlying model-loading logic, so the runtime behavior stays consistent.
+
+ ### Worker Mode
+
+ You can choose where the model runs:
+
+ - `serverWorker: true` runs model inference in a Node worker thread on your Nuxt server
+ - `browserWorker: true` runs model inference in a browser Web Worker
+
+ This is useful if you want to keep heavy inference off the main request or UI thread.
+
+ Be careful with `browserWorker` and large models:
+
+ - the model must be downloaded into the user’s browser
+ - models in the hundreds of megabytes can be slow or impractical to deliver to clients
+ - server worker mode is usually the better default for large models
+
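+ A minimal sketch of the usual server-side setup for large models, using the same options shown above:
+
+ ```ts
+ import { defineLocalModelConfig } from "nuxt-local-model"
+
+ export default defineNuxtConfig({
+   modules: ["nuxt-local-model"],
+   localModel: defineLocalModelConfig({
+     serverWorker: true, // inference runs in a worker thread, off the main request path
+     browserWorker: false, // keep large models on the server instead of shipping them to clients
+     models: {
+       embedding: { task: "feature-extraction", model: "Xenova/all-MiniLM-L6-v2" },
+     },
+   }),
+ })
+ ```
+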
+ ### Server Worker vs Browser Worker
+
+ | Mode | Where it runs | Best for | Tradeoff |
+ | --- | --- | --- | --- |
+ | `serverWorker` | Nuxt server / Node worker thread | Large models, shared cache, server-rendered apps | Uses server CPU and memory |
+ | `browserWorker` | User’s browser Web Worker | Small client-side models, privacy-sensitive local inference | Model must be downloaded into the browser |
+
+ ## Transformers.js Docs
+
+ For model/task behavior and runtime options, see the official Transformers.js docs:
+
+ - [Transformers.js docs](https://huggingface.co/docs/transformers.js/main)
+ - [Environment settings](https://huggingface.co/docs/transformers.js/main/api/env)
+ - [Pipeline behavior](https://huggingface.co/docs/transformers/en/main_classes/pipelines)
+
+ ## Playground
+
+ This package includes a minimal playground app with an embedding example inside `playground/`.
+
+ The playground keeps the note list in the page and uses server routes for embeddings and search, so it demonstrates the server-backed flow end to end without a database.
+
+ Run it with:
+
+ ```bash
+ npm run dev
+ ```
+
+ ## Publishing
+
+ If you want to publish this module to GitHub and npm:
+
+ 1. `cd nuxt-local-model`
+ 2. `git init`
+ 3. commit the files
+ 4. create a GitHub repository
+ 5. connect the remote and push
+ 6. run `npm login`
+ 7. publish with `npm publish --access public`
+
+ ## Notes
+
+ - This module is intentionally generic and does not ship opinionated preset models.
+ - The example playground shows how to wire an embedding model, but you can register any task/model combination supported by `@huggingface/transformers`.
+
+ [npm-version-src]: https://img.shields.io/npm/v/nuxt-local-model?style=flat-square
+ [npm-version-href]: https://www.npmjs.com/package/nuxt-local-model
+ [npm-downloads-src]: https://img.shields.io/npm/dm/nuxt-local-model?style=flat-square
+ [npm-downloads-href]: https://www.npmjs.com/package/nuxt-local-model
+ [license-src]: https://img.shields.io/npm/l/nuxt-local-model?style=flat-square
+ [license-href]: https://opensource.org/licenses/MIT
@@ -0,0 +1,18 @@
+ import * as nuxt_schema from 'nuxt/schema';
+ import { LocalModelRuntimeConfig } from '../dist/runtime/types.js';
+
+ interface NuxtLlmModuleOptions extends LocalModelRuntimeConfig {
+ }
+ declare const _default: nuxt_schema.NuxtModule<NuxtLlmModuleOptions, {
+     runtime: "auto";
+     cacheDir: string;
+     allowRemoteModels: true;
+     allowLocalModels: true;
+     defaultTask: "feature-extraction";
+     serverWorker: false;
+     browserWorker: false;
+     models: {};
+ }, true>;
+
+ export { _default as default };
+ export type { NuxtLlmModuleOptions };
@@ -0,0 +1,9 @@
+ {
+   "name": "nuxt-local-model",
+   "configKey": "localModel",
+   "version": "0.1.1",
+   "builder": {
+     "@nuxt/module-builder": "1.0.2",
+     "unbuild": "unknown"
+   }
+ }
@@ -0,0 +1,68 @@
+ import { defineNuxtModule, createResolver, addImports, addPlugin } from '@nuxt/kit';
+ import { existsSync } from 'node:fs';
+ import { setLocalModelRuntimeConfig } from '../dist/runtime/shared/local-model.js';
+
+ const module$1 = defineNuxtModule().with({
+   meta: {
+     name: "nuxt-local-model",
+     configKey: "localModel"
+   },
+   defaults: {
+     runtime: "auto",
+     cacheDir: "./.ai-models",
+     allowRemoteModels: true,
+     allowLocalModels: true,
+     defaultTask: "feature-extraction",
+     serverWorker: false,
+     browserWorker: false,
+     models: {}
+   },
+   setup(options, nuxt) {
+     const { resolve } = createResolver(import.meta.url);
+     const serverWorkerJs = resolve("./runtime/server/worker.js");
+     const serverWorkerTs = resolve("./runtime/server/worker.ts");
+     const serverWorkerEntry = existsSync(serverWorkerJs) ? serverWorkerJs : serverWorkerTs;
+     setLocalModelRuntimeConfig({
+       ...options,
+       serverWorkerEntry
+     });
+     nuxt.options.runtimeConfig.public ||= {};
+     nuxt.options.runtimeConfig.public.localModel = {
+       cacheDir: options.cacheDir,
+       allowRemoteModels: options.allowRemoteModels,
+       allowLocalModels: options.allowLocalModels,
+       runtime: options.runtime,
+       defaultTask: options.defaultTask,
+       serverWorker: options.serverWorker,
+       serverWorkerEntry,
+       browserWorker: options.browserWorker,
+       models: options.models
+     };
+     addImports({
+       name: "useLocalModel",
+       from: resolve("./runtime/composables/useLocalModel")
+     });
+     addPlugin({
+       src: resolve("./runtime/plugins/hf-transformers.server")
+     });
+     addPlugin({
+       src: resolve("./runtime/plugins/hf-transformers.client"),
+       mode: "client"
+     });
+     nuxt.hook("ready", async () => {
+       const modelNames = Object.keys(options.models || {});
+       if (modelNames.length === 0) return;
+       const { loadLocalModel } = await import('../dist/runtime/shared/local-model.js');
+       const results = await Promise.allSettled(modelNames.map((name) => loadLocalModel(name, options)));
+       results.forEach((result, index) => {
+         if (result.status === "rejected") {
+           const name = modelNames[index];
+           const reason = result.reason instanceof Error ? result.reason.message : String(result.reason);
+           console.warn(`[nuxt-local-model] failed to warm model "${name}" during startup: ${reason}`);
+         }
+       });
+     });
+   }
+ });
+
+ export { module$1 as default };
@@ -0,0 +1,2 @@
+ import type { LocalModelRunner, LocalModelPipelineOptions } from "../types.js";
+ export declare function useLocalModel(name: string, callOptions?: LocalModelPipelineOptions): Promise<import("@huggingface/transformers").Pipeline | LocalModelRunner>;
@@ -0,0 +1,105 @@
+ import { useRuntimeConfig } from "nuxt/app";
+ import { getLocalModel, loadLocalModel } from "../shared/local-model.js";
+ import { resolveModelDefinition, resolveRuntimeConfig } from "../utils.js";
+ const workerCache = /* @__PURE__ */ new Map();
+ export async function useLocalModel(name, callOptions = {}) {
+   const runtimeConfig = resolveRuntimeConfig(useRuntimeConfig().public.localModel);
+   const cacheDir = runtimeConfig.cacheDir;
+   const useBrowserWorker = runtimeConfig.browserWorker ?? false;
+   const definition = resolveModelDefinition(name, runtimeConfig);
+   const key = [name, definition.task, definition.model, cacheDir].join("::");
+   const pipelineOptions = definition.options || {};
+   if (process.client && useBrowserWorker) {
+     if (!workerCache.has(key)) {
+       workerCache.set(
+         key,
+         createBrowserWorkerRunner(
+           key,
+           definition.task,
+           definition.model,
+           pipelineOptions,
+           cacheDir,
+           callOptions,
+           runtimeConfig
+         ).catch((error) => {
+           workerCache.delete(key);
+           throw error;
+         })
+       );
+     }
+     return workerCache.get(key);
+   }
+   if (process.server) {
+     return getLocalModel(name, callOptions);
+   }
+   return loadLocalModel(name, runtimeConfig, callOptions);
+ }
+ function createBrowserWorkerRunner(id, task, model, options, cacheDir, callOptions, runtimeConfig) {
+   return new Promise((resolve, reject) => {
+     const worker = new Worker(new URL("../worker/model.worker.ts", import.meta.url), { type: "module" });
+     const pendingRuns = /* @__PURE__ */ new Map();
+     let settled = false;
+     const failPendingRuns = (reason) => {
+       for (const [, pending] of pendingRuns) {
+         pending.reject(new Error(reason));
+       }
+       pendingRuns.clear();
+     };
+     const runner = Object.assign(
+       async (...args) => new Promise((runResolve, runReject) => {
+         const requestId = `${id}:${Date.now()}:${Math.random().toString(16).slice(2)}`;
+         pendingRuns.set(requestId, { resolve: runResolve, reject: runReject });
+         worker.postMessage({ type: "run", id, requestId, args: [args[0], callOptions] });
+       }),
+       {
+         dispose: () => {
+           failPendingRuns("Browser worker disposed");
+           workerCache.delete(id);
+           worker.removeEventListener("message", handleRun);
+           worker.postMessage({ type: "dispose", id });
+           worker.terminate();
+         }
+       }
+     );
+     const handleInit = (event) => {
+       if (event.data.id !== id || event.data.type !== "init") return;
+       worker.removeEventListener("message", handleInit);
+       if (!event.data.ok) {
+         settled = true;
+         workerCache.delete(id);
+         reject(new Error(event.data.error || "Worker model initialization failed"));
+         return;
+       }
+       settled = true;
+       resolve(runner);
+     };
+     worker.addEventListener("message", handleInit);
+     worker.onerror = (error) => {
+       failPendingRuns("Browser worker crashed");
+       workerCache.delete(id);
+       if (!settled) {
+         settled = true;
+         reject(error);
+       }
+     };
+     const handleRun = (event) => {
+       if (event.data.id !== id || event.data.type !== "run" || !event.data.requestId) return;
+       const pending = pendingRuns.get(event.data.requestId);
+       if (!pending) return;
+       pendingRuns.delete(event.data.requestId);
+       if (event.data.ok) pending.resolve(event.data.result);
+       else pending.reject(new Error(event.data.error || "Worker model execution failed"));
+     };
+     worker.addEventListener("message", handleRun);
+     worker.postMessage({
+       type: "init",
+       id,
+       task,
+       model,
+       options,
+       cacheDir,
+       allowRemoteModels: runtimeConfig.allowRemoteModels,
+       allowLocalModels: runtimeConfig.allowLocalModels
+     });
+   });
+ }
@@ -0,0 +1,3 @@
+ import type { LocalModelConfig } from "./types.js";
+ export declare function defineLocalModelConfig<const T extends LocalModelConfig>(config: T): T;
+ export type { LocalModelAliases } from "./types.js";
@@ -0,0 +1,3 @@
+ export function defineLocalModelConfig(config) {
+   return config;
+ }
@@ -0,0 +1,19 @@
+ import type { LocalModelRuntimeConfig } from "./types"
+
+ declare module "@nuxt/schema" {
+   interface NuxtConfig {
+     localModel?: LocalModelRuntimeConfig
+   }
+
+   interface NuxtOptions {
+     localModel?: LocalModelRuntimeConfig
+   }
+
+   interface RuntimeConfig {
+     public: {
+       localModel?: LocalModelRuntimeConfig
+     }
+   }
+ }
+
+ export {}
@@ -0,0 +1,2 @@
+ declare const _default: import("nuxt/app").Plugin<Record<string, unknown>> & import("nuxt/app").ObjectPlugin<Record<string, unknown>>;
+ export default _default;
@@ -0,0 +1,11 @@
+ import { defineNuxtPlugin, useRuntimeConfig } from "nuxt/app";
+ import { applyLocalModelEnvironment, resolveRuntimeConfig } from "../utils.js";
+ export default defineNuxtPlugin(() => {
+   const runtimeConfig = useRuntimeConfig();
+   const localModel = resolveRuntimeConfig(runtimeConfig.public.localModel);
+   applyLocalModelEnvironment({
+     cacheDir: localModel.cacheDir,
+     allowRemoteModels: localModel.allowRemoteModels,
+     allowLocalModels: false
+   });
+ });
@@ -0,0 +1,2 @@
+ declare const _default: import("nuxt/app").Plugin<Record<string, unknown>> & import("nuxt/app").ObjectPlugin<Record<string, unknown>>;
+ export default _default;
@@ -0,0 +1,27 @@
+ import { defineNuxtPlugin, useRuntimeConfig } from "nuxt/app";
+ import { readdir, stat } from "node:fs/promises";
+ import { setLocalModelRuntimeConfig } from "../shared/local-model.js";
+ import { applyLocalModelEnvironment, resolveRuntimeConfig } from "../utils.js";
+ async function countCachedEntries(cacheDir) {
+   try {
+     const entries = await readdir(cacheDir, { withFileTypes: true });
+     return entries.filter((entry) => entry.isDirectory() || entry.isFile()).length;
+   } catch {
+     return 0;
+   }
+ }
+ export default defineNuxtPlugin(async () => {
+   const runtimeConfig = useRuntimeConfig();
+   const localModel = resolveRuntimeConfig(runtimeConfig.public.localModel);
+   applyLocalModelEnvironment(localModel);
+   setLocalModelRuntimeConfig(localModel);
+   const modelNames = Object.keys(localModel.models || {});
+   const cacheStatBefore = await stat(localModel.cacheDir).catch(() => null);
+   const cachedEntriesBefore = cacheStatBefore?.isDirectory() ? await countCachedEntries(localModel.cacheDir) : 0;
+   const cacheStatus = cachedEntriesBefore > 0 ? `\u2705 found ${cachedEntriesBefore} cached model entr${cachedEntriesBefore === 1 ? "y" : "ies"} at ${localModel.cacheDir}` : `\u2B07\uFE0F no cached models found yet at ${localModel.cacheDir}; missing models may download now`;
+   const cacheStat = await stat(localModel.cacheDir).catch(() => null);
+   const cachedEntries = cacheStat?.isDirectory() ? await countCachedEntries(localModel.cacheDir) : 0;
+   console.info(
+     `\u{1F916} [nuxt-local-model] ${cacheStatus} \u2022 \u{1F680} ${modelNames.length} configured model(s) will load when Nuxt finishes starting \u2022 \u{1F4E6} cache now has ${cachedEntries} entr${cachedEntries === 1 ? "y" : "ies"}`
+   );
+ });
@@ -0,0 +1 @@
+ export { getLocalModel } from "../shared/local-model.js";
@@ -0,0 +1 @@
+ export { getLocalModel } from "../shared/local-model.js";
@@ -0,0 +1 @@
+ export {};
@@ -0,0 +1,34 @@
+ import { parentPort, workerData } from "node:worker_threads";
+ import { isMainThread, threadId } from "node:worker_threads";
+ import { pipeline } from "@huggingface/transformers";
+ import { serializeWorkerResult } from "../shared/serialize.js";
+ import { applyLocalModelEnvironment } from "../utils.js";
+ const { id, task, model, options, cacheDir, allowRemoteModels, allowLocalModels } = workerData;
+ applyLocalModelEnvironment({
+   cacheDir: cacheDir || "./.ai-models",
+   allowRemoteModels: allowRemoteModels ?? true,
+   allowLocalModels: allowLocalModels ?? true
+ });
+ console.info(
+   `\u{1F9F5} [nuxt-local-model] server worker ready threadId=${threadId} mainThread=${isMainThread} cacheDir=${cacheDir || "./.ai-models"}`
+ );
+ const modelPromise = pipeline(task, model, options || {});
+ parentPort?.on("message", async (message) => {
+   try {
+     if (message.type === "dispose") {
+       parentPort?.postMessage({ id, ok: true, type: "dispose" });
+       return;
+     }
+     const runner = await modelPromise;
+     const result = await runner(...message.args);
+     parentPort?.postMessage({ id, requestId: message.requestId, ok: true, type: "run", result: serializeWorkerResult(result) });
+   } catch (error) {
+     parentPort?.postMessage({
+       id,
+       requestId: message.type === "run" ? message.requestId : void 0,
+       ok: false,
+       type: message.type,
+       error: error instanceof Error ? error.message : String(error)
+     });
+   }
+ });
@@ -0,0 +1,6 @@
+ import type { LocalModelPipelineOptions, LocalModelRuntimeConfig } from "../types.js";
+ import { type InternalLocalModelRuntimeConfig } from "../utils.js";
+ export declare function isLocalModelRuntimeConfig(value: unknown): value is LocalModelRuntimeConfig;
+ export declare function setLocalModelRuntimeConfig(config: InternalLocalModelRuntimeConfig | null | undefined): void;
+ export declare function loadLocalModel(name: string, runtimeConfig: LocalModelRuntimeConfig, callOptions?: LocalModelPipelineOptions): Promise<import("@huggingface/transformers").Pipeline>;
+ export declare function getLocalModel(name: string, callOptions?: LocalModelPipelineOptions): Promise<import("@huggingface/transformers").Pipeline>;
@@ -0,0 +1,212 @@
+ import { pipeline } from "@huggingface/transformers";
+ import {
+   applyLocalModelEnvironment,
+   canUseServerWorkerForRuntime,
+   resolveModelDefinition,
+   resolveRuntimeConfig
+ } from "../utils.js";
+ const modelCache = /* @__PURE__ */ new Map();
+ const serverWorkerCache = /* @__PURE__ */ new Map();
+ const warnedMessages = /* @__PURE__ */ new Set();
+ let onnxBackendPromise = null;
+ const runtimeConfigSymbol = Symbol.for("nuxt-local-model:runtime-config");
+ const pipelineFactory = pipeline;
+ function getRuntimeConfigStore() {
+   return globalThis[runtimeConfigSymbol] || null;
+ }
+ function setRuntimeConfigStore(config) {
+   globalThis[runtimeConfigSymbol] = config;
+ }
+ function warnOnce(message) {
+   if (warnedMessages.has(message)) return;
+   warnedMessages.add(message);
+   console.warn(message);
+ }
+ function cacheKey(name, task, model, cacheDir) {
+   return [name, task, model, cacheDir].join("::");
+ }
+ function createPipelineRunner(task, model, options) {
+   return pipelineFactory(task, model, options);
+ }
+ async function resolveServerWorkerEntry() {
+   const [{ existsSync }, pathMod, urlMod] = await Promise.all([
+     import("node:fs"),
+     import("node:path"),
+     import("node:url")
+   ]);
+   const baseDir = pathMod.dirname(urlMod.fileURLToPath(import.meta.url));
+   const jsEntry = pathMod.resolve(baseDir, "../server/worker.js");
+   if (existsSync(jsEntry)) return jsEntry;
+   return pathMod.resolve(baseDir, "../server/worker.ts");
+ }
+ async function canUseServerWorkerRuntime(config) {
+   if (typeof window !== "undefined") return false;
+   if (!canUseServerWorkerForRuntime(config.runtime)) return false;
+   try {
+     const mod = await import("node:worker_threads");
+     return typeof mod.Worker === "function";
+   } catch {
+     return false;
+   }
+ }
+ async function ensurePreferredOnnxBackend(config) {
+   if (typeof window !== "undefined") return;
+   if (onnxBackendPromise) return onnxBackendPromise;
+   onnxBackendPromise = (async () => {
+     try {
+       const ort = await import("onnxruntime-node");
+       const backend = ort.default ?? ort;
+       const symbol = Symbol.for("onnxruntime");
+       if (!(symbol in globalThis)) {
+         globalThis[symbol] = backend;
+       }
+     } catch {
+       warnOnce(
+         `[nuxt-local-model] onnxruntime-node is not available in ${config.runtime}; falling back to the default Transformers.js backend.`
+       );
+     }
+   })();
+   return onnxBackendPromise;
+ }
+ export function isLocalModelRuntimeConfig(value) {
+   return !!value && typeof value === "object" && ("models" in value || "cacheDir" in value || "runtime" in value);
+ }
+ export function setLocalModelRuntimeConfig(config) {
+   if (config) {
+     setRuntimeConfigStore(resolveRuntimeConfig(config));
+   }
+ }
+ function createServerWorkerRunner(name, task, model, options, cacheDir, runtimeConfig) {
+   const key = cacheKey(name, task, model, cacheDir);
+   if (!serverWorkerCache.has(key)) {
+     const workerPromise = (async () => {
+       const workerEntry = runtimeConfig.serverWorkerEntry || await resolveServerWorkerEntry();
+       console.info(`\u{1F9F5} [nuxt-local-model] using server worker for "${name}" (${task} -> ${model}) at ${workerEntry}`);
+       const { Worker } = await import("node:worker_threads");
+       const worker = new Worker(workerEntry, {
+         workerData: {
+           id: key,
+           task,
+           model,
+           options,
+           cacheDir,
+           allowRemoteModels: runtimeConfig.allowRemoteModels,
+           allowLocalModels: runtimeConfig.allowLocalModels
+         }
+       });
+       const pendingRuns = /* @__PURE__ */ new Map();
+       const failPendingRuns = (reason) => {
+         for (const [, pending] of pendingRuns) {
+           clearTimeout(pending.timeout);
+           pending.reject(new Error(reason));
+         }
+         pendingRuns.clear();
+       };
+       const handleMessage = (message) => {
+         if (message.id !== key || message.type !== "run") return;
+         const pending = pendingRuns.get(message.requestId);
+         if (!pending) return;
+         clearTimeout(pending.timeout);
+         pendingRuns.delete(message.requestId);
+         if (message.ok) {
+           pending.resolve(message.result);
+           return;
+         }
+         pending.reject(new Error(message.error || "Server worker model execution failed"));
+       };
+       worker.on("message", handleMessage);
+       worker.once("error", (error) => {
+         worker.off("message", handleMessage);
+         serverWorkerCache.delete(key);
+         failPendingRuns(error instanceof Error ? error.message : "Server worker crashed");
+       });
+       worker.once("exit", (code) => {
+         worker.off("message", handleMessage);
+         serverWorkerCache.delete(key);
+         if (code !== 0) {
+           failPendingRuns(`Server worker exited with code ${code}`);
+         }
+       });
+       return Object.assign(
+         async (...args) => new Promise((runResolve, runReject) => {
+           const requestId = `${key}:${Date.now()}:${Math.random().toString(16).slice(2)}`;
+           const timeout = setTimeout(() => {
+             pendingRuns.delete(requestId);
+             runReject(new Error(`Server worker timed out for "${name}"`));
+           }, 12e4);
+           pendingRuns.set(requestId, { resolve: runResolve, reject: runReject, timeout });
+           worker.postMessage({ type: "run", requestId, args });
+         }),
+         {
+           dispose: async () => {
+             failPendingRuns("Server worker disposed");
+             worker.off("message", handleMessage);
+             serverWorkerCache.delete(key);
+             await worker.terminate();
+           }
+         }
+       );
+     })().catch((error) => {
+       serverWorkerCache.delete(key);
+       throw error;
+     });
+     serverWorkerCache.set(key, workerPromise);
+   }
+   return serverWorkerCache.get(key);
+ }
+ export async function loadLocalModel(name, runtimeConfig, callOptions = {}) {
+   const resolvedConfig = resolveRuntimeConfig(runtimeConfig);
+   await ensurePreferredOnnxBackend(resolvedConfig);
+   const definition = resolveModelDefinition(name, resolvedConfig);
+   const cacheDir = resolvedConfig.cacheDir;
+   if (typeof window === "undefined" && resolvedConfig.serverWorker) {
+     if (!await canUseServerWorkerRuntime(resolvedConfig)) {
+       warnOnce(
+         `[nuxt-local-model] serverWorker is enabled for ${resolvedConfig.runtime}, but worker threads are not available. Falling back to the main server thread.`
+       );
+     } else {
+       return createServerWorkerRunner(
+         name,
+         definition.task,
+         definition.model,
+         definition.options || {},
+         cacheDir,
+         resolvedConfig
+       );
+     }
+   }
+   applyLocalModelEnvironment({
+     cacheDir,
+     allowRemoteModels: resolvedConfig.allowRemoteModels,
+     allowLocalModels: resolvedConfig.allowLocalModels
+   });
+   const key = cacheKey(name, definition.task, definition.model, cacheDir);
+   if (!modelCache.has(key)) {
+     const modelPromise = (async () => {
+       const loaded = await createPipelineRunner(definition.task, definition.model, definition.options || {});
+       return Object.assign(
+         async (...args) => {
+           const input = args[0];
+           return loaded(input, callOptions);
+         },
+         {
+           dispose: async () => {
+             await loaded.dispose?.();
+             modelCache.delete(key);
+           }
+         }
+       );
+     })().catch((error) => {
+       modelCache.delete(key);
+       throw error;
+     });
+     modelCache.set(key, modelPromise);
+   }
+   return modelCache.get(key);
+ }
+ export async function getLocalModel(name, callOptions = {}) {
210
+ const runtimeConfig = getRuntimeConfigStore() || resolveRuntimeConfig(void 0);
211
+ return loadLocalModel(name, runtimeConfig, callOptions);
212
+ }
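Both `serverWorkerCache` and `modelCache` above memoize the in-flight *promise* per cache key and evict the entry when the load rejects, so concurrent callers share one load and a failed load can be retried. A minimal sketch of that pattern (the `loadOnce` name is illustrative, not part of the package):

```javascript
// Promise memoization with eviction on failure, mirroring the caches above.
const cache = new Map();

function loadOnce(key, factory) {
  if (!cache.has(key)) {
    const promise = Promise.resolve()
      .then(factory)
      .catch((error) => {
        cache.delete(key); // evict so a later call can retry the load
        throw error;
      });
    cache.set(key, promise);
  }
  return cache.get(key);
}
```

Because the promise (not the resolved value) is cached, a second caller that arrives while the first load is still running simply awaits the same promise; the factory runs once per key.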
@@ -0,0 +1 @@
+ export declare function serializeWorkerResult(value: unknown): unknown;
@@ -0,0 +1,42 @@
+ export function serializeWorkerResult(value) {
+ if (value === null || value === void 0) return value;
+ if (Array.isArray(value)) {
+ return value.map(serializeWorkerResult);
+ }
+ if (ArrayBuffer.isView(value)) {
+ return Array.from(value).map(Number);
+ }
+ if (value instanceof Date) {
+ return value.toISOString();
+ }
+ if (value instanceof Map) {
+ return Object.fromEntries(Array.from(value.entries()).map(([key, entry]) => [String(key), serializeWorkerResult(entry)]));
+ }
+ if (value instanceof Set) {
+ return Array.from(value.values()).map(serializeWorkerResult);
+ }
+ if (typeof value === "object") {
+ const candidate = value;
+ if (typeof candidate.tolist === "function") {
+ return serializeWorkerResult(candidate.tolist());
+ }
+ if (typeof candidate.toJSON === "function") {
+ const json = candidate.toJSON();
+ if (json !== value) return serializeWorkerResult(json);
+ }
+ if (candidate.data !== void 0) {
+ return {
+ data: serializeWorkerResult(candidate.data),
+ shape: serializeWorkerResult(candidate.shape),
+ dims: serializeWorkerResult(candidate.dims),
+ type: candidate.type
+ };
+ }
+ const plain = {};
+ for (const [key, entry] of Object.entries(candidate)) {
+ plain[key] = serializeWorkerResult(entry);
+ }
+ return plain;
+ }
+ return value;
+ }
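The serializer above flattens model outputs (typed arrays, tensors, Maps, Sets, nested objects) into plain data that survives the worker `postMessage` boundary. A trimmed re-implementation of the core rules, for illustration only (the real function also handles `Date`, `tolist`, `toJSON`, and tensor-like `data`/`dims` objects):

```javascript
// Trimmed illustration of serializeWorkerResult's flattening rules:
// typed arrays become plain number arrays, Maps become plain objects,
// Sets become arrays, and nested objects are walked recursively.
function toPlain(value) {
  if (value === null || value === undefined) return value;
  if (Array.isArray(value)) return value.map(toPlain);
  if (ArrayBuffer.isView(value)) return Array.from(value).map(Number);
  if (value instanceof Map) {
    return Object.fromEntries([...value].map(([k, v]) => [String(k), toPlain(v)]));
  }
  if (value instanceof Set) return [...value].map(toPlain);
  if (typeof value === "object") {
    const plain = {};
    for (const [k, v] of Object.entries(value)) plain[k] = toPlain(v);
    return plain;
  }
  return value;
}
```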
@@ -0,0 +1,29 @@
+ import { pipeline } from "@huggingface/transformers";
+ import type { Pipeline } from "@huggingface/transformers";
+ export type LocalModelSupportedRuntime = "node" | "bun" | "deno";
+ export type LocalModelRuntime = "auto" | LocalModelSupportedRuntime;
+ export type LocalModelTask = "feature-extraction" | "text-classification" | "text-generation" | "fill-mask" | "automatic-speech-recognition" | (string & {});
+ export type LocalModelPipeline = Pipeline;
+ export type LocalModelPipelineOptions = NonNullable<Parameters<Pipeline>[1]>;
+ export type LocalModelPipelineLoadOptions = NonNullable<Parameters<typeof pipeline>[2]>;
+ export interface LocalModelDefinition {
+ task: LocalModelTask;
+ model: string;
+ options?: LocalModelPipelineLoadOptions;
+ }
+ export type LocalModelModelRegistry = Record<string, LocalModelDefinition>;
+ export type LocalModelAliases<T extends Pick<LocalModelRuntimeConfig, "models">> = keyof NonNullable<T["models"]> & string;
+ export interface LocalModelRuntimeConfig<TModels extends LocalModelModelRegistry = LocalModelModelRegistry> {
+ runtime?: LocalModelRuntime;
+ cacheDir?: string;
+ allowRemoteModels?: boolean;
+ allowLocalModels?: boolean;
+ defaultTask?: LocalModelTask;
+ serverWorker?: boolean;
+ browserWorker?: boolean;
+ models?: TModels;
+ }
+ export type LocalModelConfig<TModels extends LocalModelModelRegistry = LocalModelModelRegistry> = LocalModelRuntimeConfig<TModels>;
+ export type LocalModelRunner = ((...args: any[]) => Promise<unknown>) & {
+ dispose?: () => Promise<void> | void;
+ };
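The `LocalModelRuntimeConfig` shape above is what gets filled in from `nuxt.config.ts`. A sketch of a matching config; note the `localModel` top-level key and the model id are assumptions for illustration, so check the module's generated types for the actual key name:

```typescript
// nuxt.config.ts (sketch; the `localModel` key and model id are assumed)
export default defineNuxtConfig({
  modules: ['nuxt-local-model'],
  localModel: {
    runtime: 'auto',             // 'auto' | 'node' | 'bun' | 'deno'
    cacheDir: './.ai-models',    // persisted so models are not re-downloaded
    serverWorker: false,         // run inference off the main server thread
    models: {
      embed: {
        task: 'feature-extraction',
        model: 'Xenova/all-MiniLM-L6-v2', // example Hugging Face model id
      },
    },
  },
})
```

The `embed` alias defined here is what `getLocalModel('embed')` resolves through `resolveModelDefinition`.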
File without changes
@@ -0,0 +1,17 @@
+ import type { LocalModelDefinition, LocalModelModelRegistry, LocalModelPipelineLoadOptions, LocalModelRuntime, LocalModelRuntimeConfig, LocalModelSupportedRuntime } from "./types.js";
+ export interface InternalLocalModelRuntimeConfig extends LocalModelRuntimeConfig {
+ serverWorkerEntry?: string;
+ }
+ export interface ResolvedLocalModelRuntimeConfig extends Omit<InternalLocalModelRuntimeConfig, "runtime" | "models"> {
+ runtime: LocalModelSupportedRuntime;
+ models: LocalModelModelRegistry;
+ cacheDir: string;
+ allowRemoteModels: boolean;
+ allowLocalModels: boolean;
+ }
+ export declare function detectLocalModelRuntime(runtime?: LocalModelRuntime): LocalModelSupportedRuntime;
+ export declare function resolveCacheDir(cacheDir?: string): string;
+ export declare function resolveRuntimeConfig(config?: InternalLocalModelRuntimeConfig): ResolvedLocalModelRuntimeConfig;
+ export declare function applyLocalModelEnvironment(config: Pick<ResolvedLocalModelRuntimeConfig, "cacheDir" | "allowRemoteModels" | "allowLocalModels">): void;
+ export declare function canUseServerWorkerForRuntime(runtime: LocalModelSupportedRuntime): runtime is "node" | "bun" | "deno";
+ export declare function resolveModelDefinition(name: string, runtimeConfig: InternalLocalModelRuntimeConfig | undefined, loadOptions?: LocalModelPipelineLoadOptions): LocalModelDefinition;
@@ -0,0 +1,71 @@
+ import { env } from "@huggingface/transformers";
+ function readEnvValue(name) {
+ const processValue = globalThis.process?.env?.[name];
+ if (typeof processValue === "string" && processValue.trim()) {
+ return processValue.trim();
+ }
+ const deno = globalThis;
+ try {
+ const denoValue = deno.Deno?.env?.get?.(name);
+ if (typeof denoValue === "string" && denoValue.trim()) {
+ return denoValue.trim();
+ }
+ } catch {
+ }
+ return void 0;
+ }
+ export function detectLocalModelRuntime(runtime = "auto") {
+ if (runtime !== "auto") {
+ return runtime;
+ }
+ const bun = globalThis;
+ if (bun.Bun || bun.process?.versions?.bun) {
+ return "bun";
+ }
+ const deno = globalThis;
+ if (deno.Deno?.version?.deno) {
+ return "deno";
+ }
+ return "node";
+ }
+ export function resolveCacheDir(cacheDir) {
+ if (cacheDir?.trim()) return cacheDir.trim();
+ return readEnvValue("NUXT_LOCAL_MODEL_CACHE_DIR") || "./.ai-models";
+ }
+ export function resolveRuntimeConfig(config) {
+ return {
+ runtime: detectLocalModelRuntime(config?.runtime ?? "auto"),
+ allowRemoteModels: config?.allowRemoteModels ?? true,
+ allowLocalModels: config?.allowLocalModels ?? true,
+ cacheDir: resolveCacheDir(config?.cacheDir),
+ defaultTask: config?.defaultTask ?? "feature-extraction",
+ serverWorker: config?.serverWorker ?? false,
+ serverWorkerEntry: config?.serverWorkerEntry,
+ browserWorker: config?.browserWorker ?? false,
+ models: config?.models ?? {}
+ };
+ }
+ export function applyLocalModelEnvironment(config) {
+ env.cacheDir = resolveCacheDir(config.cacheDir);
+ env.localModelPath = env.cacheDir;
+ env.allowRemoteModels = config.allowRemoteModels;
+ env.allowLocalModels = config.allowLocalModels;
+ }
+ export function canUseServerWorkerForRuntime(runtime) {
+ return runtime === "node" || runtime === "bun" || runtime === "deno";
+ }
+ export function resolveModelDefinition(name, runtimeConfig, loadOptions) {
+ const resolved = resolveRuntimeConfig(runtimeConfig);
+ const registryEntry = resolved.models?.[name];
+ if (!registryEntry) {
+ throw new Error(`Local model "${name}" is not defined in nuxt.config.`);
+ }
+ return {
+ task: registryEntry?.task || resolved.defaultTask || "feature-extraction",
+ model: registryEntry?.model || name,
+ options: {
+ ...registryEntry?.options || {},
+ ...loadOptions
+ }
+ };
+ }
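`resolveCacheDir` above applies a three-step precedence: an explicit (trimmed) config value wins, then the `NUXT_LOCAL_MODEL_CACHE_DIR` environment variable, then the `./.ai-models` default. A standalone copy of that precedence (Node-only; the shipped version also reads `Deno.env`):

```javascript
// Cache-dir precedence: explicit config > env var > './.ai-models' default.
function resolveCacheDir(cacheDir) {
  if (cacheDir?.trim()) return cacheDir.trim();
  const fromEnv = globalThis.process?.env?.NUXT_LOCAL_MODEL_CACHE_DIR;
  if (typeof fromEnv === "string" && fromEnv.trim()) return fromEnv.trim();
  return "./.ai-models";
}
```

Because the env-var fallback runs on every call, the same deploy can redirect the cache to a persistent volume without touching `nuxt.config.ts`.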
@@ -0,0 +1 @@
+ export {};
@@ -0,0 +1,39 @@
+ import { pipeline } from "@huggingface/transformers";
+ import { serializeWorkerResult } from "../shared/serialize.js";
+ import { applyLocalModelEnvironment } from "../utils.js";
+ const pipelines = /* @__PURE__ */ new Map();
+ self.onmessage = async (event) => {
+ const message = event.data;
+ try {
+ if (message.type === "init") {
+ applyLocalModelEnvironment({
+ cacheDir: message.cacheDir || "./.ai-models",
+ allowRemoteModels: message.allowRemoteModels ?? true,
+ allowLocalModels: message.allowLocalModels ?? false
+ });
+ if (!pipelines.has(message.id)) {
+ pipelines.set(message.id, pipeline(message.task, message.model, message.options || {}));
+ }
+ self.postMessage({ id: message.id, ok: true, type: "init" });
+ return;
+ }
+ if (message.type === "dispose") {
+ pipelines.delete(message.id);
+ self.postMessage({ id: message.id, ok: true, type: "dispose" });
+ return;
+ }
+ const model = pipelines.get(message.id);
+ if (!model) throw new Error("Model pipeline is not initialized.");
+ const resolved = await model;
+ const result = await resolved(...message.args);
+ self.postMessage({ id: message.id, requestId: message.requestId, ok: true, type: "run", result: serializeWorkerResult(result) });
+ } catch (error) {
+ self.postMessage({
+ id: message.id,
+ requestId: message.type === "run" ? message.requestId : void 0,
+ ok: false,
+ type: message.type,
+ error: error instanceof Error ? error.message : String(error)
+ });
+ }
+ };
@@ -0,0 +1,9 @@
+ import type { NuxtModule } from '@nuxt/schema'
+
+ import type { default as Module } from './module.mjs'
+
+ export type ModuleOptions = typeof Module extends NuxtModule<infer O> ? Partial<O> : Record<string, any>
+
+ export { default } from './module.mjs'
+
+ export { type NuxtLlmModuleOptions } from './module.mjs'
package/package.json ADDED
@@ -0,0 +1,60 @@
+ {
+ "name": "nuxt-local-model",
+ "version": "0.1.1",
+ "description": "A Nuxt module for local in-app Hugging Face transformer models.",
+ "license": "MIT",
+ "repository": "https://github.com/Aft1n/nuxt-local-model",
+ "files": [
+ "dist",
+ "README.md"
+ ],
+ "type": "module",
+ "main": "./dist/module.mjs",
+ "module": "./dist/module.mjs",
+ "types": "./dist/module.d.mts",
+ "exports": {
+ ".": {
+ "types": "./dist/module.d.mts",
+ "import": "./dist/module.mjs"
+ },
+ "./client": {
+ "types": "./dist/runtime/composables/useLocalModel.d.ts",
+ "import": "./dist/runtime/composables/useLocalModel.js"
+ },
+ "./server": {
+ "types": "./dist/runtime/server/index.d.ts",
+ "import": "./dist/runtime/server/index.js"
+ }
+ },
+ "publishConfig": {
+ "access": "public"
+ },
+ "scripts": {
+ "dev": "nuxi dev playground",
+ "build": "nuxt build-module",
+ "prepare": "nuxt build-module --prepare",
+ "test": "vitest run",
+ "test:watch": "vitest"
+ },
+ "dependencies": {
+ "@huggingface/transformers": "^3.8.1",
+ "onnxruntime-node": "^1.21.0"
+ },
+ "devDependencies": {
+ "@nuxt/kit": "^4.3.0",
+ "@nuxt/module-builder": "^1.0.2",
+ "@nuxt/test-utils": "^4.0.0",
+ "@types/node": "^24.0.0",
+ "nuxt": "^4.3.0",
+ "typescript": "^5.9.3",
+ "vitest": "^4.0.18"
+ },
+ "peerDependencies": {
+ "nuxt": "^4.3.0"
+ },
+ "trustedDependencies": [
+ "@parcel/watcher",
+ "onnxruntime-node",
+ "protobufjs"
+ ]
+ }