localm-web 0.2.0 → 0.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +88 -0
- package/README.md +15 -3
- package/dist/assets/{inference.worker-CwvQtobb.js → inference.worker-DZbXKJZY.js} +49 -5
- package/dist/assets/inference.worker-DZbXKJZY.js.map +1 -0
- package/dist/index.d.ts +362 -5
- package/dist/index.js +381 -7
- package/dist/index.js.map +1 -1
- package/package.json +8 -1
- package/dist/assets/inference.worker-CwvQtobb.js.map +0 -1
package/CHANGELOG.md
CHANGED

@@ -7,6 +7,94 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
 ## [Unreleased]
 
+## [0.4.0] - 2026-05-10
+
+### Added
+
+- **Structured output (v0.4)** — JSON mode and JSON Schema constrained
+  decoding via WebLLM's `response_format` / xgrammar.
+- `GenerationOptions.json: boolean` — when `true`, the engine is forced
+  to emit a string parseable as JSON (free-form shape).
+  `GenerationOptions.jsonSchema?: object` — when set, takes priority
+  over `json` and constrains decoding so the output matches the schema.
+- `ChatReply.json<T>()` and `CompletionResult.json<T>()` parse the
+  generated text and return it cast to `T`. No runtime validation of
+  the schema is performed; pair with Ajv / Zod on the call site if you
+  need it.
+- `StructuredOutputError` (extends `LocalmWebError`) wraps the
+  underlying `SyntaxError` from `JSON.parse`, so consumers can
+  distinguish SDK-issued failures from unrelated runtime exceptions.
+- `src/structured/json-schema.ts` exposes `assertJsonSchema`,
+  `serializeJsonSchema`, and `parseStructuredOutput<T>`, re-exported
+  from `localm-web`.
+- `WebLLMEngine.generate` / `stream` / `complete` / `streamCompletion`
+  forward `response_format` to WebLLM. The worker engine inherits the
+  behavior without changes (the worker protocol already passes
+  `GenerationOptions` through `postMessage`; only `signal` is stripped).
+- 15 unit tests in `test/structured-output.test.ts` covering schema
+  assertion (accept / reject paths), schema serialization, JSON parsing
+  of objects / arrays / primitives / invalid input, error chaining via
+  `cause`, and the `.json()` helpers on `ChatReply` and
+  `CompletionResult`.
+
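The error-chaining contract above can be exercised in isolation. A minimal sketch, mirroring the `parseStructuredOutput` and error-class shapes that ship in the package's bundled sources (only the surrounding commentary is added here):

```typescript
// Base class: every SDK error carries an optional `cause`.
class LocalmWebError extends Error {
  constructor(
    message: string,
    public readonly cause?: unknown
  ) {
    super(message);
    this.name = new.target.name;
  }
}

// Raised when structured output fails to parse as JSON.
class StructuredOutputError extends LocalmWebError {}

// Parse engine text as JSON, wrapping any SyntaxError so callers can
// distinguish SDK failures with a single instanceof check.
function parseStructuredOutput<T = unknown>(text: string): T {
  try {
    return JSON.parse(text) as T;
  } catch (err) {
    throw new StructuredOutputError(
      "Engine output is not valid JSON. The model may have ignored the constrained decoding directive.",
      err
    );
  }
}
```

`ChatReply.json<T>()` / `CompletionResult.json<T>()` are described as delegating to this parse step; since nothing validates against `T` at runtime, pair the result with Ajv or Zod when the shape matters.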
+## [0.3.0] - 2026-05-10
+
+### Changed
+
+- **`LMTaskCreateOptions.inWorker` default flipped from `false` to `true`.**
+  `Chat.create()` and `Completion.create()` now spawn a Web Worker by
+  default, isolating tokenization and WebGPU dispatches from the UI
+  thread. Pass `inWorker: false` explicitly to revert to main-thread
+  inference (useful in environments without `Worker` support or when
+  debugging the runtime). The fast path for opting out is unchanged
+  in shape — only the default differs. Pre-1.0 SDK; consumers
+  upgrading from v0.2 will silently move inference off the main
+  thread, which is desirable for almost every app.
+
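The flipped default can be captured as a one-line resolution rule. A minimal sketch with a hypothetical `resolveInWorker` helper (the real SDK resolves this inside `Chat.create()` / `Completion.create()`):

```typescript
interface LMTaskCreateOptions {
  inWorker?: boolean; // v0.3 default: true (was false in v0.2)
}

// Hypothetical helper showing the default flip: omitting the option now
// means "run in a Web Worker"; only an explicit false stays on the main thread.
function resolveInWorker(options: LMTaskCreateOptions = {}): boolean {
  return options.inWorker ?? true;
}
```

Consumers who relied on the v0.2 main-thread behavior should pass `inWorker: false` explicitly when upgrading.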
+### Added
+
+- `Embeddings` task in `src/tasks/embeddings.ts` — sentence embeddings
+  via `@huggingface/transformers`. `Embeddings.create(modelId, options?)`
+  returns an instance; `embed(texts: string[], options?)` returns
+  `number[][]`; `embedSingle(text)` returns `number[]`. Empty input
+  yields `[]` (per project convention — no NotFoundError on empty).
+  Default pooling `"mean"`, default `normalize: true`.
+- `EMBEDDING_PRESETS` registry with `bge-small-en-v1.5` (384-dim) and
+  `bge-base-en-v1.5` (768-dim). `resolveEmbeddingPreset(id)` and
+  `listSupportedEmbeddingModels()` helpers.
+- Public types: `EmbeddingPreset`, `EmbeddingsCreateOptions`,
+  `EmbedOptions`, `EmbedPipeline` (DI hook for tests).
+- `Reranker` task in `src/tasks/reranker.ts` — cross-encoder reranking
+  via `@huggingface/transformers`. `Reranker.create(modelId, options?)`
+  returns an instance; `score(query, docs, options?)` returns
+  `number[]` (raw logits, or sigmoid-mapped to `[0, 1]` when
+  `sigmoid: true`); `rank(query, docs, options?)` returns
+  `RankedDocument[]` sorted descending by score with the original
+  index preserved. Empty `docs` yields `[]`.
+- `RERANKER_PRESETS` registry with `bge-reranker-base`.
+  `resolveRerankerPreset(id)` and `listSupportedRerankerModels()`
+  helpers. Public type `RerankerPreset`.
+- Public types: `RerankerCreateOptions`, `RerankOptions`,
+  `RerankPipeline`, `RankedDocument`.
+- `peerDependenciesMeta` marks `@huggingface/transformers` as
+  optional — Chat / Completion users do not need to install it.
+- 10 unit tests in `test/embeddings.test.ts` covering registry
+  resolution, batch + single embedding, empty input short-circuit,
+  pooling / normalize defaults and overrides, unload delegation,
+  graceful unload when the pipeline omits `unload()`.
+- 10 unit tests in `test/reranker.test.ts` covering registry
+  resolution, score order preservation, empty input short-circuit,
+  sigmoid normalization, default raw-logit output, descending sort
+  in `rank()`, unload delegation, graceful unload without
+  `unload()`.
+- `docs/getting-started.md` v0.3 update — new sections covering
+  Embeddings, Reranker, the retrieve-then-rerank pattern, the
+  embedding / reranker registries, and the new Web-Worker-by-default
+  behavior. Existing sections (model registry, downloads, cache,
+  troubleshooting) carry over unchanged.
+
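The `rank()` contract above (descending sort by score, original index preserved, optional sigmoid mapping) can be sketched as pure code. The `rankByScores` helper and any field name other than `index` and `score` are assumptions for illustration; the real task computes the scores with the cross-encoder first:

```typescript
interface RankedDocument {
  index: number;    // position in the original docs array
  document: string; // assumed field name
  score: number;
}

// Maps a raw logit into [0, 1], as rank()/score() do when sigmoid: true.
const sigmoid = (x: number): number => 1 / (1 + Math.exp(-x));

// Hypothetical helper mirroring the described rank() behavior: pair each
// doc with its score, then sort descending while keeping the original index.
function rankByScores(docs: string[], scores: number[], useSigmoid = false): RankedDocument[] {
  return docs
    .map((document, index) => ({
      index,
      document,
      score: useSigmoid ? sigmoid(scores[index]) : scores[index],
    }))
    .sort((a, b) => b.score - a.score);
}
```

Note that, per the changelog, empty `docs` short-circuits to `[]` rather than raising.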
+## [0.2.0] - 2026-05-10
+
 
 ### Added
 
 - `docs/getting-started.md` — end-to-end guide covering prerequisites,
package/README.md
CHANGED

@@ -134,10 +134,22 @@ const vectors = await emb.embed(["hello world", "another sentence"]);
 const rerank = await Reranker.create("bge-reranker-base");
 const scores = await rerank.score("query", ["doc1", "doc2", "doc3"]);
 
-// Structured output
-const
-
+// Structured output — free-form JSON
+const jsonReply = await chat.send("List three pros and cons of WebGPU as JSON.", { json: true });
+const data = jsonReply.json<{ pros: string[]; cons: string[] }>();
+
+// Structured output — JSON Schema constrained decoding (xgrammar via WebLLM)
+const userReply = await chat.send("Extract user info from: 'Ada, 36, …'", {
+  jsonSchema: {
+    type: "object",
+    required: ["name", "age"],
+    properties: {
+      name: { type: "string" },
+      age: { type: "integer", minimum: 0 },
+    },
+  },
 });
+const user = userReply.json<{ name: string; age: number }>();
 ```
 
 The shape mirrors `ort-vision-sdk-web`: `await Class.create(model)` then `predict()` / `send()` / `embed()` / `score()`.
package/dist/assets/{inference.worker-CwvQtobb.js → inference.worker-DZbXKJZY.js}
CHANGED

@@ -24,6 +24,33 @@ class ModelNotLoadedError extends LocalmWebError {
 }
 class GenerationAbortedError extends LocalmWebError {
 }
+class StructuredOutputError extends LocalmWebError {
+}
+function assertJsonSchema(schema) {
+  if (schema === null || typeof schema !== "object" || Array.isArray(schema)) {
+    throw new StructuredOutputError("jsonSchema must be a plain object describing a JSON Schema.");
+  }
+  const keys = Object.keys(schema);
+  const recognized = [
+    "type",
+    "$ref",
+    "oneOf",
+    "anyOf",
+    "allOf",
+    "enum",
+    "const",
+    "properties"
+  ];
+  if (!keys.some((key) => recognized.includes(key))) {
+    throw new StructuredOutputError(
+      "jsonSchema does not look like a JSON Schema (missing type/$ref/oneOf/anyOf/allOf/enum/const/properties)."
+    );
+  }
+}
+function serializeJsonSchema(schema) {
+  assertJsonSchema(schema);
+  return JSON.stringify(schema);
+}
 let webllmModulePromise = null;
 async function loadWebLLM() {
   if (!webllmModulePromise) {
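The accept / reject behavior of `assertJsonSchema` in the hunk above can be exercised standalone. The check below is copied from the shipped code, with the error classes stubbed to the minimum needed:

```typescript
class LocalmWebError extends Error {}
class StructuredOutputError extends LocalmWebError {}

// Same shape check as the shipped assertJsonSchema: reject non-objects,
// then require at least one recognizable JSON Schema keyword.
function assertJsonSchema(schema: unknown): asserts schema is object {
  if (schema === null || typeof schema !== "object" || Array.isArray(schema)) {
    throw new StructuredOutputError("jsonSchema must be a plain object describing a JSON Schema.");
  }
  const recognized = ["type", "$ref", "oneOf", "anyOf", "allOf", "enum", "const", "properties"];
  if (!Object.keys(schema).some((key) => recognized.includes(key))) {
    throw new StructuredOutputError(
      "jsonSchema does not look like a JSON Schema (missing type/$ref/oneOf/anyOf/allOf/enum/const/properties)."
    );
  }
}
```

This is a fail-fast sanity check, not meta-schema validation: `{ type: "object" }` passes, while `null`, arrays, strings, and objects with no recognized keyword are rejected before reaching the runtime.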
@@ -41,6 +68,15 @@ function buildSamplingParams(options) {
   if (options.topP !== void 0) params.top_p = options.topP;
   return params;
 }
+function buildResponseFormat(options) {
+  if (options.jsonSchema !== void 0) {
+    return { type: "json_object", schema: serializeJsonSchema(options.jsonSchema) };
+  }
+  if (options.json) {
+    return { type: "json_object" };
+  }
+  return void 0;
+}
 function toChatMessages(messages) {
   return messages.map((m) => {
     switch (m.role) {
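The mapping in `buildResponseFormat` above can be checked directly: `jsonSchema` wins over `json`, and neither option yields `undefined` so WebLLM keeps its free-text default. A standalone copy, with `serializeJsonSchema` reduced to plain `JSON.stringify` for brevity:

```typescript
interface GenerationOptions {
  json?: boolean;
  jsonSchema?: object;
}

interface ResponseFormat {
  type: "json_object";
  schema?: string;
}

// Same branch order as the shipped code: jsonSchema first, then json,
// otherwise undefined (free-text decoding).
function buildResponseFormat(options: GenerationOptions): ResponseFormat | undefined {
  if (options.jsonSchema !== undefined) {
    return { type: "json_object", schema: JSON.stringify(options.jsonSchema) };
  }
  if (options.json) {
    return { type: "json_object" };
  }
  return undefined;
}
```

The `undefined` return is what keeps the spread `...responseFormat ? { response_format: responseFormat } : {}` in the engine methods from sending a `response_format` key at all when no structured output was requested.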
@@ -95,10 +131,12 @@ class WebLLMEngine {
     if (options.signal?.aborted) {
       throw new GenerationAbortedError("Generation aborted before start.");
     }
+    const responseFormat = buildResponseFormat(options);
     const completion = await engine2.chat.completions.create({
       ...buildSamplingParams(options),
       messages: toChatMessages(messages),
-      stream: false
+      stream: false,
+      ...responseFormat ? { response_format: responseFormat } : {}
     });
     return completion.choices[0]?.message?.content ?? "";
   }
@@ -107,10 +145,12 @@ class WebLLMEngine {
     if (options.signal?.aborted) {
       throw new GenerationAbortedError("Generation aborted before start.");
     }
+    const responseFormat = buildResponseFormat(options);
     const completion = await engine2.chat.completions.create({
       ...buildSamplingParams(options),
       messages: toChatMessages(messages),
-      stream: true
+      stream: true,
+      ...responseFormat ? { response_format: responseFormat } : {}
     });
     let index = 0;
     let finished = false;
@@ -144,10 +184,12 @@ class WebLLMEngine {
     if (options.signal?.aborted) {
       throw new GenerationAbortedError("Generation aborted before start.");
     }
+    const responseFormat = buildResponseFormat(options);
     const completion = await engine2.completions.create({
       ...buildSamplingParams(options),
       prompt,
-      stream: false
+      stream: false,
+      ...responseFormat ? { response_format: responseFormat } : {}
     });
     return completion.choices[0]?.text ?? "";
   }
@@ -156,10 +198,12 @@ class WebLLMEngine {
     if (options.signal?.aborted) {
       throw new GenerationAbortedError("Generation aborted before start.");
     }
+    const responseFormat = buildResponseFormat(options);
     const completion = await engine2.completions.create({
       ...buildSamplingParams(options),
       prompt,
-      stream: true
+      stream: true,
+      ...responseFormat ? { response_format: responseFormat } : {}
     });
     let index = 0;
     let finished = false;
@@ -327,4 +371,4 @@ self.addEventListener("message", (event) => {
     return;
   }
 });
-//# sourceMappingURL=inference.worker-CwvQtobb.js.map
+//# sourceMappingURL=inference.worker-DZbXKJZY.js.map

package/dist/assets/inference.worker-DZbXKJZY.js.map
ADDED

@@ -0,0 +1 @@
+{"version":3,"file":"inference.worker-DZbXKJZY.js","sources":["../src/core/load-phase.ts","../src/core/exceptions.ts","../src/structured/json-schema.ts","../src/core/webllm-engine.ts","../src/worker/inference.worker.ts"],"sourcesContent":[…],"names":["engine"],"mappings":"…"}
,CAC7D;AACD,WAAO,WAAW,QAAQ,CAAC,GAAG,QAAQ;AAAA,EACxC;AAAA,EAEA,OAAO,iBACL,QACA,UAA6B,IACF;AAC3B,UAAMA,UAAS,KAAK,cAAA;AACpB,QAAI,QAAQ,QAAQ,SAAS;AAC3B,YAAM,IAAI,uBAAuB,kCAAkC;AAAA,IACrE;AACA,UAAM,iBAAiB,oBAAoB,OAAO;AAClD,UAAM,aAAa,MAAMA,QAAO,YAAY,OAAO;AAAA,MACjD,GAAG,oBAAoB,OAAO;AAAA,MAC9B;AAAA,MACA,QAAQ;AAAA,MACR,GAAI,iBAAiB,EAAE,iBAAiB,mBAAmB,CAAA;AAAA,IAAC,CAC7D;AACD,QAAI,QAAgB;AACpB,QAAI,WAAoB;AACxB,QAAI;AACF,uBAAiB,SAAS,YAAY;AACpC,YAAI,QAAQ,QAAQ,SAAS;AAC3B,gBAAM,IAAI,uBAAuB,+BAA+B;AAAA,QAClE;AACA,cAAM,SAAS,MAAM,QAAQ,CAAC;AAC9B,cAAM,QAAQ,QAAQ,QAAQ;AAC9B,YAAI,OAAO;AACT,gBAAM,EAAE,MAAM,OAAO,OAAO,MAAM,MAAA;AAClC,mBAAS;AAAA,QACX;AACA,YAAI,QAAQ,eAAe;AACzB,qBAAW;AACX,gBAAM,EAAE,MAAM,IAAI,OAAO,MAAM,KAAA;AAC/B,mBAAS;AAAA,QACX;AAAA,MACF;AACA,UAAI,CAAC,UAAU;AACb,cAAM,EAAE,MAAM,IAAI,OAAO,MAAM,KAAA;AAAA,MACjC;AAAA,IACF,SAAS,KAAK;AACZ,UAAI,eAAe,uBAAwB,OAAM;AACjD,YAAM,IAAI,eAAe,gCAAgC,GAAG;AAAA,IAC9D;AAAA,EACF;AAAA,EAEA,MAAM,SAAwB;AAC5B,QAAI,KAAK,QAAQ;AACf,YAAM,KAAK,OAAO,OAAA;AAClB,WAAK,SAAS;AAAA,IAChB;AAAA,EACF;AAAA,EAEQ,gBAA2B;AACjC,QAAI,CAAC,KAAK,QAAQ;AAChB,YAAM,IAAI,oBAAoB,mDAAmD;AAAA,IACnF;AACA,WAAO,KAAK;AAAA,EACd;AACF;AChPA,MAAM,SAAuB,IAAI,aAAA;AACjC,MAAM,6BAA2C,IAAA;AAEjD,SAAS,MAAM,SAA+B;AAC5C,OAAK,YAAY,OAAO;AAC1B;AAEA,SAAS,KAAK,IAAY,KAAoB;AAC5C,QAAM,QAAQ,eAAe,QAAQ,MAAM,IAAI,MAAM,OAAO,GAAG,CAAC;AAChE,QAAM,EAAE,IAAI,SAAS,IAAI,MAAM,MAAM,MAAM,SAAS,MAAM,QAAA,CAAS;AACrE;AAEA,eAAe,WAAW,KAA4D;AACpF,MAAI;AACF,UAAM,OAAO,KAAK,IAAI,SAAS,CAAC,YAAY;AAC1C,YAAM,EAAE,IAAI,YAAY,IAAI,IAAI,IAAI,SAAS;AAAA,IAC/C,CAAC;AACD,UAAM,EAAE,IAAI,UAAU,IAAI,IAAI,IAAI;AAAA,EACpC,SAAS,KAAK;AACZ,SAAK,IAAI,IAAI,GAAG;AAAA,EAClB;AACF;AAEA,eAAe,eAAe,KAAgE;AAC5F,QAAM,aAA8B,IAAI,gBAAA;AACxC,SAAO,IAAI,IAAI,IAAI,UAAU;AAC7B,MAAI;AACF,UAAM,OAAe,MAAM,OAAO,SAAS,IAAI,UAAU;AAAA,MACvD,GAAG,IAAI;AAAA,MACP,QAAQ,WAAW;AAAA,IAAA,CACpB;AACD,UAAM,EAAE,IAAI,aAAa,IAAI,IAAI,IAAI,MAAM;AAAA,EAC7C,SAAS,KAAK;AACZ,SAAK,IAAI,IAAI,GAAG;AAAA,EAClB,UAAA;AACE,WAAO,OAAO,IAAI,EAAE;AAAA,EACtB;AACF;AAEA,eAAe,eAAe,KAAgE;AAC5F,QAAM,aAA8B,IAAI,gB
AAA;AACxC,SAAO,IAAI,IAAI,IAAI,UAAU;AAC7B,MAAI;AACF,UAAM,OAAe,MAAM,OAAO,SAAS,IAAI,QAAQ;AAAA,MACrD,GAAG,IAAI;AAAA,MACP,QAAQ,WAAW;AAAA,IAAA,CACpB;AACD,UAAM,EAAE,IAAI,aAAa,IAAI,IAAI,IAAI,MAAM;AAAA,EAC7C,SAAS,KAAK;AACZ,SAAK,IAAI,IAAI,GAAG;AAAA,EAClB,UAAA;AACE,WAAO,OAAO,IAAI,EAAE;AAAA,EACtB;AACF;AAEA,eAAe,uBACb,KACe;AACf,QAAM,aAA8B,IAAI,gBAAA;AACxC,SAAO,IAAI,IAAI,IAAI,UAAU;AAC7B,MAAI;AACF,qBAAiB,SAAS,OAAO,iBAAiB,IAAI,QAAQ;AAAA,MAC5D,GAAG,IAAI;AAAA,MACP,QAAQ,WAAW;AAAA,IAAA,CACpB,GAAG;AACF,YAAM,EAAE,IAAI,SAAS,IAAI,IAAI,IAAI,OAAO;AAAA,IAC1C;AACA,UAAM,EAAE,IAAI,cAAc,IAAI,IAAI,IAAI;AAAA,EACxC,SAAS,KAAK;AACZ,SAAK,IAAI,IAAI,GAAG;AAAA,EAClB,UAAA;AACE,WAAO,OAAO,IAAI,EAAE;AAAA,EACtB;AACF;AAEA,eAAe,aAAa,KAA8D;AACxF,QAAM,aAA8B,IAAI,gBAAA;AACxC,SAAO,IAAI,IAAI,IAAI,UAAU;AAC7B,MAAI;AACF,qBAAiB,SAAS,OAAO,OAAO,IAAI,UAAU;AAAA,MACpD,GAAG,IAAI;AAAA,MACP,QAAQ,WAAW;AAAA,IAAA,CACpB,GAAG;AACF,YAAM,EAAE,IAAI,SAAS,IAAI,IAAI,IAAI,OAAO;AAAA,IAC1C;AACA,UAAM,EAAE,IAAI,cAAc,IAAI,IAAI,IAAI;AAAA,EACxC,SAAS,KAAK;AACZ,SAAK,IAAI,IAAI,GAAG;AAAA,EAClB,UAAA;AACE,WAAO,OAAO,IAAI,EAAE;AAAA,EACtB;AACF;AAEA,eAAe,aAAa,KAA8D;AACxF,MAAI;AACF,UAAM,OAAO,OAAA;AACb,UAAM,EAAE,IAAI,YAAY,IAAI,IAAI,IAAI;AAAA,EACtC,SAAS,KAAK;AACZ,SAAK,IAAI,IAAI,GAAG;AAAA,EAClB;AACF;AAEA,SAAS,eAAe,KAAuD;AAC7E,QAAM,EAAE,IAAI,aAAa,IAAI,IAAI,IAAI,OAAO,OAAO,SAAA,GAAY;AACjE;AAEA,SAAS,YAAY,KAAoD;AACvE,SAAO,IAAI,IAAI,EAAE,GAAG,MAAA;AACtB;AAEA,KAAK,iBAAiB,WAAW,CAAC,UAA6C;AAC7E,QAAM,MAAM,MAAM;AAClB,UAAQ,IAAI,IAAA;AAAA,IACV,KAAK;AACH,WAAK,WAAW,GAAG;AACnB;AAAA,IACF,KAAK;AACH,WAAK,eAAe,GAAG;AACvB;AAAA,IACF,KAAK;AACH,WAAK,aAAa,GAAG;AACrB;AAAA,IACF,KAAK;AACH,WAAK,eAAe,GAAG;AACvB;AAAA,IACF,KAAK;AACH,WAAK,uBAAuB,GAAG;AAC/B;AAAA,IACF,KAAK;AACH,WAAK,aAAa,GAAG;AACrB;AAAA,IACF,KAAK;AACH,qBAAe,GAAG;AAClB;AAAA,IACF,KAAK;AACH,kBAAY,GAAG;AACf;AAAA,EAAA;AAEN,CAAC;"}
|