@arabold/docs-mcp-server 1.14.0 → 1.15.0

package/README.md CHANGED
@@ -103,6 +103,8 @@ This method provides a persistent local setup by running the server and web inte
 
  Restart your AI assistant application after updating the configuration.
 
+ Note: The Docker Compose setup runs the Docs MCP Server in HTTP mode (via SSE) by design, as it's intended as a standalone, connectable instance. It does not support stdio communication.
+
  6. **Access the Web Interface:**
  The web interface will be available at `http://localhost:6281`.
 
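In practice this means an MCP client connects to the Compose-run server over HTTP rather than spawning it as a subprocess over stdio. A minimal connection sketch using the MCP TypeScript SDK; the endpoint URL (port `6280`, path `/sse`) is an assumption for illustration, not a value taken from this README:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// Assumed endpoint; check the running server's output for the actual URL.
const transport = new SSEClientTransport(new URL("http://localhost:6280/sse"));
const client = new Client({ name: "example-client", version: "1.0.0" });

await client.connect(transport);
console.log(await client.listTools()); // should include tools such as search_docs
await client.close();
```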
@@ -131,7 +133,7 @@ Once the Docs MCP Server is running, you can use the Web Interface to **add new
  4. **Click "Queue Job":** The server will start a background job to fetch, process, and index the documentation. You can monitor its progress in the "Job Queue" section of the Web UI.
  5. **Repeat:** Repeat steps 3-4 for every library whose documentation you want the server to manage.
 
- **That's it!** Once a job completes successfully, the documentation for that library and version becomes available for searching through your connected AI coding assistant (using the `search_docs` tool) or directly in the Web UI's by clicking on the library name in the "Indexed Documentation" section.
+ **That's it!** Once a job completes successfully, the documentation for that library and version becomes available for searching through your connected AI coding assistant (using the `search_docs` tool) or directly in the Web UI by clicking on the library name in the "Indexed Documentation" section.
 
  ## Alternative: Using Docker
 
@@ -159,7 +161,7 @@ This approach is easy, straightforward, and doesn't require cloning the reposito
  "ghcr.io/arabold/docs-mcp-server:latest"
  ],
  "env": {
- "OPENAI_API_KEY": "sk-proj-..." // Required: Replace with your key
+ "OPENAI_API_KEY": "sk-proj-..." // Required if using OpenAI (default)
  },
  "disabled": false,
  "autoApprove": []
@@ -187,7 +189,9 @@ docker run -i --rm \
  -e OPENAI_API_KEY="your-key-here" \
  -e DOCS_MCP_EMBEDDING_MODEL="text-embedding-3-small" \
  -v docs-mcp-data:/data \
- ghcr.io/arabold/docs-mcp-server:latest
+ ghcr.io/arabold/docs-mcp-server:latest # Runs MCP server (stdio by default)
+ # To run MCP server in HTTP mode on port 6280, append to the line above:
+ # --protocol http --port 6280
 
  # Example 2: Using OpenAI-compatible API (like Ollama)
  docker run -i --rm \
@@ -199,7 +203,7 @@ docker run -i --rm \
 
  # Example 3a: Using Google Cloud Vertex AI embeddings
  docker run -i --rm \
- -e OPENAI_API_KEY="your-openai-key" \ # Keep for fallback to OpenAI
+ -e OPENAI_API_KEY="your-openai-key" \ # For OpenAI provider
  -e DOCS_MCP_EMBEDDING_MODEL="vertex:text-embedding-004" \
  -e GOOGLE_APPLICATION_CREDENTIALS="/app/gcp-key.json" \
  -v docs-mcp-data:/data \
@@ -208,7 +212,7 @@ docker run -i --rm \
 
  # Example 3b: Using Google Generative AI (Gemini) embeddings
  docker run -i --rm \
- -e OPENAI_API_KEY="your-openai-key" \ # Keep for fallback to OpenAI
+ -e OPENAI_API_KEY="your-openai-key" \ # For OpenAI provider
  -e DOCS_MCP_EMBEDDING_MODEL="gemini:embedding-001" \
  -e GOOGLE_API_KEY="your-google-api-key" \
  -v docs-mcp-data:/data \
@@ -246,7 +250,7 @@ docker run --rm \
  -v docs-mcp-data:/data \
  -p 6281:6281 \
  ghcr.io/arabold/docs-mcp-server:latest \
- docs-web
+ web --port 6281
  ```
 
  Make sure to:
@@ -257,17 +261,27 @@ Make sure to:
 
  ### Using the CLI
 
- You can use the CLI to manage documentation directly via Docker.
+ You can use the CLI to manage documentation directly via Docker by passing CLI commands after the image name:
 
  ```bash
  docker run --rm \
  -e OPENAI_API_KEY="your-openai-api-key-here" \
  -v docs-mcp-data:/data \
  ghcr.io/arabold/docs-mcp-server:latest \
- docs-cli <command> [options]
+ <command> [options] # e.g., list, scrape <library> <url>, search <library> <query>
  ```
 
- Make sure to use the same volume name (`docs-mcp-data` in this example) as you did for the server. Any of the configuration environment variables (see [Configuration](#configuration) above) can be passed using `-e` flags, just like with the server.
+ Example:
+
+ ```bash
+ docker run --rm \
+ -e OPENAI_API_KEY="your-openai-api-key-here" \
+ -v docs-mcp-data:/data \
+ ghcr.io/arabold/docs-mcp-server:latest \
+ list
+ ```
+
+ Make sure to use the same volume name (`docs-mcp-data` in this example) as your MCP server container if you want them to share data. Any of the configuration environment variables (see [Configuration](#configuration) above) can be passed using `-e` flags.
 
  The main commands available are:
 
@@ -278,7 +292,7 @@ The main commands available are:
  - `fetch-url`: Fetches a single URL and converts it to Markdown.
  - `find-version`: Finds the best matching version for a library.
 
- See the [CLI Command Reference](#cli-command-reference) below for detailed command usage.
+ For detailed command usage, run the CLI with the `--help` flag (e.g., `docker run ... ghcr.io/arabold/docs-mcp-server:latest --help`).
 
  ## Alternative: Using npx
 
@@ -295,9 +309,12 @@ This approach is useful when you need local file access (e.g., indexing document
  "mcpServers": {
  "docs-mcp-server": {
  "command": "npx",
- "args": ["-y", "--package=@arabold/docs-mcp-server", "docs-server"],
+ "args": ["-y", "@arabold/docs-mcp-server"],
+ // This will run the default MCP server (stdio).
+ // To run in HTTP mode, add arguments: e.g.
+ // "args": ["-y", "@arabold/docs-mcp-server", "--protocol", "http", "--port", "6280"],
  "env": {
- "OPENAI_API_KEY": "sk-proj-..." // Required: Replace with your key
+ "OPENAI_API_KEY": "sk-proj-..." // Required if using OpenAI (default)
  },
  "disabled": false,
  "autoApprove": []
@@ -312,27 +329,33 @@ This approach is useful when you need local file access (e.g., indexing document
 
  ### Launching Web Interface
 
- If you're running the server with `npx`, use `npx` for the web interface as well:
+ If you're running the MCP server with `npx` (as shown above, it runs by default), use `npx` for the web interface as well:
 
  ```bash
- npx -y --package=@arabold/docs-mcp-server docs-web --port 6281
+ npx -y @arabold/docs-mcp-server web --port 6281
  ```
 
- You can specify a different port using the `--port` flag.
+ You can specify a different port for the web interface using its `--port` flag.
 
  The `npx` approach will use the default data directory on your system (typically in your home directory), ensuring consistency between server and web interface.
 
  ### Using the CLI
 
- If you're running the server with npx, use `npx` for the CLI as well:
+ If you're running the MCP server with `npx`, you can also use `npx` for CLI commands:
+
+ ```bash
+ npx -y @arabold/docs-mcp-server <command> [options]
+ ```
+
+ Example:
 
  ```bash
- npx -y --package=@arabold/docs-mcp-server docs-cli <command> [options]
+ npx -y @arabold/docs-mcp-server list
  ```
 
- The `npx` approach will use the default data directory on your system (typically in your home directory), ensuring consistency between server and CLI.
+ The `npx` approach will use the default data directory on your system (typically in your home directory), ensuring consistency.
 
- See the [CLI Command Reference](#cli-command-reference) below for detailed command usage.
+ For detailed command usage, run the CLI with the `--help` flag (e.g., `npx -y @arabold/docs-mcp-server --help`).
 
  ## Configuration
 
@@ -342,11 +365,11 @@ The following environment variables are supported to configure the embedding mod
 
  - `DOCS_MCP_EMBEDDING_MODEL`: **Optional.** Format: `provider:model_name` or just `model_name` (defaults to `text-embedding-3-small`). Supported providers and their required environment variables:
 
- - `openai` (default): Uses OpenAI's embedding models
+ - `openai` (default provider): Uses OpenAI's embedding models.
 
- - `OPENAI_API_KEY`: **Required.** Your OpenAI API key
+ - `OPENAI_API_KEY`: Your OpenAI API key. **Required if `openai` is the active provider.**
  - `OPENAI_ORG_ID`: **Optional.** Your OpenAI Organization ID
- - `OPENAI_API_BASE`: **Optional.** Custom base URL for OpenAI-compatible APIs (e.g., Ollama, Azure OpenAI)
+ - `OPENAI_API_BASE`: **Optional.** Custom base URL for OpenAI-compatible APIs (e.g., Ollama).
 
  - `vertex`: Uses Google Cloud Vertex AI embeddings
 
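For intuition, the model string is split on the first `:`; anything without a provider prefix is treated as an OpenAI model. A minimal sketch of that resolution (the function name is illustrative; the equivalent logic ships in the compiled `EmbeddingFactory` shown later in this diff):

```typescript
// Resolve DOCS_MCP_EMBEDDING_MODEL into provider + model.
// Mirrors the parsing embedded in the package's EmbeddingFactory sources.
function parseEmbeddingModel(providerAndModel: string): { provider: string; model: string } {
  const [providerOrModel, ...modelNameParts] = providerAndModel.split(":");
  const modelName = modelNameParts.join(":"); // model IDs may themselves contain ":"
  return {
    provider: modelName ? providerOrModel : "openai", // no prefix => openai
    model: modelName || providerOrModel,
  };
}

parseEmbeddingModel("text-embedding-3-small"); // { provider: "openai", model: "text-embedding-3-small" }
parseEmbeddingModel("vertex:text-embedding-004"); // { provider: "vertex", model: "text-embedding-004" }
```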
@@ -376,10 +399,16 @@ For OpenAI-compatible APIs (like Ollama), use the `openai` provider with `OPENAI
 
  ## Development
 
- This section covers running the server/CLI directly from the source code for development purposes. The primary usage method is now via the public Docker image as described in "Method 2".
+ This section covers running the server/CLI directly from the source code for development purposes. The primary usage method is via the public Docker image (`ghcr.io/arabold/docs-mcp-server:latest`), as detailed in the "Alternative: Using Docker" section, or via Docker Compose as described in the "Recommended: Docker Desktop" section.
 
  ### Running from Source
 
+ > **Note:** Playwright browsers are not installed automatically during `npm install`. If you need to run tests or use features that require Playwright, run:
+ >
+ > ```bash
+ > npx playwright install --no-shell --with-deps chromium
+ > ```
+
  This provides an isolated environment and exposes the server via HTTP endpoints.
 
  This method is useful for contributing to the project or running un-published versions.
@@ -402,16 +431,43 @@ This method is useful for contributing to the project or running un-published ve
  Create and configure your `.env` file as described in the [Configuration](#configuration) section. This is crucial for providing the `OPENAI_API_KEY`.
 
  5. **Run:**
- - **Server (Development Mode):** `npm run dev:server` (builds, watches, and restarts)
- - **Server (Production Mode):** `npm run start` (runs pre-built code)
- - **CLI:** `npm run cli -- <command> [options]` or `node dist/cli.js <command> [options]`
+ - **Default MCP Server (Development):**
+ - Stdio mode (default): `npm run dev:server`
+ - HTTP mode: `npm run dev:server:http` (uses default port)
+ - Custom HTTP: `vite-node src/index.ts -- --protocol http --port <your_port>`
+ - **Web Interface (Development):** `npm run dev:web`
+ - This starts the web server (e.g., on port 6281) and watches for asset changes.
+ - **CLI Commands (Development):** `npm run dev:cli -- <command> [options]`
+ - Example: `npm run dev:cli -- list`
+ - Example: `vite-node src/index.ts scrape <library> <url>`
+ - **Production Mode (after `npm run build`):**
+ - Default MCP Server (stdio): `npm run start` (or `node dist/index.js`)
+ - MCP Server (HTTP): `npm run start -- --protocol http --port <your_port>` (or `node dist/index.js --protocol http --port <your_port>`)
+ - Web Interface: `npm run web -- --port <web_port>` (or `node dist/index.js web --port <web_port>`)
+ - CLI Commands: `npm run cli -- <command> [options]` (or `node dist/index.js <command> [options]`)
 
  ### Testing
 
- Since MCP servers communicate over stdio when run directly via Node.js, debugging can be challenging. We recommend using the [MCP Inspector](https://github.com/modelcontextprotocol/inspector), which is available as a package script after building:
+ Since MCP servers communicate over stdio when run directly via Node.js (or `vite-node`), debugging can be challenging. We recommend using the [MCP Inspector](https://github.com/modelcontextprotocol/inspector).
+
+ After building the project (`npm run build`):
 
  ```bash
- npx @modelcontextprotocol/inspector node dist/server.js
+ # For stdio mode (default)
+ npx @modelcontextprotocol/inspector node dist/index.js
+
+ # For HTTP mode (e.g., on port 6280)
+ npx @modelcontextprotocol/inspector node dist/index.js -- --protocol http --port 6280
+ ```
+
+ If using `vite-node` for development:
+
+ ```bash
+ # For stdio mode (default)
+ npx @modelcontextprotocol/inspector vite-node src/index.ts
+
+ # For HTTP mode (e.g., on port 6280)
+ npx @modelcontextprotocol/inspector vite-node src/index.ts -- --protocol http --port 6280
  ```
 
  The Inspector will provide a URL to access debugging tools in your browser.
@@ -2,7 +2,7 @@ import { BedrockEmbeddings } from "@langchain/aws";
  import { GoogleGenerativeAIEmbeddings } from "@langchain/google-genai";
  import { VertexAIEmbeddings } from "@langchain/google-vertexai";
  import { AzureOpenAIEmbeddings, OpenAIEmbeddings } from "@langchain/openai";
- import { q as DimensionError, r as VECTOR_DIMENSION } from "./DocumentManagementService-BZ_ZZgPI.js";
+ import { D as DimensionError, V as VECTOR_DIMENSION } from "./index.js";
  import { Embeddings } from "@langchain/core/embeddings";
  class FixedDimensionEmbeddings extends Embeddings {
  constructor(embeddings, targetDimension, providerAndModel, allowTruncate = false) {
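The `FixedDimensionEmbeddings` class diffed above normalizes every vector to a fixed size. A sketch of that rule, reconstructed from the TypeScript embedded in this package's source map (a plain `Error` stands in for the package's `DimensionError`):

```typescript
// Truncate (MRL-capable models), zero-pad, or reject mismatched vectors.
function normalizeVector(
  vector: number[],
  targetDimension: number,
  allowTruncate: boolean,
): number[] {
  if (vector.length > targetDimension) {
    if (allowTruncate) return vector.slice(0, targetDimension); // safe for MRL models like Gemini
    throw new Error(`got ${vector.length} dimensions, expected ${targetDimension}`);
  }
  if (vector.length < targetDimension) {
    // pad with zeros up to the target dimension
    return [...vector, ...new Array(targetDimension - vector.length).fill(0)];
  }
  return vector;
}
```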
@@ -172,4 +172,4 @@ export {
  UnsupportedProviderError,
  createEmbeddingModel
  };
- //# sourceMappingURL=EmbeddingFactory-Dz1hdJJe.js.map
+ //# sourceMappingURL=EmbeddingFactory-C6_OpOiy.js.map
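Per the sources embedded in the map below, this chunk exports `createEmbeddingModel`, which resolves a `provider:model` string into a configured LangChain `Embeddings` instance. A hypothetical usage sketch (the import path is illustrative; the package does not document this as a public API):

```typescript
import type { Embeddings } from "@langchain/core/embeddings";
// Illustrative import; EmbeddingFactory is an internal chunk of this package.
import { createEmbeddingModel } from "./EmbeddingFactory-C6_OpOiy.js";

// No provider prefix defaults to OpenAI; Gemini models get wrapped in
// FixedDimensionEmbeddings (truncate/pad to the store's VECTOR_DIMENSION).
const embeddings: Embeddings = createEmbeddingModel(
  process.env.DOCS_MCP_EMBEDDING_MODEL ?? "text-embedding-3-small",
);

const vector = await embeddings.embedQuery("How do I index documentation?");
console.log(vector.length);
```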
@@ -1 +1 @@
- {"version":3,"file":"EmbeddingFactory-Dz1hdJJe.js","sources":["../src/store/embeddings/FixedDimensionEmbeddings.ts","../src/store/embeddings/EmbeddingFactory.ts"],"sourcesContent":[...],"names":[],"mappings":"..."}
+ {"version":3,"file":"EmbeddingFactory-C6_OpOiy.js","sources":["../src/store/embeddings/FixedDimensionEmbeddings.ts","../src/store/embeddings/EmbeddingFactory.ts"],"sourcesContent":[...],"names":[],"mappings":"..."}

(Source map bodies truncated: the embedded `sourcesContent` and `mappings` are identical between versions; only the `file` field changed with the chunk's new content hash.)