@leanlabsinnov/codegraph 0.1.3 → 0.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,13 +1,14 @@
  # @leanlabsinnov/codegraph

- Live, queryable knowledge graph for your codebase. Indexes JS/TS into an embedded graph
- DB with vector embeddings, then exposes a local MCP server that Claude Code, Cursor, and
- Windsurf can call to answer questions like "what calls `useAuth`?", "what's the blast
- radius of changing `formatPrice`?", or "find symbols semantically similar to a JWT auth
- helper".
+ [![npm version](https://img.shields.io/npm/v/@leanlabsinnov/codegraph)](https://www.npmjs.com/package/@leanlabsinnov/codegraph)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](../../LICENSE)
+ [![Node.js >=20](https://img.shields.io/badge/node-%3E%3D20-brightgreen)](https://nodejs.org)

- Zero infrastructure. The graph lives in a folder under `~/.codegraph/`. No Docker, no
- external services.
+ **Live, queryable knowledge graph for your codebase.** Indexes JS/TS/Python into an embedded graph database with vector embeddings, then exposes a local MCP server that Claude Code, Cursor, and Windsurf can call to answer structural questions about your code.
+
+ Zero infrastructure. The graph lives at `~/.codegraph/`. No Docker, no external services.
+
+ ---

  ## Install

@@ -15,36 +16,101 @@ external services.
  npm i -g @leanlabsinnov/codegraph
  ```

- Requires Node 20+.
+ Requires **Node.js 20+**.
+
+ ---

  ## Quickstart

  ```bash
- codegraph config llm set byo-openai
+ # 1. Pick an LLM provider
+ codegraph config llm set byo-openai # also: byo-anthropic, byo-google, local-ollama
  export OPENAI_API_KEY=sk-...
+
+ # 2. Verify the wire-up (5-token gen + 1 embedding round-trip)
+ codegraph config llm test
+
+ # 3. Index a repo
  codegraph index ~/my-project
+
+ # 4. Boot the MCP server
  codegraph serve
+ # → http://127.0.0.1:3748/mcp
  ```

- Then point Claude Code / Cursor / Windsurf at `http://127.0.0.1:3748/mcp` with the bearer
- token from `~/.codegraph/config.json`.
+ Point Claude Code / Cursor / Windsurf at `http://127.0.0.1:3748/mcp` with the bearer token from `~/.codegraph/config.json`.

- See the full [README on GitHub](https://github.com/leanlabsinnov/codegraph) for client setup, all
- 10 MCP tools, and troubleshooting.
+ See [client setup docs](https://github.com/Cirilcetra/codegraph/blob/main/docs/clients.md) for copy-paste config snippets.
+
+ ---
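For readers of this diff who do not open the linked docs: below is a rough sketch of what a client-side entry tends to look like for a streamable-HTTP MCP server. The file location and field names (`mcpServers`, `type`, `headers`) are assumptions about the client's own schema, not something this README states; the clients.md link above has the authoritative copy-paste snippets.

```json
{
  "mcpServers": {
    "codegraph": {
      "type": "http",
      "url": "http://127.0.0.1:3748/mcp",
      "headers": {
        "Authorization": "Bearer <bearerToken from ~/.codegraph/config.json>"
      }
    }
  }
}
```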

  ## Commands

- | Command | What it does |
+ | Command | Description |
  |---|---|
- | `codegraph index <path>` | Walk the repo, parse JS/TS, embed every symbol, write to the graph |
- | `codegraph index <path> --no-embed` | Skip embeddings (faster, semantic search disabled) |
- | `codegraph status <path>` | Node + edge counts and embedding coverage |
- | `codegraph serve` | Boot the MCP server on `:3748` |
- | `codegraph doctor` | Verify Node version, config, LLM credentials, Kuzu writeable |
- | `codegraph config show` | Print the resolved config |
- | `codegraph config llm set [preset]` | Switch LLM preset (interactive when omitted) |
+ | `codegraph index <path>` | Walk, parse, and embed every symbol, then write to the graph |
+ | `codegraph index <path> --no-embed` | Parse only (faster; semantic search disabled) |
+ | `codegraph status <path>` | Node/edge counts and embedding coverage |
+ | `codegraph wipe [path]` | Delete a repo's graph rows (`--yes` skips the prompt) |
+ | `codegraph serve [--port N] [--host H]` | Boot the MCP server (default `:3748`) |
+ | `codegraph doctor` | Health check: Node, config, API keys, Kuzu, LLM |
+ | `codegraph config show` | Print the resolved `~/.codegraph/config.json` |
+ | `codegraph config llm set [preset]` | Switch LLM preset (interactive picker if no arg) |
  | `codegraph config llm test` | Round-trip the configured provider |

+ ---
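As a quick orientation, here is a plausible end-to-end session built only from the commands in the table above; the ordering and comments are illustrative rather than taken from the package docs.

```bash
# parse-only pass first to confirm the repo indexes cleanly, then a full run
codegraph index ~/my-project --no-embed
codegraph status ~/my-project

codegraph index ~/my-project          # re-index with embeddings
codegraph doctor                      # Node, config, API keys, Kuzu, LLM
codegraph serve --port 3748           # MCP endpoint (3748 is already the default)

# later: drop this repo's rows without touching other indexed repos
codegraph wipe ~/my-project --yes
```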
+
+ ## LLM Providers
+
+ | Preset | Generation | Embeddings | Required env var |
+ |---|---|---|---|
+ | `byo-openai` | gpt-4o-mini | text-embedding-3-small | `OPENAI_API_KEY` |
+ | `byo-anthropic` | claude-3-5-haiku-latest | text-embedding-3-small | `ANTHROPIC_API_KEY` + `OPENAI_API_KEY` |
+ | `byo-google` | gemini-1.5-flash-latest | text-embedding-004 | `GOOGLE_GENERATIVE_AI_API_KEY` |
+ | `local-ollama` | qwen2.5-coder:14b | nomic-embed-text | Ollama running locally |
+
+ ---
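The `local-ollama` row implies both models must already be present in a running Ollama instance. A likely setup sequence follows; the model names come straight from the table, while the `ollama pull` steps are standard Ollama usage rather than something this README spells out.

```bash
# assumes Ollama is installed and serving on its default local port
ollama pull qwen2.5-coder:14b    # generation model used by the preset
ollama pull nomic-embed-text     # embedding model used by the preset

codegraph config llm set local-ollama
codegraph config llm test        # round-trips generation + embeddings before indexing
```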
+
+ ## What you can ask
+
+ Once connected, ask your AI assistant:
+
+ - *"What calls `useAuth`?"*
+ - *"Show the component tree rooted at `App`."*
+ - *"What's the blast radius of renaming `formatPrice`?"*
+ - *"Find symbols semantically similar to 'JWT auth helper'."*
+ - *"What are the transitive imports of `src/lib/db.ts`?"*
+
+ ---
+
+ ## Configuration
+
+ Config is stored at `~/.codegraph/config.json` (auto-generated on first `serve`):
+
+ ```json
+ {
+   "llm": {
+     "mode": "byo",
+     "generation": { "provider": "openai", "model": "gpt-4o-mini" },
+     "embeddings": { "provider": "openai", "model": "text-embedding-3-small", "dimension": 1536 }
+   },
+   "server": {
+     "port": 3748,
+     "bearerToken": "<32-hex-chars>"
+   }
+ }
+ ```
+
+ You can also override via environment variables — see `.env.example` in the repo root.
+
+ ---
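Given the `server` block above, the bearer token can be read out of the config file when wiring up a client or doing a quick smoke test. The snippet below is an illustration, not a documented workflow: MCP clients speak JSON-RPC over this endpoint, so a bare request is only useful for checking that the server is up and enforcing auth.

```bash
TOKEN=$(jq -r '.server.bearerToken' ~/.codegraph/config.json)

# exact responses depend on the server's transport; the point is that the
# second request should get past auth while the first should be rejected
curl -i http://127.0.0.1:3748/mcp
curl -i -H "Authorization: Bearer $TOKEN" http://127.0.0.1:3748/mcp
```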
+
+ ## Troubleshooting
+
+ Run `codegraph doctor` first. For client-specific config issues, see the [full docs](https://github.com/Cirilcetra/codegraph/blob/main/docs/clients.md#troubleshooting).
+
+ ---
+
  ## License

  MIT
package/dist/bin.js CHANGED
@@ -2,10 +2,10 @@
  import {
  buildProgram,
  renderError
- } from "./chunk-5WYXRWEY.js";
- import "./chunk-B2TIVKUB.js";
- import "./chunk-AVP24SX5.js";
- import "./chunk-XGPZDCQ4.js";
+ } from "./chunk-KYPDPBI5.js";
+ import "./chunk-GOJIV25M.js";
+ import "./chunk-C2AULDUQ.js";
+ import "./chunk-Z6DQLXRR.js";

  // src/bin.ts
  import { CommanderError } from "commander";
@@ -1,7 +1,7 @@
  import {
  EDGE_KINDS,
  NODE_KINDS
- } from "./chunk-XGPZDCQ4.js";
+ } from "./chunk-Z6DQLXRR.js";

  // ../graph-db/src/client.ts
  import { mkdir } from "fs/promises";
@@ -387,4 +387,4 @@ export {
  defaultDbPath,
  GraphDb
  };
- //# sourceMappingURL=chunk-AVP24SX5.js.map
+ //# sourceMappingURL=chunk-C2AULDUQ.js.map
@@ -1,6 +1,6 @@
  import {
  namespaceToString
- } from "./chunk-XGPZDCQ4.js";
+ } from "./chunk-Z6DQLXRR.js";

  // ../llm-router/src/router.ts
  import { embedMany, generateText } from "ai";
@@ -15,7 +15,7 @@ async function createLlmRouter(opts) {
  }
  };
  const generationModel = await buildGenerationModel(opts.config, env);
- const embeddingModel = await buildEmbeddingModel(opts.config.embeddings, env);
+ const embeddingModel = await buildEmbeddingModel(opts.config.embeddings, env, opts.config.baseUrl);
  async function doEmbed(texts) {
  if (texts.length === 0) return [];
  const { embeddings } = await embedMany({ model: embeddingModel, values: texts });
@@ -71,8 +71,8 @@ async function buildGenerationModel(config, env) {
  if (provider === "ollama") return ollamaText(config.generation, env, config.baseUrl);
  throw new Error(`Unsupported generation provider: ${provider}`);
  }
- async function buildEmbeddingModel(config, env) {
- if (config.provider === "openai") return openaiEmbedding(config, env);
+ async function buildEmbeddingModel(config, env, baseUrl) {
+ if (config.provider === "openai") return openaiEmbedding(config, env, baseUrl);
  if (config.provider === "google") return googleEmbedding(config, env);
  if (config.provider === "ollama") return ollamaEmbedding(config, env);
  if (config.provider === "anthropic") {
@@ -91,9 +91,12 @@ async function openaiText(config, env, baseUrl) {
  });
  return openai(config.model);
  }
- async function openaiEmbedding(config, env) {
+ async function openaiEmbedding(config, env, baseUrl) {
  const { createOpenAI } = await import("@ai-sdk/openai");
- const openai = createOpenAI(env.OPENAI_API_KEY ? { apiKey: env.OPENAI_API_KEY } : {});
+ const openai = createOpenAI({
+ ...env.OPENAI_API_KEY ? { apiKey: env.OPENAI_API_KEY } : {},
+ ...baseUrl ? { baseURL: baseUrl } : {}
+ });
  return openai.embedding(config.model);
  }
  async function anthropicText(config, env, baseUrl) {
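The hunks above thread the preset's optional `baseUrl` through `buildEmbeddingModel` into the OpenAI embedding factory, where previously only the generation path honored it. Stripped of bundler artifacts, the new call shape is roughly the TypeScript below; the config type is simplified here, but the `createOpenAI` options and `openai.embedding(...)` call match the source visible in the bundled source map.

```ts
import { createOpenAI } from "@ai-sdk/openai";

// Sketch of the updated openaiEmbedding(): the API key and the optional
// baseURL override are both forwarded to the provider factory, so an
// OpenAI-compatible proxy now serves embeddings as well as generation.
async function openaiEmbedding(
  config: { model: string },
  env: NodeJS.ProcessEnv,
  baseUrl?: string,
) {
  const openai = createOpenAI({
    ...(env.OPENAI_API_KEY ? { apiKey: env.OPENAI_API_KEY } : {}),
    ...(baseUrl ? { baseURL: baseUrl } : {}),
  });
  return openai.embedding(config.model);
}
```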
@@ -136,4 +139,4 @@ export {
  createLlmRouter,
  namespaceLabel
  };
- //# sourceMappingURL=chunk-B2TIVKUB.js.map
+ //# sourceMappingURL=chunk-GOJIV25M.js.map
@@ -0,0 +1 @@
+ {"version":3,"sources":["../../llm-router/src/router.ts"],"sourcesContent":["import type { EmbeddingModel, LanguageModel } from \"ai\";\nimport { embedMany, generateText } from \"ai\";\nimport type { LlmConfig, LlmEmbeddingConfig, LlmProviderConfig } from \"@codegraph/shared\";\nimport { namespaceToString } from \"@codegraph/shared\";\nimport type { GenerateOptions, LlmRouter, ResolvedLlmConfig } from \"./types.js\";\n\nexport interface CreateRouterOptions {\n config: LlmConfig;\n env?: NodeJS.ProcessEnv;\n}\n\nexport async function createLlmRouter(opts: CreateRouterOptions): Promise<LlmRouter> {\n const env = opts.env ?? process.env;\n const resolved: ResolvedLlmConfig = {\n ...opts.config,\n embeddingNamespace: {\n provider: opts.config.embeddings.provider,\n model: opts.config.embeddings.model,\n dimension: opts.config.embeddings.dimension,\n },\n };\n\n const generationModel = await buildGenerationModel(opts.config, env);\n const embeddingModel = await buildEmbeddingModel(opts.config.embeddings, env, opts.config.baseUrl);\n\n async function doEmbed(texts: string[]): Promise<number[][]> {\n if (texts.length === 0) return [];\n const { embeddings } = await embedMany({ model: embeddingModel, values: texts });\n return embeddings;\n }\n\n async function doGenerate(options: GenerateOptions): Promise<string> {\n const { text } = await generateText({\n model: generationModel,\n ...(options.system !== undefined ? { system: options.system } : {}),\n messages: options.messages,\n ...(options.temperature !== undefined ? { temperature: options.temperature } : {}),\n ...(options.maxTokens !== undefined ? { maxTokens: options.maxTokens } : {}),\n });\n return text;\n }\n\n const router: LlmRouter = {\n config: resolved,\n embed: doEmbed,\n generate: doGenerate,\n async selfTest() {\n const embedStart = Date.now();\n const embeds = await doEmbed([\"codegraph self-test\"]);\n const embedLatencyMs = Date.now() - embedStart;\n const genStart = Date.now();\n const out = await doGenerate({\n messages: [{ role: \"user\", content: \"Reply with the single word ok.\" }],\n maxTokens: 5,\n temperature: 0,\n });\n const generateLatencyMs = Date.now() - genStart;\n return {\n embedDims: embeds[0]?.length ?? 
0,\n generationOk: typeof out === \"string\" && out.length > 0,\n embedLatencyMs,\n generateLatencyMs,\n };\n },\n };\n return router;\n}\n\n/**\n * Used by the CLI's `config llm` printer; surfaces the resolved namespace\n * tag without forcing a provider import upstream.\n */\nexport function namespaceLabel(config: LlmConfig): string {\n return namespaceToString({\n provider: config.embeddings.provider,\n model: config.embeddings.model,\n dimension: config.embeddings.dimension,\n });\n}\n\nasync function buildGenerationModel(\n config: LlmConfig,\n env: NodeJS.ProcessEnv,\n): Promise<LanguageModel> {\n const provider = config.generation.provider;\n if (provider === \"openai\") return openaiText(config.generation, env, config.baseUrl);\n if (provider === \"anthropic\") return anthropicText(config.generation, env, config.baseUrl);\n if (provider === \"google\") return googleText(config.generation, env, config.baseUrl);\n if (provider === \"ollama\") return ollamaText(config.generation, env, config.baseUrl);\n throw new Error(`Unsupported generation provider: ${provider}`);\n}\n\nasync function buildEmbeddingModel(\n config: LlmEmbeddingConfig,\n env: NodeJS.ProcessEnv,\n baseUrl?: string,\n): Promise<EmbeddingModel<string>> {\n if (config.provider === \"openai\") return openaiEmbedding(config, env, baseUrl);\n if (config.provider === \"google\") return googleEmbedding(config, env);\n if (config.provider === \"ollama\") return ollamaEmbedding(config, env);\n if (config.provider === \"anthropic\") {\n // Anthropic has no embedding endpoint; fall back to OpenAI so the\n // pipeline keeps working when generation is anthropic.\n return openaiEmbedding(\n { provider: \"openai\", model: \"text-embedding-3-small\", dimension: 1536 },\n env,\n );\n }\n throw new Error(`Unsupported embedding provider: ${config.provider}`);\n}\n\nasync function openaiText(\n config: LlmProviderConfig,\n env: NodeJS.ProcessEnv,\n baseUrl?: string,\n): Promise<LanguageModel> {\n const { createOpenAI } = await import(\"@ai-sdk/openai\");\n const openai = createOpenAI({\n ...(env.OPENAI_API_KEY ? { apiKey: env.OPENAI_API_KEY } : {}),\n ...(baseUrl ? { baseURL: baseUrl } : {}),\n });\n return openai(config.model);\n}\n\nasync function openaiEmbedding(\n config: LlmEmbeddingConfig,\n env: NodeJS.ProcessEnv,\n baseUrl?: string,\n): Promise<EmbeddingModel<string>> {\n const { createOpenAI } = await import(\"@ai-sdk/openai\");\n const openai = createOpenAI({\n ...(env.OPENAI_API_KEY ? { apiKey: env.OPENAI_API_KEY } : {}),\n ...(baseUrl ? { baseURL: baseUrl } : {}),\n });\n return openai.embedding(config.model);\n}\n\nasync function anthropicText(\n config: LlmProviderConfig,\n env: NodeJS.ProcessEnv,\n baseUrl?: string,\n): Promise<LanguageModel> {\n const { createAnthropic } = await import(\"@ai-sdk/anthropic\");\n const anthropic = createAnthropic({\n ...(env.ANTHROPIC_API_KEY ? { apiKey: env.ANTHROPIC_API_KEY } : {}),\n ...(baseUrl ? { baseURL: baseUrl } : {}),\n });\n return anthropic(config.model);\n}\n\nasync function googleText(\n config: LlmProviderConfig,\n env: NodeJS.ProcessEnv,\n baseUrl?: string,\n): Promise<LanguageModel> {\n const { createGoogleGenerativeAI } = await import(\"@ai-sdk/google\");\n const google = createGoogleGenerativeAI({\n ...(env.GOOGLE_GENERATIVE_AI_API_KEY\n ? { apiKey: env.GOOGLE_GENERATIVE_AI_API_KEY }\n : {}),\n ...(baseUrl ? 
{ baseURL: baseUrl } : {}),\n });\n return google(config.model);\n}\n\nasync function googleEmbedding(\n config: LlmEmbeddingConfig,\n env: NodeJS.ProcessEnv,\n): Promise<EmbeddingModel<string>> {\n const { createGoogleGenerativeAI } = await import(\"@ai-sdk/google\");\n const google = createGoogleGenerativeAI(\n env.GOOGLE_GENERATIVE_AI_API_KEY\n ? { apiKey: env.GOOGLE_GENERATIVE_AI_API_KEY }\n : {},\n );\n return google.textEmbeddingModel(config.model);\n}\n\nasync function ollamaText(\n config: LlmProviderConfig,\n env: NodeJS.ProcessEnv,\n baseUrl?: string,\n): Promise<LanguageModel> {\n const { createOllama } = await import(\"ollama-ai-provider\");\n const base = baseUrl ?? env.OLLAMA_BASE_URL;\n const ollama = createOllama(base ? { baseURL: `${base.replace(/\\/$/, \"\")}/api` } : {});\n return ollama(config.model);\n}\n\nasync function ollamaEmbedding(\n config: LlmEmbeddingConfig,\n env: NodeJS.ProcessEnv,\n): Promise<EmbeddingModel<string>> {\n const { createOllama } = await import(\"ollama-ai-provider\");\n const base = env.OLLAMA_BASE_URL;\n const ollama = createOllama(base ? { baseURL: `${base.replace(/\\/$/, \"\")}/api` } : {});\n return ollama.embedding(config.model);\n}\n"],"mappings":";;;;;AACA,SAAS,WAAW,oBAAoB;AAUxC,eAAsB,gBAAgB,MAA+C;AACnF,QAAM,MAAM,KAAK,OAAO,QAAQ;AAChC,QAAM,WAA8B;AAAA,IAClC,GAAG,KAAK;AAAA,IACR,oBAAoB;AAAA,MAClB,UAAU,KAAK,OAAO,WAAW;AAAA,MACjC,OAAO,KAAK,OAAO,WAAW;AAAA,MAC9B,WAAW,KAAK,OAAO,WAAW;AAAA,IACpC;AAAA,EACF;AAEA,QAAM,kBAAkB,MAAM,qBAAqB,KAAK,QAAQ,GAAG;AACnE,QAAM,iBAAiB,MAAM,oBAAoB,KAAK,OAAO,YAAY,KAAK,KAAK,OAAO,OAAO;AAEjG,iBAAe,QAAQ,OAAsC;AAC3D,QAAI,MAAM,WAAW,EAAG,QAAO,CAAC;AAChC,UAAM,EAAE,WAAW,IAAI,MAAM,UAAU,EAAE,OAAO,gBAAgB,QAAQ,MAAM,CAAC;AAC/E,WAAO;AAAA,EACT;AAEA,iBAAe,WAAW,SAA2C;AACnE,UAAM,EAAE,KAAK,IAAI,MAAM,aAAa;AAAA,MAClC,OAAO;AAAA,MACP,GAAI,QAAQ,WAAW,SAAY,EAAE,QAAQ,QAAQ,OAAO,IAAI,CAAC;AAAA,MACjE,UAAU,QAAQ;AAAA,MAClB,GAAI,QAAQ,gBAAgB,SAAY,EAAE,aAAa,QAAQ,YAAY,IAAI,CAAC;AAAA,MAChF,GAAI,QAAQ,cAAc,SAAY,EAAE,WAAW,QAAQ,UAAU,IAAI,CAAC;AAAA,IAC5E,CAAC;AACD,WAAO;AAAA,EACT;AAEA,QAAM,SAAoB;AAAA,IACxB,QAAQ;AAAA,IACR,OAAO;AAAA,IACP,UAAU;AAAA,IACV,MAAM,WAAW;AACf,YAAM,aAAa,KAAK,IAAI;AAC5B,YAAM,SAAS,MAAM,QAAQ,CAAC,qBAAqB,CAAC;AACpD,YAAM,iBAAiB,KAAK,IAAI,IAAI;AACpC,YAAM,WAAW,KAAK,IAAI;AAC1B,YAAM,MAAM,MAAM,WAAW;AAAA,QAC3B,UAAU,CAAC,EAAE,MAAM,QAAQ,SAAS,iCAAiC,CAAC;AAAA,QACtE,WAAW;AAAA,QACX,aAAa;AAAA,MACf,CAAC;AACD,YAAM,oBAAoB,KAAK,IAAI,IAAI;AACvC,aAAO;AAAA,QACL,WAAW,OAAO,CAAC,GAAG,UAAU;AAAA,QAChC,cAAc,OAAO,QAAQ,YAAY,IAAI,SAAS;AAAA,QACtD;AAAA,QACA;AAAA,MACF;AAAA,IACF;AAAA,EACF;AACA,SAAO;AACT;AAMO,SAAS,eAAe,QAA2B;AACxD,SAAO,kBAAkB;AAAA,IACvB,UAAU,OAAO,WAAW;AAAA,IAC5B,OAAO,OAAO,WAAW;AAAA,IACzB,WAAW,OAAO,WAAW;AAAA,EAC/B,CAAC;AACH;AAEA,eAAe,qBACb,QACA,KACwB;AACxB,QAAM,WAAW,OAAO,WAAW;AACnC,MAAI,aAAa,SAAU,QAAO,WAAW,OAAO,YAAY,KAAK,OAAO,OAAO;AACnF,MAAI,aAAa,YAAa,QAAO,cAAc,OAAO,YAAY,KAAK,OAAO,OAAO;AACzF,MAAI,aAAa,SAAU,QAAO,WAAW,OAAO,YAAY,KAAK,OAAO,OAAO;AACnF,MAAI,aAAa,SAAU,QAAO,WAAW,OAAO,YAAY,KAAK,OAAO,OAAO;AACnF,QAAM,IAAI,MAAM,oCAAoC,QAAQ,EAAE;AAChE;AAEA,eAAe,oBACb,QACA,KACA,SACiC;AACjC,MAAI,OAAO,aAAa,SAAU,QAAO,gBAAgB,QAAQ,KAAK,OAAO;AAC7E,MAAI,OAAO,aAAa,SAAU,QAAO,gBAAgB,QAAQ,GAAG;AACpE,MAAI,OAAO,aAAa,SAAU,QAAO,gBAAgB,QAAQ,GAAG;AACpE,MAAI,OAAO,aAAa,aAAa;AAGnC,WAAO;AAAA,MACL,EAAE,UAAU,UAAU,OAAO,0BAA0B,WAAW,KAAK;AAAA,MACvE;AAAA,IACF;AAAA,EACF;AACA,QAAM,IAAI,MAAM,mCAAmC,OAAO,QAAQ,EAAE;AACtE;AAEA,eAAe,WACb,QACA,KACA,SACwB;AACxB,QAAM,EAAE,aAAa,IAAI,MAAM,OAAO,gBAAgB;AACtD,QAAM,SAAS,aAAa;AAAA,IAC1B,GAAI,IAAI,iBAAiB,EAAE,QAAQ,IAAI,eAAe,IAAI,CAAC;AAAA,IAC3D,GAAI,UAAU,EAAE,SAAS,QAAQ,I
AAI,CAAC;AAAA,EACxC,CAAC;AACD,SAAO,OAAO,OAAO,KAAK;AAC5B;AAEA,eAAe,gBACb,QACA,KACA,SACiC;AACjC,QAAM,EAAE,aAAa,IAAI,MAAM,OAAO,gBAAgB;AACtD,QAAM,SAAS,aAAa;AAAA,IAC1B,GAAI,IAAI,iBAAiB,EAAE,QAAQ,IAAI,eAAe,IAAI,CAAC;AAAA,IAC3D,GAAI,UAAU,EAAE,SAAS,QAAQ,IAAI,CAAC;AAAA,EACxC,CAAC;AACD,SAAO,OAAO,UAAU,OAAO,KAAK;AACtC;AAEA,eAAe,cACb,QACA,KACA,SACwB;AACxB,QAAM,EAAE,gBAAgB,IAAI,MAAM,OAAO,mBAAmB;AAC5D,QAAM,YAAY,gBAAgB;AAAA,IAChC,GAAI,IAAI,oBAAoB,EAAE,QAAQ,IAAI,kBAAkB,IAAI,CAAC;AAAA,IACjE,GAAI,UAAU,EAAE,SAAS,QAAQ,IAAI,CAAC;AAAA,EACxC,CAAC;AACD,SAAO,UAAU,OAAO,KAAK;AAC/B;AAEA,eAAe,WACb,QACA,KACA,SACwB;AACxB,QAAM,EAAE,yBAAyB,IAAI,MAAM,OAAO,gBAAgB;AAClE,QAAM,SAAS,yBAAyB;AAAA,IACtC,GAAI,IAAI,+BACJ,EAAE,QAAQ,IAAI,6BAA6B,IAC3C,CAAC;AAAA,IACL,GAAI,UAAU,EAAE,SAAS,QAAQ,IAAI,CAAC;AAAA,EACxC,CAAC;AACD,SAAO,OAAO,OAAO,KAAK;AAC5B;AAEA,eAAe,gBACb,QACA,KACiC;AACjC,QAAM,EAAE,yBAAyB,IAAI,MAAM,OAAO,gBAAgB;AAClE,QAAM,SAAS;AAAA,IACb,IAAI,+BACA,EAAE,QAAQ,IAAI,6BAA6B,IAC3C,CAAC;AAAA,EACP;AACA,SAAO,OAAO,mBAAmB,OAAO,KAAK;AAC/C;AAEA,eAAe,WACb,QACA,KACA,SACwB;AACxB,QAAM,EAAE,aAAa,IAAI,MAAM,OAAO,oBAAoB;AAC1D,QAAM,OAAO,WAAW,IAAI;AAC5B,QAAM,SAAS,aAAa,OAAO,EAAE,SAAS,GAAG,KAAK,QAAQ,OAAO,EAAE,CAAC,OAAO,IAAI,CAAC,CAAC;AACrF,SAAO,OAAO,OAAO,KAAK;AAC5B;AAEA,eAAe,gBACb,QACA,KACiC;AACjC,QAAM,EAAE,aAAa,IAAI,MAAM,OAAO,oBAAoB;AAC1D,QAAM,OAAO,IAAI;AACjB,QAAM,SAAS,aAAa,OAAO,EAAE,SAAS,GAAG,KAAK,QAAQ,OAAO,EAAE,CAAC,OAAO,IAAI,CAAC,CAAC;AACrF,SAAO,OAAO,UAAU,OAAO,KAAK;AACtC;","names":[]}