@diabolicallabs/llm-client 0.1.1 → 0.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,12 +1,12 @@
  # @diabolicallabs/llm-client
 
- Unified LLM API across Anthropic, OpenAI, Google Gemini, and DeepSeek. Single interface for completion, streaming, and structured output. All provider errors are normalized into a consistent `LlmError` shape. © Diabolical Labs
+ Unified LLM API across Anthropic, OpenAI, Google Gemini, DeepSeek, and Perplexity. Single interface for completion, streaming, and structured output. All provider errors are normalized into a consistent `LlmError` shape. © Diabolical Labs
 
  **Pre-1.0. APIs may change between minor versions.**
 
  ## Status
 
- **Published — v0.1.0.** All four providers are implemented. A fifth provider (Perplexity) is a stub and will be implemented in a future release.
+ **Published — v0.2.0.** All five providers are implemented. Perplexity adds web-grounded responses with citation extraction and search filters.
 
  ## Install
 
@@ -55,9 +55,79 @@ const result = await client.structured(messages, schema);
  |---|---|---|
  | `anthropic` | Implemented | `ANTHROPIC_API_KEY` |
  | `openai` | Implemented | `OPENAI_API_KEY` |
- | `google` | Implemented | `GOOGLE_AI_API_KEY` |
+ | `gemini` | Implemented | `GOOGLE_AI_API_KEY` |
  | `deepseek` | Implemented | `DEEPSEEK_API_KEY` |
- | `perplexity` | Stub throws `LlmError` | — |
+ | `perplexity` | Implemented | `PERPLEXITY_API_KEY` |
+
+ ## Perplexity — web-grounded responses
+
+ The Perplexity provider returns real-time web-grounded answers with source citations. Use it via `createClient` or `createClientFromEnv`:
+
+ ```typescript
+ const client = createClientFromEnv('perplexity', 'sonar');
+ const response = await client.complete([
+   { role: 'user', content: 'What happened in AI this week?' },
+ ]);
+
+ // Citations are deduplicated by URL
+ console.log(response.citations);
+ // [{ url: 'https://example.com/article' }, { url: 'https://reuters.com/story' }]
+ ```
+
+ ### Citations
+
+ `LlmResponse.citations` is populated when Perplexity returns source URLs. It is `undefined` for all other providers.
+
+ ```typescript
+ interface LlmResponse {
+   content: string;
+   model: string;
+   usage: LlmUsage;
+   latencyMs: number;
+   citations?: Array<{
+     url: string;
+     title?: string; // Perplexity currently returns URLs only; title is always undefined
+   }>;
+ }
+ ```
+
+ Citations are deduplicated by URL within a single response. They are **not available in stream mode** — use `complete()` when you need citations.
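+
+ Because `citations` is optional, guard before using it. A minimal sketch (the prompt is illustrative):
+
+ ```typescript
+ const res = await client.complete([
+   { role: 'user', content: 'Summarize one tech story with sources.' },
+ ]);
+
+ // Falls through silently for providers that never set citations
+ for (const { url } of res.citations ?? []) {
+   console.log(`source: ${url}`);
+ }
+ ```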
+
+ ### Search filters via `providerOptions`
+
+ Perplexity supports search-specific parameters. Pass them via the `providerOptions` escape hatch on any call:
+
+ ```typescript
+ await client.complete(messages, {
+   providerOptions: {
+     search_recency_filter: 'week', // 'month' | 'week' | 'day' | 'hour'
+     search_domain_filter: ['nytimes.com', 'reuters.com'], // allowlist
+   },
+ });
+ ```
+
+ `providerOptions` is `Record<string, unknown>` — unknown fields are forwarded to the Perplexity API unchanged, so newly released filters work without a toolkit update. Other providers ignore `providerOptions`.
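+
+ For example, a field this package has never heard of still reaches the API (`search_mode` is illustrative here — check Perplexity's docs for currently supported fields):
+
+ ```typescript
+ await client.complete(messages, {
+   providerOptions: { search_mode: 'academic' }, // forwarded as-is, unvalidated
+ });
+ ```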
+
+ ### Reasoning models
+
+ Pass reasoning model IDs as the `model` string:
+
+ ```typescript
+ const client = createClientFromEnv('perplexity', 'sonar-reasoning-pro');
+ ```
+
+ Available models (verified 2026-05-08):
+
+ | Model | Notes |
+ |---|---|
+ | `sonar` | Lightweight search model. Recommended default. |
+ | `sonar-pro` | Advanced search, more citations. |
+ | `sonar-reasoning-pro` | Chain-of-thought reasoning. Replaces the deprecated `sonar-reasoning`. |
+ | `sonar-deep-research` | Exhaustive research. Perplexity docs indicate async job support — treat as experimental with this toolkit. |
+
+ `structured()` with `sonar-reasoning-pro` works correctly — reasoning tokens (`<think>...</think>`) are stripped before JSON parsing.
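+
+ For example, a `structured()` call against a reasoning model might look like this (a sketch — the schema and prompt are illustrative, and it assumes `zod` is installed):
+
+ ```typescript
+ import { z } from 'zod';
+
+ const Story = z.object({ title: z.string(), url: z.string() });
+
+ // Reasoning tokens are stripped internally before the JSON is parsed and validated
+ const { data } = await client.structured(
+   [{ role: 'user', content: 'Return the top AI story as JSON with title and url.' }],
+   z.object({ stories: z.array(Story) }),
+ );
+ ```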
+
+ `sonar-deep-research` is accepted as a model string. If Perplexity's API returns an incompatible async response shape, the call will throw a clear `LlmError`. In that case, use `sonar-reasoning-pro` instead, or wait for a future deep-research-specific brief.
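+
+ A defensive sketch for that case (illustrative — branch on `LlmError.retryable` and `provider`, not on message text):
+
+ ```typescript
+ import { LlmError } from '@diabolicallabs/llm-client';
+
+ try {
+   await client.complete(messages, { model: 'sonar-deep-research' });
+ } catch (err) {
+   if (err instanceof LlmError && !err.retryable) {
+     // Fall back to the supported reasoning model
+     await client.complete(messages, { model: 'sonar-reasoning-pro' });
+   } else {
+     throw err;
+   }
+ }
+ ```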
 
  ## API
 
@@ -70,17 +140,29 @@ Creates an `LlmClient` for the given provider.
  Reads the API key from the environment automatically:
  - `anthropic` → `ANTHROPIC_API_KEY`
  - `openai` → `OPENAI_API_KEY`
- - `google` → `GOOGLE_AI_API_KEY`
+ - `gemini` → `GOOGLE_AI_API_KEY`
  - `deepseek` → `DEEPSEEK_API_KEY`
+ - `perplexity` → `PERPLEXITY_API_KEY`
 
  ### `LlmClient` interface
 
  | Method | Description |
  |---|---|
- | `complete(messages, options?)` | Non-streaming completion. Returns `LlmResponse`. |
- | `stream(messages, options?)` | Streaming — async generator of `LlmStreamChunk`. Final chunk includes `usage`. |
+ | `complete(messages, options?)` | Non-streaming completion. Returns `LlmResponse` (includes `citations` for Perplexity). |
+ | `stream(messages, options?)` | Streaming — async generator of `LlmStreamChunk`. Final chunk includes `usage`. Citations unavailable. |
  | `structured(messages, schema, options?)` | Structured output validated against a Zod schema. Returns `LlmStructuredResponse<T>`. |
 
+ All methods accept `LlmCallOptions` as the options parameter:
+
+ ```typescript
+ interface LlmCallOptions {
+   model?: string;
+   maxTokens?: number;
+   temperature?: number;
+   providerOptions?: Record<string, unknown>; // Perplexity search filters, etc.
+ }
+ ```
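+
+ Per-call options override the values set in the client config, e.g.:
+
+ ```typescript
+ await client.complete(messages, { model: 'sonar-pro', temperature: 0.2, maxTokens: 512 });
+ ```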
+
  ## Error handling
 
  All provider errors are normalized into `LlmError`:
package/dist/index.d.ts CHANGED
@@ -2,6 +2,10 @@
   * Core type definitions for @diabolicallabs/llm-client.
   * These are the stable public API surface — implementation is in Week 2.
   * Types here match the spec in briefs/brief-platform.md §4.1 exactly.
+  *
+  * Week 5 additions:
+  *   LlmResponse.citations — populated by the Perplexity provider; undefined for all others.
+  *   LlmCallOptions — per-call options type extracted for reuse; adds providerOptions escape hatch.
   */
  interface LlmMessage {
    role: 'system' | 'user' | 'assistant';
@@ -29,6 +33,27 @@ interface LlmResponse {
    model: string;
    usage: LlmUsage;
    latencyMs: number;
+   /**
+    * Web citations returned by the Perplexity provider.
+    * Populated only when the Perplexity API returns source references.
+    * Always undefined for Anthropic, OpenAI, Gemini, and DeepSeek.
+    * Deduplicated by URL within a single response.
+    */
+   citations?: Array<{
+     url: string;
+     title?: string;
+   }>;
+ }
+ /**
+  * Per-call options shared across complete(), stream(), and structured().
+  * Extends the standard model/maxTokens/temperature overrides with:
+  *   providerOptions — generic escape hatch for provider-specific parameters.
+  *   The Perplexity provider reads search_domain_filter and
+  *   search_recency_filter from this field; other providers ignore it.
+  *   Unknown fields are passed through unchanged.
+  */
+ interface LlmCallOptions extends Partial<Pick<LlmClientConfig, 'model' | 'maxTokens' | 'temperature'>> {
+   providerOptions?: Record<string, unknown>;
  }
  interface LlmStreamChunk {
    token: string;
@@ -55,11 +80,11 @@ type LlmStructuredResponse<T> = {
  };
  interface LlmClient {
    readonly config: Readonly<LlmClientConfig>;
-   complete(messages: LlmMessage[], options?: Partial<Pick<LlmClientConfig, 'model' | 'maxTokens' | 'temperature'>>): Promise<LlmResponse>;
-   stream(messages: LlmMessage[], options?: Partial<Pick<LlmClientConfig, 'model' | 'maxTokens' | 'temperature'>>): AsyncGenerator<LlmStreamChunk>;
+   complete(messages: LlmMessage[], options?: LlmCallOptions): Promise<LlmResponse>;
+   stream(messages: LlmMessage[], options?: LlmCallOptions): AsyncGenerator<LlmStreamChunk>;
    structured<T>(messages: LlmMessage[], schema: {
      parse: (data: unknown) => T;
-   }, options?: Partial<Pick<LlmClientConfig, 'model' | 'maxTokens' | 'temperature'>>): Promise<LlmStructuredResponse<T>>;
+   }, options?: LlmCallOptions): Promise<LlmStructuredResponse<T>>;
  }
 
  /**
@@ -73,15 +98,13 @@ interface LlmClient {
   * 'openai' → fully implemented (Week 2)
   * 'gemini' → fully implemented (Week 3)
   * 'deepseek' → fully implemented (Week 3)
-  * 'perplexity' → stub, throws "not yet implemented" (later week)
+  * 'perplexity' → fully implemented (Week 5): search-grounded, citations, providerOptions
   */
 
  /**
   * Create an LlmClient for the given provider and config.
   * Dispatches to the provider-specific implementation.
-  *
-  * Anthropic, OpenAI, Gemini, and DeepSeek are fully implemented.
-  * Perplexity is a type-registered stub that throws "not yet implemented".
+  * All five providers are fully implemented.
   */
  declare function createClient(config: LlmClientConfig): LlmClient;
  /**
@@ -92,10 +115,10 @@ declare function createClient(config: LlmClientConfig): LlmClient;
   * openai → OPENAI_API_KEY
   * gemini → GOOGLE_AI_API_KEY
   * deepseek → DEEPSEEK_API_KEY
-  * perplexity → PERPLEXITY_API_KEY
+  * perplexity → PERPLEXITY_API_KEY — recommended default model: 'sonar'
   *
   * Throws LlmError if the required env var is not set.
   */
  declare function createClientFromEnv(provider: LlmClientConfig['provider'], model: string, overrides?: Partial<Omit<LlmClientConfig, 'provider' | 'model' | 'apiKey'>>): LlmClient;
 
- export { type LlmClient, type LlmClientConfig, LlmError, type LlmMessage, type LlmResponse, type LlmStreamChunk, type LlmStructuredResponse, type LlmUsage, createClient, createClientFromEnv };
+ export { type LlmCallOptions, type LlmClient, type LlmClientConfig, LlmError, type LlmMessage, type LlmResponse, type LlmStreamChunk, type LlmStructuredResponse, type LlmUsage, createClient, createClientFromEnv };
package/dist/index.js CHANGED
@@ -828,39 +828,224 @@ function createOpenAIProvider(config) {
    };
  }
 
- // src/providers/stubs.ts
- function rejectingStream(err) {
-   const rejected = Promise.reject(err);
-   rejected.catch(() => void 0);
+ // src/providers/perplexity.ts
+ import OpenAI3 from "openai";
+ var PROVIDER5 = "perplexity";
+ var PERPLEXITY_BASE_URL = "https://api.perplexity.ai";
+ function normalizeUsage5(usage) {
+   const inputTokens = usage?.prompt_tokens ?? 0;
+   const outputTokens = usage?.completion_tokens ?? 0;
    return {
-     next: () => rejected,
-     return: () => Promise.resolve({ value: void 0, done: true }),
-     throw: () => Promise.reject(err),
-     [Symbol.asyncIterator]() {
-       return this;
-     },
-     [Symbol.asyncDispose]: async () => void 0
+     inputTokens,
+     outputTokens,
+     totalTokens: usage?.total_tokens ?? inputTokens + outputTokens
    };
  }
- function notImplemented(provider) {
-   const err = new LlmError({
-     message: `[dlabs-toolkit] Provider '${provider}' is not yet implemented. Anthropic, OpenAI, Gemini, and DeepSeek are available; Perplexity ships in a later week.`,
-     provider,
-     retryable: false
+ function buildMessages2(messages) {
+   return messages.map((m) => ({
+     role: m.role,
+     content: m.content
+   }));
+ }
+ function extractCitations(response) {
+   const rawCitations = response.citations;
+   if (rawCitations === void 0 || rawCitations.length === 0) return void 0;
+   const seen = /* @__PURE__ */ new Set();
+   const deduped = [];
+   for (const url of rawCitations) {
+     if (!seen.has(url)) {
+       seen.add(url);
+       deduped.push({ url });
+     }
+   }
+   return deduped.length > 0 ? deduped : void 0;
+ }
+ function extractProviderOptions(providerOptions) {
+   if (providerOptions === void 0) return {};
+   return { ...providerOptions };
+ }
+ function normalizePerplexityError(err) {
+   if (err instanceof LlmError) return err;
+   if (typeof OpenAI3.APIConnectionError === "function" && err instanceof OpenAI3.APIConnectionError) {
+     return new LlmError({
+       message: err.message,
+       provider: PROVIDER5,
+       retryable: true,
+       cause: err
+     });
+   }
+   if (typeof OpenAI3.APIError === "function" && err instanceof OpenAI3.APIError) {
+     const status = err.status;
+     if (status !== void 0) {
+       const retryable = [429, 502, 503, 504].includes(status) || status >= 500;
+       return new LlmError({
+         message: err.message,
+         provider: PROVIDER5,
+         statusCode: status,
+         retryable,
+         cause: err
+       });
+     }
+     return new LlmError({ message: err.message, provider: PROVIDER5, retryable: false, cause: err });
+   }
+   return normalizeThrownError(err, PROVIDER5);
+ }
+ function createPerplexityProvider(config) {
+   const client = new OpenAI3({
+     apiKey: config.apiKey,
+     baseURL: PERPLEXITY_BASE_URL,
+     timeout: config.timeoutMs ?? 3e4,
+     maxRetries: 0
+     // Retries managed by withRetry
    });
+   const retryOpts = {
+     maxRetries: config.maxRetries ?? 3,
+     baseDelayMs: config.baseDelayMs ?? 1e3,
+     provider: PROVIDER5
+   };
+   async function complete(messages, options) {
+     const model = options?.model ?? config.model;
+     const chatMessages = buildMessages2(messages);
+     const start = Date.now();
+     const extraParams = extractProviderOptions(options?.providerOptions);
+     return withRetry(async () => {
+       try {
+         const params = {
+           model,
+           messages: chatMessages,
+           stream: false,
+           ...extraParams
+         };
+         const maxTokens = options?.maxTokens ?? config.maxTokens;
+         if (maxTokens !== void 0) params.max_tokens = maxTokens;
+         const temperature = options?.temperature ?? config.temperature;
+         if (temperature !== void 0) params.temperature = temperature;
+         const rawResponse = await client.chat.completions.create(
+           params
+         );
+         const response = rawResponse;
+         const content = response.choices.map((c) => c.message.content ?? "").join("");
+         const result = {
+           content,
+           model: response.model,
+           usage: normalizeUsage5(response.usage),
+           latencyMs: Date.now() - start
+         };
+         const citations = extractCitations(response);
+         if (citations !== void 0) result.citations = citations;
+         return result;
+       } catch (err) {
+         throw normalizePerplexityError(err);
+       }
+     }, retryOpts);
+   }
+   async function* stream(messages, options) {
+     const model = options?.model ?? config.model;
+     const chatMessages = buildMessages2(messages);
+     const extraParams = extractProviderOptions(options?.providerOptions);
+     const params = {
+       model,
+       messages: chatMessages,
+       stream: true,
+       stream_options: { include_usage: true },
+       ...extraParams
+     };
+     const maxTokens = options?.maxTokens ?? config.maxTokens;
+     if (maxTokens !== void 0) params.max_tokens = maxTokens;
+     const temperature = options?.temperature ?? config.temperature;
+     if (temperature !== void 0) params.temperature = temperature;
+     let sdkStream;
+     try {
+       sdkStream = await client.chat.completions.create(
+         params
+       );
+     } catch (err) {
+       throw normalizePerplexityError(err);
+     }
+     let finalUsage;
+     try {
+       for await (const chunk of sdkStream) {
+         const delta = chunk.choices[0]?.delta.content;
+         if (delta !== void 0 && delta !== null && delta.length > 0) {
+           yield { token: delta };
+         }
+         if (chunk.usage !== void 0 && chunk.usage !== null) {
+           finalUsage = normalizeUsage5(chunk.usage);
+         }
+       }
+     } catch (err) {
+       throw normalizePerplexityError(err);
+     }
+     if (finalUsage !== void 0) {
+       yield { token: "", usage: finalUsage };
+     }
+   }
+   async function structured(messages, schema, options) {
+     const jsonSystemInstruction = {
+       role: "system",
+       content: "You must respond with valid JSON only. No explanations, no markdown code fences, no extra text. Your entire response must be valid JSON that can be parsed with JSON.parse()."
+     };
+     const augmentedMessages = [jsonSystemInstruction, ...messages];
+     const model = options?.model ?? config.model;
+     const chatMessages = buildMessages2(augmentedMessages);
+     const start = Date.now();
+     const extraParams = extractProviderOptions(options?.providerOptions);
+     const rawResponse = await withRetry(async () => {
+       try {
+         const params = {
+           model,
+           messages: chatMessages,
+           stream: false,
+           ...extraParams
+         };
+         const maxTokens = options?.maxTokens ?? config.maxTokens;
+         if (maxTokens !== void 0) params.max_tokens = maxTokens;
+         const temperature = options?.temperature ?? config.temperature;
+         if (temperature !== void 0) params.temperature = temperature;
+         return await client.chat.completions.create(
+           params
+         );
+       } catch (err) {
+         throw normalizePerplexityError(err);
+       }
+     }, retryOpts);
+     const rawContent = rawResponse.choices[0]?.message.content ?? "";
+     let parsed;
+     try {
+       const cleaned = rawContent.replace(/<think>[\s\S]*?<\/think>/i, "").replace(/^```(?:json)?\s*/i, "").replace(/\s*```$/, "").trim();
+       parsed = JSON.parse(cleaned);
+     } catch (err) {
+       throw new LlmError({
+         message: `Perplexity structured output: response is not valid JSON. Raw: ${rawContent.slice(0, 200)}`,
+         provider: PROVIDER5,
+         retryable: false,
+         cause: err
+       });
+     }
+     let data;
+     try {
+       data = schema.parse(parsed);
+     } catch (err) {
+       throw new LlmError({
+         message: `Perplexity structured output: response failed schema validation. ${String(err)}`,
+         provider: PROVIDER5,
+         retryable: false,
+         cause: err
+       });
+     }
+     return {
+       data,
+       usage: normalizeUsage5(rawResponse.usage),
+       latencyMs: Date.now() - start
+     };
+   }
    return {
-     get config() {
-       throw err;
-     },
-     complete: () => Promise.reject(err),
-     stream: () => rejectingStream(err),
-     structured: () => Promise.reject(err)
+     config,
+     complete,
+     stream,
+     structured
    };
  }
- function createPerplexityProvider(config) {
-   void config;
-   return notImplemented("perplexity");
- }
 
  // src/client.ts
  function createClient(config) {
package/dist/index.js.map CHANGED
@@ -1 +1 @@
- {"version":3,"sources":["../src/providers/anthropic.ts","../src/types.ts","../src/retry.ts","../src/providers/deepseek.ts","../src/providers/gemini.ts","../src/providers/openai.ts","../src/providers/stubs.ts","../src/client.ts"],"sourcesContent":[…]}
Anthropic, OpenAI, Gemini, and DeepSeek are available; Perplexity ships in a later week.`,\n provider,\n retryable: false,\n });\n\n // Return an object that throws on any method call.\n // The error is pre-constructed so stack traces point to the factory call site,\n // not the method call site — easier to debug misconfigured providers.\n return {\n get config(): LlmClientConfig {\n throw err;\n },\n complete: () => Promise.reject(err),\n stream: () => rejectingStream(err),\n structured: () => Promise.reject(err),\n };\n}\n\n/** Perplexity provider stub — later week. */\nexport function createPerplexityProvider(config: LlmClientConfig): LlmClient {\n void config;\n return notImplemented('perplexity');\n}\n","/**\n * Factory functions for LlmClient.\n *\n * createClient — dispatches to the correct provider implementation.\n * createClientFromEnv — convenience wrapper that reads API keys from env vars.\n *\n * Provider dispatch:\n * 'anthropic' → fully implemented (Week 2)\n * 'openai' → fully implemented (Week 2)\n * 'gemini' → fully implemented (Week 3)\n * 'deepseek' → fully implemented (Week 3)\n * 'perplexity' → stub, throws \"not yet implemented\" (later week)\n */\n\nimport { createAnthropicProvider } from './providers/anthropic.js';\nimport { createDeepSeekProvider } from './providers/deepseek.js';\nimport { createGeminiProvider } from './providers/gemini.js';\nimport { createOpenAIProvider } from './providers/openai.js';\nimport { createPerplexityProvider } from './providers/stubs.js';\nimport type { LlmClient, LlmClientConfig } from './types.js';\nimport { LlmError } from './types.js';\n\n/**\n * Create an LlmClient for the given provider and config.\n * Dispatches to the provider-specific implementation.\n *\n * Anthropic, OpenAI, Gemini, and DeepSeek are fully implemented.\n * Perplexity is a type-registered stub that throws \"not yet implemented\".\n */\nexport function createClient(config: LlmClientConfig): LlmClient {\n switch (config.provider) {\n case 'anthropic':\n return createAnthropicProvider(config);\n\n case 'openai':\n return createOpenAIProvider(config);\n\n case 'gemini':\n return createGeminiProvider(config);\n\n case 'deepseek':\n return createDeepSeekProvider(config);\n\n case 'perplexity':\n return createPerplexityProvider(config);\n\n default: {\n // TypeScript exhaustiveness check — if a new provider is added to the union\n // without a case here, this will be a compile-time error.\n const _exhaustive: never = config.provider;\n throw new LlmError({\n message: `[dlabs-toolkit] Unknown provider: ${String(_exhaustive)}`,\n provider: String(_exhaustive),\n retryable: false,\n });\n }\n }\n}\n\n/**\n * Convenience: create an LlmClient from environment variables.\n *\n * Reads API keys from the environment based on provider:\n * anthropic → ANTHROPIC_API_KEY\n * openai → OPENAI_API_KEY\n * gemini → GOOGLE_AI_API_KEY\n * deepseek → DEEPSEEK_API_KEY\n * perplexity → PERPLEXITY_API_KEY\n *\n * Throws LlmError if the required env var is not set.\n */\nexport function createClientFromEnv(\n provider: LlmClientConfig['provider'],\n model: string,\n overrides?: Partial<Omit<LlmClientConfig, 'provider' | 'model' | 'apiKey'>>\n): LlmClient {\n const apiKey = resolveApiKey(provider);\n return createClient({ provider, model, apiKey, ...overrides });\n}\n\n/** Read the API key for a given provider from environment variables. 
*/\nfunction resolveApiKey(provider: LlmClientConfig['provider']): string {\n const envVarMap: Record<LlmClientConfig['provider'], string> = {\n anthropic: 'ANTHROPIC_API_KEY',\n openai: 'OPENAI_API_KEY',\n gemini: 'GOOGLE_AI_API_KEY',\n deepseek: 'DEEPSEEK_API_KEY',\n perplexity: 'PERPLEXITY_API_KEY',\n };\n\n const envVar = envVarMap[provider];\n const apiKey = process.env[envVar];\n\n if (apiKey === undefined || apiKey.trim() === '') {\n throw new LlmError({\n message: `[dlabs-toolkit] ${envVar} is not set. Set this environment variable to use the ${provider} provider.`,\n provider,\n retryable: false,\n });\n }\n\n return apiKey;\n}\n"],"mappings":";AAcA,OAAO,eAAe;;;ACmCf,IAAM,WAAN,cAAuB,MAAM;AAAA,EAChB,OAAO;AAAA,EAChB;AAAA,EACA;AAAA,EACA;AAAA;AAAA;AAAA,EAGS;AAAA,EAElB,YAAY,MAMT;AACD,UAAM,KAAK,SAAS,EAAE,OAAO,KAAK,MAAM,CAAC;AACzC,SAAK,WAAW,KAAK;AACrB,SAAK,aAAa,KAAK;AACvB,SAAK,YAAY,KAAK;AACtB,SAAK,QAAQ,KAAK;AAAA,EACpB;AACF;;;AC1DA,IAAM,0BAA0B,oBAAI,IAAI,CAAC,KAAK,KAAK,KAAK,GAAG,CAAC;AAG5D,IAAM,wBAAwB,oBAAI,IAAI,CAAC,cAAc,aAAa,cAAc,CAAC;AAGjF,IAAM,8BAA8B,oBAAI,IAAI,CAAC,KAAK,KAAK,KAAK,GAAG,CAAC;AAGzD,SAAS,kBAAkB,YAA6B;AAC7D,MAAI,wBAAwB,IAAI,UAAU,EAAG,QAAO;AACpD,MAAI,4BAA4B,IAAI,UAAU,EAAG,QAAO;AAExD,SAAO,cAAc;AACvB;AAGO,SAAS,qBAAqB,MAAuB;AAC1D,SAAO,sBAAsB,IAAI,IAAI;AACvC;AAGO,SAAS,iBAAiB,SAAiB,aAA6B;AAC7E,QAAM,UAAU,cAAc,KAAK;AACnC,SAAO,KAAK,OAAO,IAAI;AACzB;AAcA,eAAsB,UACpB,IACA,MACY;AACZ,MAAI;AAEJ,WAAS,UAAU,GAAG,WAAW,KAAK,YAAY,WAAW;AAC3D,QAAI;AACF,aAAO,MAAM,GAAG,OAAO;AAAA,IACzB,SAAS,KAAK;AACZ,YAAM,SAAS,qBAAqB,KAAK,KAAK,QAAQ;AAEtD,UAAI,CAAC,OAAO,aAAa,YAAY,KAAK,YAAY;AACpD,cAAM;AAAA,MACR;AAEA,kBAAY;AACZ,YAAM,UAAU,iBAAiB,SAAS,KAAK,WAAW;AAC1D,YAAM,MAAM,OAAO;AAAA,IACrB;AAAA,EACF;AAIA,QACE,aACA,IAAI,SAAS;AAAA,IACX,SAAS;AAAA,IACT,UAAU,KAAK;AAAA,IACf,WAAW;AAAA,EACb,CAAC;AAEL;AAGO,SAAS,qBAAqB,KAAc,UAA4B;AAC7E,MAAI,eAAe,SAAU,QAAO;AAEpC,MAAI,eAAe,OAAO;AACxB,UAAM,cAAc;AAEpB,UAAM,aAAa,YAAY,UAAU,YAAY;AAGrD,QAAI,YAAY,SAAS,UAAa,qBAAqB,YAAY,IAAI,GAAG;AAC5E,UAAI,eAAe,QAAW;AAC5B,eAAO,IAAI,SAAS;AAAA,UAClB,SAAS,IAAI;AAAA,UACb;AAAA,UACA;AAAA,UACA,WAAW;AAAA,UACX,OAAO;AAAA,QACT,CAAC;AAAA,MACH;AACA,aAAO,IAAI,SAAS,EAAE,SAAS,IAAI,SAAS,UAAU,WAAW,MAAM,OAAO,IAAI,CAAC;AAAA,IACrF;AAGA,QAAI,eAAe,QAAW;AAC5B,aAAO,IAAI,SAAS;AAAA,QAClB,SAAS,IAAI;AAAA,QACb;AAAA,QACA;AAAA,QACA,WAAW,kBAAkB,UAAU;AAAA,QACvC,OAAO;AAAA,MACT,CAAC;AAAA,IACH;AAEA,WAAO,IAAI,SAAS;AAAA,MAClB,SAAS,IAAI;AAAA,MACb;AAAA,MACA,WAAW;AAAA,MACX,OAAO;AAAA,IACT,CAAC;AAAA,EACH;AAEA,SAAO,IAAI,SAAS;AAAA,IAClB,SAAS,OAAO,GAAG;AAAA,IACnB;AAAA,IACA,WAAW;AAAA,IACX,OAAO;AAAA,EACT,CAAC;AACH;AAEA,SAAS,MAAM,IAA2B;AACxC,SAAO,IAAI,QAAQ,CAAC,YAAY,WAAW,SAAS,EAAE,CAAC;AACzD;;;AF/GA,IAAM,WAAW;AAGjB,SAAS,eAAe,OAA8C;AACpE,QAAM,cAAc,OAAO,gBAAgB;AAC3C,QAAM,eAAe,OAAO,iBAAiB;AAC7C,SAAO;AAAA,IACL;AAAA,IACA;AAAA,IACA,aAAa,cAAc;AAAA;AAAA;AAAA,IAG3B,qBAAsB,OAClB;AAAA,IACJ,iBAAkB,OACd;AAAA,EACN;AACF;AAGA,SAAS,uBAAuB,UAG9B;AACA,QAAM,iBAAiB,SAAS,OAAO,CAAC,MAAM,EAAE,SAAS,QAAQ;AACjE,QAAM,uBAAuB,SAAS,OAAO,CAAC,MAAM,EAAE,SAAS,QAAQ;AAEvE,QAAM,SACJ,eAAe,SAAS,IAAI,eAAe,IAAI,CAAC,MAAM,EAAE,OAAO,EAAE,KAAK,IAAI,IAAI;AAEhF,QAAM,oBAA8C,qBAAqB,IAAI,CAAC,OAAO;AAAA,IACnF,MAAM,EAAE;AAAA,IACR,SAAS,EAAE;AAAA,EACb,EAAE;AAEF,SAAO,EAAE,QAAQ,UAAU,kBAAkB;AAC/C;AAMO,SAAS,wBAAwB,KAAwB;AAC9D,MAAI,eAAe,SAAU,QAAO;AAKpC,MACE,OAAO,UAAU,uBAAuB,cACxC,eAAe,UAAU,oBACzB;AACA,WAAO,IAAI,SAAS;AAAA,MAClB,SAAS,IAAI;AAAA,MACb,UAAU;AAAA,MACV,WAAW;AAAA,MACX,OAAO;AAAA,IACT,CAAC;AAAA,EACH;AAIA,MAAI,OAAO,UAAU,aAAa,cAAc,eAAe,UAAU,UAAU;AACjF,UAAM,SAA6B,IAAI;AACvC,QAAI,WAAW,QAAW;AACxB,YAAM,YAAY,CAAC,KAAK,KAAK,KAAK,GAAG,EAAE,SAAS,MAAM,KAAK,UAAU;AA
CrE,aAAO,IAAI,SAAS;AAAA,QAClB,SAAS,IAAI;AAAA,QACb,UAAU;AAAA,QACV,YAAY;AAAA,QACZ;AAAA,QACA,OAAO;AAAA,MACT,CAAC;AAAA,IACH;AACA,WAAO,IAAI,SAAS,EAAE,SAAS,IAAI,SAAS,UAAU,UAAU,WAAW,OAAO,OAAO,IAAI,CAAC;AAAA,EAChG;AAEA,SAAO,qBAAqB,KAAK,QAAQ;AAC3C;AAGO,SAAS,wBAAwB,QAAoC;AAC1E,QAAM,SAAS,IAAI,UAAU;AAAA,IAC3B,QAAQ,OAAO;AAAA,IACf,SAAS,OAAO,aAAa;AAAA,IAC7B,YAAY;AAAA;AAAA,EACd,CAAC;AAED,QAAM,YAAY;AAAA,IAChB,YAAY,OAAO,cAAc;AAAA,IACjC,aAAa,OAAO,eAAe;AAAA,IACnC,UAAU;AAAA,EACZ;AAEA,iBAAe,SACb,UACA,SACsB;AACtB,UAAM,QAAQ,SAAS,SAAS,OAAO;AACvC,UAAM,EAAE,QAAQ,UAAU,kBAAkB,IAAI,uBAAuB,QAAQ;AAE/E,UAAM,QAAQ,KAAK,IAAI;AAEvB,WAAO,UAAU,YAAY;AAC3B,UAAI;AACF,cAAM,SAAoD;AAAA,UACxD;AAAA,UACA,UAAU;AAAA,UACV,YAAY,SAAS,aAAa,OAAO,aAAa;AAAA,QACxD;AAEA,YAAI,WAAW,OAAW,QAAO,SAAS;AAC1C,cAAM,cAAc,SAAS,eAAe,OAAO;AACnD,YAAI,gBAAgB,QAAW;AAC7B,iBAAO,cAAc;AAAA,QACvB;AAEA,cAAM,WAAW,MAAM,OAAO,SAAS,OAAO,MAAM;AAEpD,cAAM,UAAU,SAAS,QACtB,OAAO,CAAC,UAAwC,MAAM,SAAS,MAAM,EACrE,IAAI,CAAC,UAAU,MAAM,IAAI,EACzB,KAAK,EAAE;AAEV,eAAO;AAAA,UACL;AAAA,UACA,OAAO,SAAS;AAAA,UAChB,OAAO,eAAe,SAAS,KAAK;AAAA,UACpC,WAAW,KAAK,IAAI,IAAI;AAAA,QAC1B;AAAA,MACF,SAAS,KAAK;AACZ,cAAM,wBAAwB,GAAG;AAAA,MACnC;AAAA,IACF,GAAG,SAAS;AAAA,EACd;AAEA,kBAAgB,OACd,UACA,SACgC;AAChC,UAAM,QAAQ,SAAS,SAAS,OAAO;AACvC,UAAM,EAAE,QAAQ,UAAU,kBAAkB,IAAI,uBAAuB,QAAQ;AAE/E,UAAM,SAAwC;AAAA,MAC5C;AAAA,MACA,UAAU;AAAA,MACV,YAAY,SAAS,aAAa,OAAO,aAAa;AAAA,IACxD;AAEA,QAAI,WAAW,OAAW,QAAO,SAAS;AAC1C,UAAM,oBAAoB,SAAS,eAAe,OAAO;AACzD,QAAI,sBAAsB,QAAW;AACnC,aAAO,cAAc;AAAA,IACvB;AAEA,QAAI;AAEJ,QAAI;AACF,kBAAY,OAAO,SAAS,OAAO,MAAM;AAAA,IAC3C,SAAS,KAAK;AACZ,YAAM,wBAAwB,GAAG;AAAA,IACnC;AAGA,QAAI;AAEJ,QAAI;AACF,uBAAiB,SAAS,WAAW;AACnC,YAAI,MAAM,SAAS,yBAAyB,MAAM,MAAM,SAAS,cAAc;AAC7E,gBAAM,EAAE,OAAO,MAAM,MAAM,KAAK;AAAA,QAClC,WAAW,MAAM,SAAS,mBAAmB,WAAW,OAAO;AAE7D,gBAAM,QAAQ,MAAM,UAAU,aAAa;AAC3C,uBAAa,eAAe,MAAM,KAAK;AAAA,QACzC;AAAA,MACF;AAAA,IACF,SAAS,KAAK;AAGZ,YAAM,wBAAwB,GAAG;AAAA,IACnC;AAGA,QAAI,eAAe,QAAW;AAC5B,YAAM,EAAE,OAAO,IAAI,OAAO,WAAW;AAAA,IACvC;AAAA,EACF;AAEA,iBAAe,WACb,UACA,QACA,SACmC;AAGnC,UAAM,wBAAoC;AAAA,MACxC,MAAM;AAAA,MACN,SACE;AAAA,IACJ;AAEA,UAAM,oBAAoB,CAAC,uBAAuB,GAAG,QAAQ;AAC7D,UAAM,QAAQ,KAAK,IAAI;AAEvB,UAAM,WAAW,MAAM,SAAS,mBAAmB,OAAO;AAE1D,QAAI;AACJ,QAAI;AAEF,YAAM,UAAU,SAAS,QACtB,QAAQ,qBAAqB,EAAE,EAC/B,QAAQ,WAAW,EAAE,EACrB,KAAK;AACR,eAAS,KAAK,MAAM,OAAO;AAAA,IAC7B,SAAS,KAAK;AACZ,YAAM,IAAI,SAAS;AAAA,QACjB,SAAS,iEAAiE,SAAS,QAAQ,MAAM,GAAG,GAAG,CAAC;AAAA,QACxG,UAAU;AAAA,QACV,WAAW;AAAA,QACX,OAAO;AAAA,MACT,CAAC;AAAA,IACH;AAEA,QAAI;AACJ,QAAI;AACF,aAAO,OAAO,MAAM,MAAM;AAAA,IAC5B,SAAS,KAAK;AACZ,YAAM,IAAI,SAAS;AAAA,QACjB,SAAS,mEAAmE,OAAO,GAAG,CAAC;AAAA,QACvF,UAAU;AAAA,QACV,WAAW;AAAA,QACX,OAAO;AAAA,MACT,CAAC;AAAA,IACH;AAEA,WAAO;AAAA,MACL;AAAA,MACA,OAAO,SAAS;AAAA,MAChB,WAAW,KAAK,IAAI,IAAI;AAAA,IAC1B;AAAA,EACF;AAEA,SAAO;AAAA,IACL;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,EACF;AACF;;;AG1PA,OAAO,YAAY;AAanB,IAAMA,YAAW;AACjB,IAAM,oBAAoB;AAG1B,SAASC,gBAAe,OAA4D;AAClF,QAAM,cAAc,OAAO,iBAAiB;AAC5C,QAAM,eAAe,OAAO,qBAAqB;AACjD,SAAO;AAAA,IACL;AAAA,IACA;AAAA,IACA,aAAa,OAAO,gBAAgB,cAAc;AAAA,EACpD;AACF;AAGA,SAAS,cAAc,UAAkE;AACvF,SAAO,SAAS,IAAI,CAAC,OAAO;AAAA,IAC1B,MAAM,EAAE;AAAA,IACR,SAAS,EAAE;AAAA,EACb,EAAE;AACJ;AASO,SAAS,uBAAuB,KAAwB;AAC7D,MAAI,eAAe,SAAU,QAAO;AAIpC,MAAI,OAAO,OAAO,uBAAuB,cAAc,eAAe,OAAO,oBAAoB;AAC/F,WAAO,IAAI,SAAS;AAAA,MAClB,SAAS,IAAI;AAAA,MACb,UAAUD;AAAA,MACV,WAAW;AAAA,MACX,OAAO;AAAA,IACT,CAAC;AAAA,EACH;AAGA,MAAI,OAAO,OAAO,aAAa,cAAc,eAAe,OAAO,UAAU;AAC3E,UAAM,SAA6B,IAAI;AACvC,QAAI,WAAW,QAAW;AACxB,YAAM,YAAY,CAAC,KAAK,KAAK,KAAK,GAAG,EAAE,SAAS,MAAM,KAAK,UAAU;AACrE,aAAO,IAAI,SAAS;AAAA,QAClB,SAAS,IAAI;AAAA,QACb,UAAUA;AAAA,QAC
V,YAAY;AAAA,QACZ;AAAA,QACA,OAAO;AAAA,MACT,CAAC;AAAA,IACH;AACA,WAAO,IAAI,SAAS,EAAE,SAAS,IAAI,SAAS,UAAUA,WAAU,WAAW,OAAO,OAAO,IAAI,CAAC;AAAA,EAChG;AAEA,SAAO,qBAAqB,KAAKA,SAAQ;AAC3C;AAGO,SAAS,uBAAuB,QAAoC;AAEzE,QAAM,SAAS,IAAI,OAAO;AAAA,IACxB,QAAQ,OAAO;AAAA,IACf,SAAS;AAAA,IACT,SAAS,OAAO,aAAa;AAAA,IAC7B,YAAY;AAAA;AAAA,EACd,CAAC;AAED,QAAM,YAAY;AAAA,IAChB,YAAY,OAAO,cAAc;AAAA,IACjC,aAAa,OAAO,eAAe;AAAA,IACnC,UAAUA;AAAA,EACZ;AAEA,iBAAe,SACb,UACA,SACsB;AACtB,UAAM,QAAQ,SAAS,SAAS,OAAO;AACvC,UAAM,eAAe,cAAc,QAAQ;AAC3C,UAAM,QAAQ,KAAK,IAAI;AAEvB,WAAO,UAAU,YAAY;AAC3B,UAAI;AACF,cAAM,SAA6D;AAAA,UACjE;AAAA,UACA,UAAU;AAAA,UACV,QAAQ;AAAA,QACV;AAEA,cAAM,YAAY,SAAS,aAAa,OAAO;AAC/C,YAAI,cAAc,OAAW,QAAO,aAAa;AAEjD,cAAM,cAAc,SAAS,eAAe,OAAO;AACnD,YAAI,gBAAgB,OAAW,QAAO,cAAc;AAEpD,cAAM,WAAW,MAAM,OAAO,KAAK,YAAY,OAAO,MAAM;AAC5D,cAAM,UAAU,SAAS,QAAQ,IAAI,CAAC,MAAM,EAAE,QAAQ,WAAW,EAAE,EAAE,KAAK,EAAE;AAE5E,eAAO;AAAA,UACL;AAAA,UACA,OAAO,SAAS;AAAA,UAChB,OAAOC,gBAAe,SAAS,KAAK;AAAA,UACpC,WAAW,KAAK,IAAI,IAAI;AAAA,QAC1B;AAAA,MACF,SAAS,KAAK;AACZ,cAAM,uBAAuB,GAAG;AAAA,MAClC;AAAA,IACF,GAAG,SAAS;AAAA,EACd;AAEA,kBAAgB,OACd,UACA,SACgC;AAChC,UAAM,QAAQ,SAAS,SAAS,OAAO;AACvC,UAAM,eAAe,cAAc,QAAQ;AAE3C,UAAM,SAA0D;AAAA,MAC9D;AAAA,MACA,UAAU;AAAA,MACV,QAAQ;AAAA,MACR,gBAAgB,EAAE,eAAe,KAAK;AAAA,IACxC;AAEA,UAAM,YAAY,SAAS,aAAa,OAAO;AAC/C,QAAI,cAAc,OAAW,QAAO,aAAa;AAEjD,UAAM,cAAc,SAAS,eAAe,OAAO;AACnD,QAAI,gBAAgB,OAAW,QAAO,cAAc;AAEpD,QAAI;AAEJ,QAAI;AACF,kBAAY,MAAM,OAAO,KAAK,YAAY,OAAO,MAAM;AAAA,IACzD,SAAS,KAAK;AACZ,YAAM,uBAAuB,GAAG;AAAA,IAClC;AAEA,QAAI;AAEJ,QAAI;AACF,uBAAiB,SAAS,WAAW;AACnC,cAAM,QAAQ,MAAM,QAAQ,CAAC,GAAG,MAAM;AACtC,YAAI,UAAU,UAAa,UAAU,QAAQ,MAAM,SAAS,GAAG;AAC7D,gBAAM,EAAE,OAAO,MAAM;AAAA,QACvB;AAGA,YAAI,MAAM,UAAU,UAAa,MAAM,UAAU,MAAM;AACrD,uBAAaA,gBAAe,MAAM,KAAK;AAAA,QACzC;AAAA,MACF;AAAA,IACF,SAAS,KAAK;AACZ,YAAM,uBAAuB,GAAG;AAAA,IAClC;AAEA,QAAI,eAAe,QAAW;AAC5B,YAAM,EAAE,OAAO,IAAI,OAAO,WAAW;AAAA,IACvC;AAAA,EACF;AAEA,iBAAe,WACb,UACA,QACA,SACmC;AAGnC,UAAM,wBAAoC;AAAA,MACxC,MAAM;AAAA,MACN,SACE;AAAA,IACJ;AAEA,UAAM,oBAAoB,CAAC,uBAAuB,GAAG,QAAQ;AAC7D,UAAM,QAAQ,SAAS,SAAS,OAAO;AACvC,UAAM,eAAe,cAAc,iBAAiB;AACpD,UAAM,QAAQ,KAAK,IAAI;AAEvB,UAAM,cAAc,MAAM,UAAU,YAAY;AAC9C,UAAI;AACF,cAAM,SAA6D;AAAA,UACjE;AAAA,UACA,UAAU;AAAA,UACV,QAAQ;AAAA,QACV;AAEA,cAAM,YAAY,SAAS,aAAa,OAAO;AAC/C,YAAI,cAAc,OAAW,QAAO,aAAa;AAEjD,cAAM,cAAc,SAAS,eAAe,OAAO;AACnD,YAAI,gBAAgB,OAAW,QAAO,cAAc;AAEpD,eAAO,MAAM,OAAO,KAAK,YAAY,OAAO,MAAM;AAAA,MACpD,SAAS,KAAK;AACZ,cAAM,uBAAuB,GAAG;AAAA,MAClC;AAAA,IACF,GAAG,SAAS;AAEZ,UAAM,aAAa,YAAY,QAAQ,CAAC,GAAG,QAAQ,WAAW;AAE9D,QAAI;AACJ,QAAI;AAEF,YAAM,UAAU,WACb,QAAQ,qBAAqB,EAAE,EAC/B,QAAQ,WAAW,EAAE,EACrB,KAAK;AACR,eAAS,KAAK,MAAM,OAAO;AAAA,IAC7B,SAAS,KAAK;AACZ,YAAM,IAAI,SAAS;AAAA,QACjB,SAAS,gEAAgE,WAAW,MAAM,GAAG,GAAG,CAAC;AAAA,QACjG,UAAUD;AAAA,QACV,WAAW;AAAA,QACX,OAAO;AAAA,MACT,CAAC;AAAA,IACH;AAEA,QAAI;AACJ,QAAI;AACF,aAAO,OAAO,MAAM,MAAM;AAAA,IAC5B,SAAS,KAAK;AACZ,YAAM,IAAI,SAAS;AAAA,QACjB,SAAS,kEAAkE,OAAO,GAAG,CAAC;AAAA,QACtF,UAAUA;AAAA,QACV,WAAW;AAAA,QACX,OAAO;AAAA,MACT,CAAC;AAAA,IACH;AAEA,WAAO;AAAA,MACL;AAAA,MACA,OAAOC,gBAAe,YAAY,KAAK;AAAA,MACvC,WAAW,KAAK,IAAI,IAAI;AAAA,IAC1B;AAAA,EACF;AAEA,SAAO;AAAA,IACL;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,EACF;AACF;;;AC/PA;AAAA,EACE;AAAA,EAKA;AAAA,OACK;AAaP,IAAMC,YAAW;AAGjB,SAASC,gBAAe,MAAkE;AACxF,QAAM,cAAc,MAAM,oBAAoB;AAC9C,QAAM,eAAe,MAAM,wBAAwB;AACnD,SAAO;AAAA,IACL;AAAA,IACA;AAAA,IACA,aAAa,MAAM,mBAAmB,cAAc;AAAA,EACtD;AACF;AAOA,SAAS,oBAAoB,UAG3B;AACA,QAAM,iBAAiB,SAAS,OAAO,CAAC,MAAM,EAAE,SAAS,QAAQ;AACjE,QAAM,uBAAuB,SAAS,OAAO,CAAC,MAAM,EAAE,SAAS,QAAQ;AAEvE,QAAM,SACJ,eAAe,SAAS,IAAI,eAAe,IAAI,CAAC,MAAM,EAAE,OAAO,EAAE,KAAK,IAAI,I
AAI;AAEhF,QAAM,WAAsB,qBAAqB,IAAI,CAAC,OAAO;AAAA,IAC3D,MAAM,EAAE,SAAS,cAAc,UAAU;AAAA,IACzC,OAAO,CAAC,EAAE,MAAM,EAAE,QAAQ,CAAC;AAAA,EAC7B,EAAE;AAEF,SAAO,EAAE,QAAQ,SAAS;AAC5B;AAUO,SAAS,qBAAqB,KAAwB;AAC3D,MAAI,eAAe,SAAU,QAAO;AAIpC,MAAI,eAAe,UAAU;AAC3B,UAAM,YAAY,IAAI,WAAW,OAAO,IAAI,UAAU;AACtD,WAAO,IAAI,SAAS;AAAA,MAClB,SAAS,IAAI;AAAA,MACb,UAAUD;AAAA,MACV,YAAY,IAAI;AAAA,MAChB;AAAA,MACA,OAAO;AAAA,IACT,CAAC;AAAA,EACH;AAIA,SAAO,qBAAqB,KAAKA,SAAQ;AAC3C;AAGO,SAAS,qBAAqB,QAAoC;AACvE,QAAM,KAAK,IAAI,YAAY;AAAA,IACzB,QAAQ,OAAO;AAAA,IACf,aAAa;AAAA,MACX,SAAS,OAAO,aAAa;AAAA,IAC/B;AAAA,EACF,CAAC;AAED,QAAM,YAAY;AAAA,IAChB,YAAY,OAAO,cAAc;AAAA,IACjC,aAAa,OAAO,eAAe;AAAA,IACnC,UAAUA;AAAA,EACZ;AAEA,iBAAe,SACb,UACA,SACsB;AACtB,UAAM,QAAQ,SAAS,SAAS,OAAO;AACvC,UAAM,EAAE,QAAQ,SAAS,IAAI,oBAAoB,QAAQ;AACzD,UAAM,QAAQ,KAAK,IAAI;AAEvB,WAAO,UAAU,YAAY;AAC3B,UAAI;AAEF,cAAM,eAAsC,CAAC;AAE7C,YAAI,WAAW,OAAW,cAAa,oBAAoB;AAC3D,cAAM,YAAY,SAAS,aAAa,OAAO;AAC/C,YAAI,cAAc,OAAW,cAAa,kBAAkB;AAC5D,cAAM,cAAc,SAAS,eAAe,OAAO;AACnD,YAAI,gBAAgB,OAAW,cAAa,cAAc;AAE1D,cAAM,WAAW,MAAM,GAAG,OAAO,gBAAgB;AAAA,UAC/C;AAAA,UACA;AAAA,UACA,QAAQ;AAAA,QACV,CAAC;AAED,eAAO;AAAA,UACL,SAAS,SAAS,QAAQ;AAAA,UAC1B;AAAA,UACA,OAAOC,gBAAe,SAAS,aAAa;AAAA,UAC5C,WAAW,KAAK,IAAI,IAAI;AAAA,QAC1B;AAAA,MACF,SAAS,KAAK;AACZ,cAAM,qBAAqB,GAAG;AAAA,MAChC;AAAA,IACF,GAAG,SAAS;AAAA,EACd;AAEA,kBAAgB,OACd,UACA,SACgC;AAChC,UAAM,QAAQ,SAAS,SAAS,OAAO;AACvC,UAAM,EAAE,QAAQ,SAAS,IAAI,oBAAoB,QAAQ;AAGzD,UAAM,eAAsC,CAAC;AAC7C,QAAI,WAAW,OAAW,cAAa,oBAAoB;AAC3D,UAAM,YAAY,SAAS,aAAa,OAAO;AAC/C,QAAI,cAAc,OAAW,cAAa,kBAAkB;AAC5D,UAAM,cAAc,SAAS,eAAe,OAAO;AACnD,QAAI,gBAAgB,OAAW,cAAa,cAAc;AAE1D,QAAI;AAEJ,QAAI;AACF,kBAAY,MAAM,GAAG,OAAO,sBAAsB;AAAA,QAChD;AAAA,QACA;AAAA,QACA,QAAQ;AAAA,MACV,CAAC;AAAA,IACH,SAAS,KAAK;AACZ,YAAM,qBAAqB,GAAG;AAAA,IAChC;AAEA,QAAI;AAEJ,QAAI;AACF,uBAAiB,SAAS,WAAW;AACnC,cAAM,OAAO,MAAM;AACnB,YAAI,SAAS,UAAa,KAAK,SAAS,GAAG;AACzC,gBAAM,EAAE,OAAO,KAAK;AAAA,QACtB;AAEA,YAAI,MAAM,kBAAkB,QAAW;AACrC,uBAAaA,gBAAe,MAAM,aAAa;AAAA,QACjD;AAAA,MACF;AAAA,IACF,SAAS,KAAK;AACZ,YAAM,qBAAqB,GAAG;AAAA,IAChC;AAEA,QAAI,eAAe,QAAW;AAC5B,YAAM,EAAE,OAAO,IAAI,OAAO,WAAW;AAAA,IACvC;AAAA,EACF;AAEA,iBAAe,WACb,UACA,QACA,SACmC;AACnC,UAAM,oBAAkC;AAAA,MACtC;AAAA,QACE,MAAM;AAAA,QACN,SACE;AAAA,MACJ;AAAA,MACA,GAAG;AAAA,IACL;AAEA,UAAM,QAAQ,SAAS,SAAS,OAAO;AACvC,UAAM,EAAE,QAAQ,SAAS,IAAI,oBAAoB,iBAAiB;AAClE,UAAM,QAAQ,KAAK,IAAI;AAEvB,UAAM,cAAc,MAAM,UAAU,YAAY;AAC9C,UAAI;AACF,cAAM,eAAsC;AAAA;AAAA,UAE1C,kBAAkB;AAAA,QACpB;AAEA,YAAI,WAAW,OAAW,cAAa,oBAAoB;AAC3D,cAAM,YAAY,SAAS,aAAa,OAAO;AAC/C,YAAI,cAAc,OAAW,cAAa,kBAAkB;AAC5D,cAAM,cAAc,SAAS,eAAe,OAAO;AACnD,YAAI,gBAAgB,OAAW,cAAa,cAAc;AAE1D,eAAO,MAAM,GAAG,OAAO,gBAAgB;AAAA,UACrC;AAAA,UACA;AAAA,UACA,QAAQ;AAAA,QACV,CAAC;AAAA,MACH,SAAS,KAAK;AACZ,cAAM,qBAAqB,GAAG;AAAA,MAChC;AAAA,IACF,GAAG,SAAS;AAEZ,UAAM,aAAa,YAAY,QAAQ;AAEvC,QAAI;AACJ,QAAI;AAEF,YAAM,UAAU,WACb,QAAQ,qBAAqB,EAAE,EAC/B,QAAQ,WAAW,EAAE,EACrB,KAAK;AACR,eAAS,KAAK,MAAM,OAAO;AAAA,IAC7B,SAAS,KAAK;AACZ,YAAM,IAAI,SAAS;AAAA,QACjB,SAAS,8DAA8D,WAAW,MAAM,GAAG,GAAG,CAAC;AAAA,QAC/F,UAAUD;AAAA,QACV,WAAW;AAAA,QACX,OAAO;AAAA,MACT,CAAC;AAAA,IACH;AAEA,QAAI;AACJ,QAAI;AACF,aAAO,OAAO,MAAM,MAAM;AAAA,IAC5B,SAAS,KAAK;AACZ,YAAM,IAAI,SAAS;AAAA,QACjB,SAAS,gEAAgE,OAAO,GAAG,CAAC;AAAA,QACpF,UAAUA;AAAA,QACV,WAAW;AAAA,QACX,OAAO;AAAA,MACT,CAAC;AAAA,IACH;AAEA,WAAO;AAAA,MACL;AAAA,MACA,OAAOC,gBAAe,YAAY,aAAa;AAAA,MAC/C,WAAW,KAAK,IAAI,IAAI;AAAA,IAC1B;AAAA,EACF;AAEA,SAAO;AAAA,IACL;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,EACF;AACF;;;AC7RA,OAAOC,aAAY;AAanB,IAAMC,YAAW;AAGjB,SAASC,gBAAe,OAA4D;AAClF,QAAM,cAAc,OAAO,iBAAiB;AAC5C,QAAM,eAAe,OAAO,qBAAqB;AACjD,SAAO;AAAA,IACL;AAAA,IACA;AAAA,IACA,
aAAa,OAAO,gBAAgB,cAAc;AAAA,EACpD;AACF;AAGA,SAAS,oBAAoB,UAAkE;AAC7F,SAAO,SAAS,IAAI,CAAC,OAAO;AAAA,IAC1B,MAAM,EAAE;AAAA,IACR,SAAS,EAAE;AAAA,EACb,EAAE;AACJ;AAMO,SAAS,qBAAqB,KAAwB;AAC3D,MAAI,eAAe,SAAU,QAAO;AAKpC,MAAI,OAAOC,QAAO,uBAAuB,cAAc,eAAeA,QAAO,oBAAoB;AAC/F,WAAO,IAAI,SAAS;AAAA,MAClB,SAAS,IAAI;AAAA,MACb,UAAUF;AAAA,MACV,WAAW;AAAA,MACX,OAAO;AAAA,IACT,CAAC;AAAA,EACH;AAIA,MAAI,OAAOE,QAAO,aAAa,cAAc,eAAeA,QAAO,UAAU;AAC3E,UAAM,SAA6B,IAAI;AACvC,QAAI,WAAW,QAAW;AACxB,YAAM,YAAY,CAAC,KAAK,KAAK,KAAK,GAAG,EAAE,SAAS,MAAM,KAAK,UAAU;AACrE,aAAO,IAAI,SAAS;AAAA,QAClB,SAAS,IAAI;AAAA,QACb,UAAUF;AAAA,QACV,YAAY;AAAA,QACZ;AAAA,QACA,OAAO;AAAA,MACT,CAAC;AAAA,IACH;AACA,WAAO,IAAI,SAAS,EAAE,SAAS,IAAI,SAAS,UAAUA,WAAU,WAAW,OAAO,OAAO,IAAI,CAAC;AAAA,EAChG;AAEA,SAAO,qBAAqB,KAAKA,SAAQ;AAC3C;AAGO,SAAS,qBAAqB,QAAoC;AACvE,QAAM,SAAS,IAAIE,QAAO;AAAA,IACxB,QAAQ,OAAO;AAAA,IACf,SAAS,OAAO,aAAa;AAAA,IAC7B,YAAY;AAAA;AAAA,EACd,CAAC;AAED,QAAM,YAAY;AAAA,IAChB,YAAY,OAAO,cAAc;AAAA,IACjC,aAAa,OAAO,eAAe;AAAA,IACnC,UAAUF;AAAA,EACZ;AAEA,iBAAe,SACb,UACA,SACsB;AACtB,UAAM,QAAQ,SAAS,SAAS,OAAO;AACvC,UAAM,iBAAiB,oBAAoB,QAAQ;AACnD,UAAM,QAAQ,KAAK,IAAI;AAEvB,WAAO,UAAU,YAAY;AAC3B,UAAI;AACF,cAAM,SAA6D;AAAA,UACjE;AAAA,UACA,UAAU;AAAA,UACV,QAAQ;AAAA,QACV;AAEA,cAAM,YAAY,SAAS,aAAa,OAAO;AAC/C,YAAI,cAAc,OAAW,QAAO,aAAa;AAEjD,cAAM,cAAc,SAAS,eAAe,OAAO;AACnD,YAAI,gBAAgB,OAAW,QAAO,cAAc;AAEpD,cAAM,WAAW,MAAM,OAAO,KAAK,YAAY,OAAO,MAAM;AAE5D,cAAM,UAAU,SAAS,QAAQ,IAAI,CAAC,MAAM,EAAE,QAAQ,WAAW,EAAE,EAAE,KAAK,EAAE;AAE5E,eAAO;AAAA,UACL;AAAA,UACA,OAAO,SAAS;AAAA,UAChB,OAAOC,gBAAe,SAAS,KAAK;AAAA,UACpC,WAAW,KAAK,IAAI,IAAI;AAAA,QAC1B;AAAA,MACF,SAAS,KAAK;AACZ,cAAM,qBAAqB,GAAG;AAAA,MAChC;AAAA,IACF,GAAG,SAAS;AAAA,EACd;AAEA,kBAAgB,OACd,UACA,SACgC;AAChC,UAAM,QAAQ,SAAS,SAAS,OAAO;AACvC,UAAM,iBAAiB,oBAAoB,QAAQ;AAEnD,UAAM,SAA0D;AAAA,MAC9D;AAAA,MACA,UAAU;AAAA,MACV,QAAQ;AAAA,MACR,gBAAgB,EAAE,eAAe,KAAK;AAAA,IACxC;AAEA,UAAM,YAAY,SAAS,aAAa,OAAO;AAC/C,QAAI,cAAc,OAAW,QAAO,aAAa;AAEjD,UAAM,cAAc,SAAS,eAAe,OAAO;AACnD,QAAI,gBAAgB,OAAW,QAAO,cAAc;AAEpD,QAAI;AAEJ,QAAI;AACF,kBAAY,MAAM,OAAO,KAAK,YAAY,OAAO,MAAM;AAAA,IACzD,SAAS,KAAK;AACZ,YAAM,qBAAqB,GAAG;AAAA,IAChC;AAEA,QAAI;AAEJ,QAAI;AACF,uBAAiB,SAAS,WAAW;AAEnC,cAAM,QAAQ,MAAM,QAAQ,CAAC,GAAG,MAAM;AACtC,YAAI,UAAU,UAAa,UAAU,QAAQ,MAAM,SAAS,GAAG;AAC7D,gBAAM,EAAE,OAAO,MAAM;AAAA,QACvB;AAGA,YAAI,MAAM,UAAU,UAAa,MAAM,UAAU,MAAM;AACrD,uBAAaA,gBAAe,MAAM,KAAK;AAAA,QACzC;AAAA,MACF;AAAA,IACF,SAAS,KAAK;AACZ,YAAM,qBAAqB,GAAG;AAAA,IAChC;AAGA,QAAI,eAAe,QAAW;AAC5B,YAAM,EAAE,OAAO,IAAI,OAAO,WAAW;AAAA,IACvC;AAAA,EACF;AAEA,iBAAe,WACb,UACA,QACA,SACmC;AAGnC,UAAM,wBAAoC;AAAA,MACxC,MAAM;AAAA,MACN,SACE;AAAA,IACJ;AAEA,UAAM,oBAAoB,CAAC,uBAAuB,GAAG,QAAQ;AAC7D,UAAM,QAAQ,SAAS,SAAS,OAAO;AACvC,UAAM,iBAAiB,oBAAoB,iBAAiB;AAC5D,UAAM,QAAQ,KAAK,IAAI;AAEvB,UAAM,cAAc,MAAM,UAAU,YAAY;AAC9C,UAAI;AACF,cAAM,SAA6D;AAAA,UACjE;AAAA,UACA,UAAU;AAAA,UACV,QAAQ;AAAA,UACR,iBAAiB,EAAE,MAAM,cAAc;AAAA,QACzC;AAEA,cAAM,YAAY,SAAS,aAAa,OAAO;AAC/C,YAAI,cAAc,OAAW,QAAO,aAAa;AAEjD,cAAM,cAAc,SAAS,eAAe,OAAO;AACnD,YAAI,gBAAgB,OAAW,QAAO,cAAc;AAEpD,eAAO,MAAM,OAAO,KAAK,YAAY,OAAO,MAAM;AAAA,MACpD,SAAS,KAAK;AACZ,cAAM,qBAAqB,GAAG;AAAA,MAChC;AAAA,IACF,GAAG,SAAS;AAEZ,UAAM,aAAa,YAAY,QAAQ,CAAC,GAAG,QAAQ,WAAW;AAE9D,QAAI;AACJ,QAAI;AACF,eAAS,KAAK,MAAM,UAAU;AAAA,IAChC,SAAS,KAAK;AACZ,YAAM,IAAI,SAAS;AAAA,QACjB,SAAS,8DAA8D,WAAW,MAAM,GAAG,GAAG,CAAC;AAAA,QAC/F,UAAUD;AAAA,QACV,WAAW;AAAA,QACX,OAAO;AAAA,MACT,CAAC;AAAA,IACH;AAEA,QAAI;AACJ,QAAI;AACF,aAAO,OAAO,MAAM,MAAM;AAAA,IAC5B,SAAS,KAAK;AACZ,YAAM,IAAI,SAAS;AAAA,QACjB,SAAS,gEAAgE,OAAO,GAAG,CAAC;AAAA,QACpF,UAAUA;AAAA,QACV,WAAW;AAAA,QACX,OAAO;AAAA,MACT,CAAC;AAAA,IACH;AAEA,WAAO;AAAA,MACL;AAAA,MACA,OAAOC,
gBAAe,YAAY,KAAK;AAAA,MACvC,WAAW,KAAK,IAAI,IAAI;AAAA,IAC1B;AAAA,EACF;AAEA,SAAO;AAAA,IACL;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,EACF;AACF;;;AC9PA,SAAS,gBAAgB,KAA+C;AACtE,QAAM,WAAW,QAAQ,OAAuC,GAAG;AAGnE,WAAS,MAAM,MAAM,MAAS;AAC9B,SAAO;AAAA,IACL,MAAM,MAAM;AAAA,IACZ,QAAQ,MAAM,QAAQ,QAAQ,EAAE,OAAO,QAAW,MAAM,KAAc,CAAC;AAAA,IACvE,OAAO,MAAM,QAAQ,OAAO,GAAG;AAAA,IAC/B,CAAC,OAAO,aAAa,IAAI;AACvB,aAAO;AAAA,IACT;AAAA,IACA,CAAC,OAAO,YAAY,GAAG,YAAY;AAAA,EACrC;AACF;AAEA,SAAS,eAAe,UAA6B;AACnD,QAAM,MAAM,IAAI,SAAS;AAAA,IACvB,SAAS,6BAA6B,QAAQ;AAAA,IAC9C;AAAA,IACA,WAAW;AAAA,EACb,CAAC;AAKD,SAAO;AAAA,IACL,IAAI,SAA0B;AAC5B,YAAM;AAAA,IACR;AAAA,IACA,UAAU,MAAM,QAAQ,OAAO,GAAG;AAAA,IAClC,QAAQ,MAAM,gBAAgB,GAAG;AAAA,IACjC,YAAY,MAAM,QAAQ,OAAO,GAAG;AAAA,EACtC;AACF;AAGO,SAAS,yBAAyB,QAAoC;AAC3E,OAAK;AACL,SAAO,eAAe,YAAY;AACpC;;;AC9BO,SAAS,aAAa,QAAoC;AAC/D,UAAQ,OAAO,UAAU;AAAA,IACvB,KAAK;AACH,aAAO,wBAAwB,MAAM;AAAA,IAEvC,KAAK;AACH,aAAO,qBAAqB,MAAM;AAAA,IAEpC,KAAK;AACH,aAAO,qBAAqB,MAAM;AAAA,IAEpC,KAAK;AACH,aAAO,uBAAuB,MAAM;AAAA,IAEtC,KAAK;AACH,aAAO,yBAAyB,MAAM;AAAA,IAExC,SAAS;AAGP,YAAM,cAAqB,OAAO;AAClC,YAAM,IAAI,SAAS;AAAA,QACjB,SAAS,qCAAqC,OAAO,WAAW,CAAC;AAAA,QACjE,UAAU,OAAO,WAAW;AAAA,QAC5B,WAAW;AAAA,MACb,CAAC;AAAA,IACH;AAAA,EACF;AACF;AAcO,SAAS,oBACd,UACA,OACA,WACW;AACX,QAAM,SAAS,cAAc,QAAQ;AACrC,SAAO,aAAa,EAAE,UAAU,OAAO,QAAQ,GAAG,UAAU,CAAC;AAC/D;AAGA,SAAS,cAAc,UAA+C;AACpE,QAAM,YAAyD;AAAA,IAC7D,WAAW;AAAA,IACX,QAAQ;AAAA,IACR,QAAQ;AAAA,IACR,UAAU;AAAA,IACV,YAAY;AAAA,EACd;AAEA,QAAM,SAAS,UAAU,QAAQ;AACjC,QAAM,SAAS,QAAQ,IAAI,MAAM;AAEjC,MAAI,WAAW,UAAa,OAAO,KAAK,MAAM,IAAI;AAChD,UAAM,IAAI,SAAS;AAAA,MACjB,SAAS,mBAAmB,MAAM,yDAAyD,QAAQ;AAAA,MACnG;AAAA,MACA,WAAW;AAAA,IACb,CAAC;AAAA,EACH;AAEA,SAAO;AACT;","names":["PROVIDER","normalizeUsage","PROVIDER","normalizeUsage","OpenAI","PROVIDER","normalizeUsage","OpenAI"]}
1
+ {"version":3,"sources":["../src/providers/anthropic.ts","../src/types.ts","../src/retry.ts","../src/providers/deepseek.ts","../src/providers/gemini.ts","../src/providers/openai.ts","../src/providers/perplexity.ts","../src/client.ts"],"sourcesContent":["/**\n * Anthropic Claude provider for @diabolicallabs/llm-client.\n *\n * Implements: complete(), stream(), structured()\n *\n * Token normalization:\n * Anthropic: input_tokens / output_tokens / cache_creation_input_tokens / cache_read_input_tokens\n * → LlmUsage: inputTokens / outputTokens / totalTokens / cacheCreationTokens / cacheReadTokens\n *\n * Error mapping:\n * APIStatusError.status → LlmError.statusCode + retryable flag\n * APIConnectionError → retryable: true\n */\n\nimport Anthropic from '@anthropic-ai/sdk';\nimport { normalizeThrownError, withRetry } from '../retry.js';\nimport type {\n LlmCallOptions,\n LlmClient,\n LlmClientConfig,\n LlmMessage,\n LlmResponse,\n LlmStreamChunk,\n LlmStructuredResponse,\n LlmUsage,\n} from '../types.js';\nimport { LlmError } from '../types.js';\n\nconst PROVIDER = 'anthropic';\n\n/** Normalize Anthropic's usage object to LlmUsage. */\nfunction normalizeUsage(usage: Anthropic.Usage | undefined): LlmUsage {\n const inputTokens = usage?.input_tokens ?? 0;\n const outputTokens = usage?.output_tokens ?? 0;\n return {\n inputTokens,\n outputTokens,\n totalTokens: inputTokens + outputTokens,\n // cache_creation_input_tokens and cache_read_input_tokens are present on\n // extended usage objects from prompt caching — cast to access them safely.\n cacheCreationTokens: (usage as Anthropic.Usage & { cache_creation_input_tokens?: number })\n ?.cache_creation_input_tokens,\n cacheReadTokens: (usage as Anthropic.Usage & { cache_read_input_tokens?: number })\n ?.cache_read_input_tokens,\n };\n}\n\n/** Convert LlmMessages to Anthropic's message format. Extracts system prompt. */\nfunction buildAnthropicMessages(messages: LlmMessage[]): {\n system: string | undefined;\n messages: Anthropic.MessageParam[];\n} {\n const systemMessages = messages.filter((m) => m.role === 'system');\n const conversationMessages = messages.filter((m) => m.role !== 'system');\n\n const system =\n systemMessages.length > 0 ? systemMessages.map((m) => m.content).join('\\n') : undefined;\n\n const anthropicMessages: Anthropic.MessageParam[] = conversationMessages.map((m) => ({\n role: m.role as 'user' | 'assistant',\n content: m.content,\n }));\n\n return { system, messages: anthropicMessages };\n}\n\n/**\n * Normalize any Anthropic SDK error into LlmError.\n * Exported for direct unit testing of the normalization logic.\n */\nexport function normalizeAnthropicError(err: unknown): LlmError {\n if (err instanceof LlmError) return err;\n\n // Anthropic SDK v0.94+: uses Anthropic.APIError as the base class with a `.status` field.\n // APIConnectionError is a subclass of APIError with status: undefined — check it first\n // so network failures are always retryable regardless of the missing status code.\n if (\n typeof Anthropic.APIConnectionError === 'function' &&\n err instanceof Anthropic.APIConnectionError\n ) {\n return new LlmError({\n message: err.message,\n provider: PROVIDER,\n retryable: true,\n cause: err,\n });\n }\n\n // Catch all other APIError subclasses: RateLimitError (429), AuthenticationError (401),\n // InternalServerError (500), etc. 
Retryability is determined by HTTP status code.\n if (typeof Anthropic.APIError === 'function' && err instanceof Anthropic.APIError) {\n const status: number | undefined = err.status;\n if (status !== undefined) {\n const retryable = [429, 502, 503, 504].includes(status) || status >= 500;\n return new LlmError({\n message: err.message,\n provider: PROVIDER,\n statusCode: status,\n retryable,\n cause: err,\n });\n }\n return new LlmError({ message: err.message, provider: PROVIDER, retryable: false, cause: err });\n }\n\n return normalizeThrownError(err, PROVIDER);\n}\n\n/** Create the Anthropic provider implementation. */\nexport function createAnthropicProvider(config: LlmClientConfig): LlmClient {\n const client = new Anthropic({\n apiKey: config.apiKey,\n timeout: config.timeoutMs ?? 30_000,\n maxRetries: 0, // We manage retries ourselves via withRetry\n });\n\n const retryOpts = {\n maxRetries: config.maxRetries ?? 3,\n baseDelayMs: config.baseDelayMs ?? 1_000,\n provider: PROVIDER,\n };\n\n async function complete(messages: LlmMessage[], options?: LlmCallOptions): Promise<LlmResponse> {\n const model = options?.model ?? config.model;\n const { system, messages: anthropicMessages } = buildAnthropicMessages(messages);\n\n const start = Date.now();\n\n return withRetry(async () => {\n try {\n const params: Anthropic.MessageCreateParamsNonStreaming = {\n model,\n messages: anthropicMessages,\n max_tokens: options?.maxTokens ?? config.maxTokens ?? 1024,\n };\n\n if (system !== undefined) params.system = system;\n const temperature = options?.temperature ?? config.temperature;\n if (temperature !== undefined) {\n params.temperature = temperature;\n }\n\n const response = await client.messages.create(params);\n\n const content = response.content\n .filter((block): block is Anthropic.TextBlock => block.type === 'text')\n .map((block) => block.text)\n .join('');\n\n return {\n content,\n model: response.model,\n usage: normalizeUsage(response.usage),\n latencyMs: Date.now() - start,\n };\n } catch (err) {\n throw normalizeAnthropicError(err);\n }\n }, retryOpts);\n }\n\n async function* stream(\n messages: LlmMessage[],\n options?: LlmCallOptions\n ): AsyncGenerator<LlmStreamChunk> {\n const model = options?.model ?? config.model;\n const { system, messages: anthropicMessages } = buildAnthropicMessages(messages);\n\n const params: Anthropic.MessageStreamParams = {\n model,\n messages: anthropicMessages,\n max_tokens: options?.maxTokens ?? config.maxTokens ?? 1024,\n };\n\n if (system !== undefined) params.system = system;\n const streamTemperature = options?.temperature ?? 
config.temperature;\n if (streamTemperature !== undefined) {\n params.temperature = streamTemperature;\n }\n\n let sdkStream: Awaited<ReturnType<typeof client.messages.stream>>;\n\n try {\n sdkStream = client.messages.stream(params);\n } catch (err) {\n throw normalizeAnthropicError(err);\n }\n\n // Accumulate usage — Anthropic sends it in the message_delta event at stream end\n let finalUsage: LlmUsage | undefined;\n\n try {\n for await (const event of sdkStream) {\n if (event.type === 'content_block_delta' && event.delta.type === 'text_delta') {\n yield { token: event.delta.text };\n } else if (event.type === 'message_delta' && 'usage' in event) {\n // Merge input tokens from message_start with output tokens from message_delta\n const accum = await sdkStream.finalMessage();\n finalUsage = normalizeUsage(accum.usage);\n }\n }\n } catch (err) {\n // Propagate as a normalized LlmError regardless of whether streaming had started.\n // Partial stream errors cannot be recovered from — the consumer must handle them.\n throw normalizeAnthropicError(err);\n }\n\n // Yield usage on the final empty chunk\n if (finalUsage !== undefined) {\n yield { token: '', usage: finalUsage };\n }\n }\n\n async function structured<T>(\n messages: LlmMessage[],\n schema: { parse: (data: unknown) => T },\n options?: LlmCallOptions\n ): Promise<LlmStructuredResponse<T>> {\n // Anthropic JSON mode: append a system instruction to return only JSON.\n // We inject this into the messages so the provider returns parseable output.\n const jsonSystemInstruction: LlmMessage = {\n role: 'system',\n content:\n 'You must respond with valid JSON only. No explanations, no markdown code fences, no extra text. Your entire response must be valid JSON that can be parsed with JSON.parse().',\n };\n\n const augmentedMessages = [jsonSystemInstruction, ...messages];\n const start = Date.now();\n\n const response = await complete(augmentedMessages, options);\n\n let parsed: unknown;\n try {\n // Strip markdown code fences if the model included them despite the instruction\n const cleaned = response.content\n .replace(/^```(?:json)?\\s*/i, '')\n .replace(/\\s*```$/, '')\n .trim();\n parsed = JSON.parse(cleaned);\n } catch (err) {\n throw new LlmError({\n message: `Anthropic structured output: response is not valid JSON. Raw: ${response.content.slice(0, 200)}`,\n provider: PROVIDER,\n retryable: false,\n cause: err,\n });\n }\n\n let data: T;\n try {\n data = schema.parse(parsed);\n } catch (err) {\n throw new LlmError({\n message: `Anthropic structured output: response failed schema validation. 
${String(err)}`,\n provider: PROVIDER,\n retryable: false,\n cause: err,\n });\n }\n\n return {\n data,\n usage: response.usage,\n latencyMs: Date.now() - start,\n };\n }\n\n return {\n config,\n complete,\n stream,\n structured,\n };\n}\n","/**\n * Core type definitions for @diabolicallabs/llm-client.\n * These are the stable public API surface — implementation is in Week 2.\n * Types here match the spec in briefs/brief-platform.md §4.1 exactly.\n *\n * Week 5 additions:\n * LlmResponse.citations — populated by the Perplexity provider; undefined for all others.\n * LlmCallOptions — per-call options type extracted for reuse; adds providerOptions escape hatch.\n */\n\n// The canonical message format shared across all providers\nexport interface LlmMessage {\n role: 'system' | 'user' | 'assistant';\n content: string;\n}\n\n// Config passed to createClient\nexport interface LlmClientConfig {\n // Full 5-provider union — gemini, deepseek, perplexity are type-only stubs in Week 2\n provider: 'anthropic' | 'openai' | 'gemini' | 'deepseek' | 'perplexity';\n model: string; // e.g. 'claude-sonnet-4-6', 'gpt-4o', 'gemini-2.5-flash'\n apiKey: string;\n maxRetries?: number; // default: 3\n baseDelayMs?: number; // default: 1000 — exponential backoff base\n maxTokens?: number; // provider default if omitted\n temperature?: number; // provider default if omitted\n timeoutMs?: number; // default: 30000\n}\n\n// Normalized token usage — same shape regardless of provider\nexport interface LlmUsage {\n inputTokens: number;\n outputTokens: number;\n totalTokens: number;\n cacheCreationTokens?: number; // Anthropic prompt cache write tokens\n cacheReadTokens?: number; // Anthropic prompt cache read tokens\n}\n\n// Non-streaming response\nexport interface LlmResponse {\n content: string;\n model: string; // model ID actually used (may differ from requested)\n usage: LlmUsage;\n latencyMs: number;\n /**\n * Web citations returned by the Perplexity provider.\n * Populated only when the Perplexity API returns source references.\n * Always undefined for Anthropic, OpenAI, Gemini, and DeepSeek.\n * Deduplicated by URL within a single response.\n */\n citations?: Array<{\n url: string;\n title?: string;\n }>;\n}\n\n/**\n * Per-call options shared across complete(), stream(), and structured().\n * Extends the standard model/maxTokens/temperature overrides with:\n * providerOptions — generic escape hatch for provider-specific parameters.\n * The Perplexity provider reads search_domain_filter and\n * search_recency_filter from this field; other providers ignore it.\n * Unknown fields are passed through unchanged.\n */\nexport interface LlmCallOptions\n extends Partial<Pick<LlmClientConfig, 'model' | 'maxTokens' | 'temperature'>> {\n providerOptions?: Record<string, unknown>;\n}\n\n// Streaming chunk\nexport interface LlmStreamChunk {\n token: string;\n usage?: LlmUsage; // present only on the final chunk\n}\n\n// Normalized error — wraps provider-specific errors\nexport class LlmError extends Error {\n override readonly name = 'LlmError';\n readonly provider: string;\n readonly statusCode: number | undefined;\n readonly retryable: boolean;\n // `cause` is declared on Error in lib.es2022.error.d.ts as `cause?: unknown`\n // We override it here to make it always present (not optional) after construction.\n override readonly cause: unknown;\n\n constructor(opts: {\n message: string;\n provider: string;\n statusCode?: number;\n retryable: boolean;\n cause?: unknown;\n }) {\n super(opts.message, { cause: opts.cause });\n 
this.provider = opts.provider;\n    this.statusCode = opts.statusCode;\n    this.retryable = opts.retryable;\n    this.cause = opts.cause;\n  }\n}\n\n// Structured output — Zod schema inference\nexport type LlmStructuredResponse<T> = {\n  data: T;\n  usage: LlmUsage;\n  latencyMs: number;\n};\n\n// The LlmClient interface — what consumers program against\nexport interface LlmClient {\n  readonly config: Readonly<LlmClientConfig>;\n\n  // Non-streaming completion\n  complete(messages: LlmMessage[], options?: LlmCallOptions): Promise<LlmResponse>;\n\n  // Streaming completion — async generator of chunks\n  stream(messages: LlmMessage[], options?: LlmCallOptions): AsyncGenerator<LlmStreamChunk>;\n\n  // Structured output — parses and validates the response against a Zod schema\n  // Forces JSON mode on providers that support it; falls back to parse-and-validate\n  structured<T>(\n    messages: LlmMessage[],\n    // Using a narrower interface than the full ZodType to avoid a hard zod dependency at types level\n    schema: { parse: (data: unknown) => T },\n    options?: LlmCallOptions\n  ): Promise<LlmStructuredResponse<T>>;\n}\n","/**\n * Exponential backoff with full jitter — shared across all providers.\n *\n * Formula: delay = random(0, baseDelayMs * 2^attempt)\n *\n * Retryable HTTP statuses: 429 (rate limit), 502/503/504, and any other 5xx (server errors).\n * Retryable network codes: ECONNRESET, ETIMEDOUT, ECONNABORTED.\n * Non-retryable: 400 (bad request), 401/403 (auth), 404.\n */\n\nimport { LlmError } from './types.js';\n\n// HTTP status codes that should trigger a retry\nconst RETRYABLE_HTTP_STATUSES = new Set([429, 502, 503, 504]);\n\n// Network error codes that should trigger a retry\nconst RETRYABLE_ERROR_CODES = new Set(['ECONNRESET', 'ETIMEDOUT', 'ECONNABORTED']);\n\n// HTTP status codes that should never retry (fail immediately)\nconst NON_RETRYABLE_HTTP_STATUSES = new Set([400, 401, 403, 404]);\n\n/** Determine if an HTTP status code is retryable. */\nexport function isRetryableStatus(statusCode: number): boolean {\n  if (RETRYABLE_HTTP_STATUSES.has(statusCode)) return true;\n  if (NON_RETRYABLE_HTTP_STATUSES.has(statusCode)) return false;\n  // Treat any 5xx not explicitly handled as retryable\n  return statusCode >= 500;\n}\n\n/** Determine if a network error code is retryable. */\nexport function isRetryableErrorCode(code: string): boolean {\n  return RETRYABLE_ERROR_CODES.has(code);\n}\n\n/** Compute the delay in ms for attempt N (0-indexed). Full jitter. */\nexport function computeBackoffMs(attempt: number, baseDelayMs: number): number {\n  const ceiling = baseDelayMs * 2 ** attempt;\n  return Math.random() * ceiling;\n}\n\nexport interface RetryOptions {\n  maxRetries: number;\n  baseDelayMs: number;\n  provider: string;\n}\n\n/**\n * Execute `fn` with retry logic. 
Wraps the result in structured error normalization.\n * `fn` receives the current attempt number (0-indexed).\n *\n * Throws LlmError after all retries are exhausted.\n */\nexport async function withRetry<T>(\n fn: (attempt: number) => Promise<T>,\n opts: RetryOptions\n): Promise<T> {\n let lastError: LlmError | undefined;\n\n for (let attempt = 0; attempt <= opts.maxRetries; attempt++) {\n try {\n return await fn(attempt);\n } catch (err) {\n const llmErr = normalizeThrownError(err, opts.provider);\n\n if (!llmErr.retryable || attempt === opts.maxRetries) {\n throw llmErr;\n }\n\n lastError = llmErr;\n const delayMs = computeBackoffMs(attempt, opts.baseDelayMs);\n await sleep(delayMs);\n }\n }\n\n // This path is unreachable — the loop always throws or returns.\n // TypeScript needs this for exhaustiveness.\n throw (\n lastError ??\n new LlmError({\n message: 'Unexpected retry exhaustion',\n provider: opts.provider,\n retryable: false,\n })\n );\n}\n\n/** Normalize any thrown value into an LlmError. */\nexport function normalizeThrownError(err: unknown, provider: string): LlmError {\n if (err instanceof LlmError) return err;\n\n if (err instanceof Error) {\n const errWithCode = err as Error & { status?: number; statusCode?: number; code?: string };\n\n const statusCode = errWithCode.status ?? errWithCode.statusCode;\n\n // Check for retryable network error codes\n if (errWithCode.code !== undefined && isRetryableErrorCode(errWithCode.code)) {\n if (statusCode !== undefined) {\n return new LlmError({\n message: err.message,\n provider,\n statusCode,\n retryable: true,\n cause: err,\n });\n }\n return new LlmError({ message: err.message, provider, retryable: true, cause: err });\n }\n\n // Check for retryable HTTP status codes\n if (statusCode !== undefined) {\n return new LlmError({\n message: err.message,\n provider,\n statusCode,\n retryable: isRetryableStatus(statusCode),\n cause: err,\n });\n }\n\n return new LlmError({\n message: err.message,\n provider,\n retryable: false,\n cause: err,\n });\n }\n\n return new LlmError({\n message: String(err),\n provider,\n retryable: false,\n cause: err,\n });\n}\n\nfunction sleep(ms: number): Promise<void> {\n return new Promise((resolve) => setTimeout(resolve, ms));\n}\n","/**\n * DeepSeek provider for @diabolicallabs/llm-client.\n *\n * DeepSeek's chat completions API is fully OpenAI-compatible, so this provider\n * uses the OpenAI SDK pointed at DeepSeek's base URL.\n *\n * API base URL: https://api.deepseek.com\n * Docs: https://platform.deepseek.com/api-docs/\n *\n * Implements: complete(), stream(), structured()\n *\n * Token normalization:\n * DeepSeek returns standard OpenAI-format usage: prompt_tokens / completion_tokens / total_tokens\n * → LlmUsage: inputTokens / outputTokens / totalTokens\n *\n * Error mapping:\n * APIConnectionError → retryable: true\n * APIError with status 429 / 5xx → retryable: true\n * Other APIErrors → non-retryable\n *\n * Note: DeepSeek does not support the json_object response_format on all models.\n * structured() injects a system prompt and parses the raw response. 
If the model\n * includes markdown fences, they are stripped before parsing.\n */\n\nimport OpenAI from 'openai';\nimport { normalizeThrownError, withRetry } from '../retry.js';\nimport type {\n LlmCallOptions,\n LlmClient,\n LlmClientConfig,\n LlmMessage,\n LlmResponse,\n LlmStreamChunk,\n LlmStructuredResponse,\n LlmUsage,\n} from '../types.js';\nimport { LlmError } from '../types.js';\n\nconst PROVIDER = 'deepseek';\nconst DEEPSEEK_BASE_URL = 'https://api.deepseek.com';\n\n/** Normalize OpenAI-format usage object to LlmUsage. */\nfunction normalizeUsage(usage: OpenAI.CompletionUsage | undefined | null): LlmUsage {\n const inputTokens = usage?.prompt_tokens ?? 0;\n const outputTokens = usage?.completion_tokens ?? 0;\n return {\n inputTokens,\n outputTokens,\n totalTokens: usage?.total_tokens ?? inputTokens + outputTokens,\n };\n}\n\n/** Convert LlmMessages to OpenAI-format chat message params (compatible with DeepSeek). */\nfunction buildMessages(messages: LlmMessage[]): OpenAI.Chat.ChatCompletionMessageParam[] {\n return messages.map((m) => ({\n role: m.role,\n content: m.content,\n }));\n}\n\n/**\n * Normalize any DeepSeek / OpenAI SDK error into LlmError.\n * Exported for direct unit testing of the normalization logic.\n *\n * Uses the same OpenAI SDK error hierarchy (APIConnectionError before APIError)\n * since the client is an OpenAI instance pointed at DeepSeek's API.\n */\nexport function normalizeDeepSeekError(err: unknown): LlmError {\n if (err instanceof LlmError) return err;\n\n // APIConnectionError is a subclass of APIError with status: undefined —\n // check it first so network failures are always retryable.\n if (typeof OpenAI.APIConnectionError === 'function' && err instanceof OpenAI.APIConnectionError) {\n return new LlmError({\n message: err.message,\n provider: PROVIDER,\n retryable: true,\n cause: err,\n });\n }\n\n // Catch all other APIError subclasses: RateLimitError (429), AuthenticationError (401), etc.\n if (typeof OpenAI.APIError === 'function' && err instanceof OpenAI.APIError) {\n const status: number | undefined = err.status;\n if (status !== undefined) {\n const retryable = [429, 502, 503, 504].includes(status) || status >= 500;\n return new LlmError({\n message: err.message,\n provider: PROVIDER,\n statusCode: status,\n retryable,\n cause: err,\n });\n }\n return new LlmError({ message: err.message, provider: PROVIDER, retryable: false, cause: err });\n }\n\n return normalizeThrownError(err, PROVIDER);\n}\n\n/** Create the DeepSeek provider implementation. */\nexport function createDeepSeekProvider(config: LlmClientConfig): LlmClient {\n // OpenAI SDK pointed at DeepSeek's OpenAI-compatible endpoint\n const client = new OpenAI({\n apiKey: config.apiKey,\n baseURL: DEEPSEEK_BASE_URL,\n timeout: config.timeoutMs ?? 30_000,\n maxRetries: 0, // Retries managed by withRetry\n });\n\n const retryOpts = {\n maxRetries: config.maxRetries ?? 3,\n baseDelayMs: config.baseDelayMs ?? 1_000,\n provider: PROVIDER,\n };\n\n async function complete(messages: LlmMessage[], options?: LlmCallOptions): Promise<LlmResponse> {\n const model = options?.model ?? config.model;\n const chatMessages = buildMessages(messages);\n const start = Date.now();\n\n return withRetry(async () => {\n try {\n const params: OpenAI.Chat.ChatCompletionCreateParamsNonStreaming = {\n model,\n messages: chatMessages,\n stream: false,\n };\n\n const maxTokens = options?.maxTokens ?? 
config.maxTokens;\n if (maxTokens !== undefined) params.max_tokens = maxTokens;\n\n const temperature = options?.temperature ?? config.temperature;\n if (temperature !== undefined) params.temperature = temperature;\n\n const response = await client.chat.completions.create(params);\n const content = response.choices.map((c) => c.message.content ?? '').join('');\n\n return {\n content,\n model: response.model,\n usage: normalizeUsage(response.usage),\n latencyMs: Date.now() - start,\n };\n } catch (err) {\n throw normalizeDeepSeekError(err);\n }\n }, retryOpts);\n }\n\n async function* stream(\n messages: LlmMessage[],\n options?: LlmCallOptions\n ): AsyncGenerator<LlmStreamChunk> {\n const model = options?.model ?? config.model;\n const chatMessages = buildMessages(messages);\n\n const params: OpenAI.Chat.ChatCompletionCreateParamsStreaming = {\n model,\n messages: chatMessages,\n stream: true,\n stream_options: { include_usage: true },\n };\n\n const maxTokens = options?.maxTokens ?? config.maxTokens;\n if (maxTokens !== undefined) params.max_tokens = maxTokens;\n\n const temperature = options?.temperature ?? config.temperature;\n if (temperature !== undefined) params.temperature = temperature;\n\n let sdkStream: Awaited<ReturnType<typeof client.chat.completions.create>>;\n\n try {\n sdkStream = await client.chat.completions.create(params);\n } catch (err) {\n throw normalizeDeepSeekError(err);\n }\n\n let finalUsage: LlmUsage | undefined;\n\n try {\n for await (const chunk of sdkStream) {\n const delta = chunk.choices[0]?.delta.content;\n if (delta !== undefined && delta !== null && delta.length > 0) {\n yield { token: delta };\n }\n\n // Usage arrives in the final chunk when stream_options.include_usage is true\n if (chunk.usage !== undefined && chunk.usage !== null) {\n finalUsage = normalizeUsage(chunk.usage);\n }\n }\n } catch (err) {\n throw normalizeDeepSeekError(err);\n }\n\n if (finalUsage !== undefined) {\n yield { token: '', usage: finalUsage };\n }\n }\n\n async function structured<T>(\n messages: LlmMessage[],\n schema: { parse: (data: unknown) => T },\n options?: LlmCallOptions\n ): Promise<LlmStructuredResponse<T>> {\n // Inject JSON-only system instruction. DeepSeek does not guarantee json_object\n // response_format support across all models, so we rely on prompt-level enforcement.\n const jsonSystemInstruction: LlmMessage = {\n role: 'system',\n content:\n 'You must respond with valid JSON only. No explanations, no markdown code fences, no extra text. Your entire response must be valid JSON that can be parsed with JSON.parse().',\n };\n\n const augmentedMessages = [jsonSystemInstruction, ...messages];\n const model = options?.model ?? config.model;\n const chatMessages = buildMessages(augmentedMessages);\n const start = Date.now();\n\n const rawResponse = await withRetry(async () => {\n try {\n const params: OpenAI.Chat.ChatCompletionCreateParamsNonStreaming = {\n model,\n messages: chatMessages,\n stream: false,\n };\n\n const maxTokens = options?.maxTokens ?? config.maxTokens;\n if (maxTokens !== undefined) params.max_tokens = maxTokens;\n\n const temperature = options?.temperature ?? config.temperature;\n if (temperature !== undefined) params.temperature = temperature;\n\n return await client.chat.completions.create(params);\n } catch (err) {\n throw normalizeDeepSeekError(err);\n }\n }, retryOpts);\n\n const rawContent = rawResponse.choices[0]?.message.content ?? 
'';\n\n let parsed: unknown;\n try {\n // Strip markdown fences if the model included them despite the instruction\n const cleaned = rawContent\n .replace(/^```(?:json)?\\s*/i, '')\n .replace(/\\s*```$/, '')\n .trim();\n parsed = JSON.parse(cleaned);\n } catch (err) {\n throw new LlmError({\n message: `DeepSeek structured output: response is not valid JSON. Raw: ${rawContent.slice(0, 200)}`,\n provider: PROVIDER,\n retryable: false,\n cause: err,\n });\n }\n\n let data: T;\n try {\n data = schema.parse(parsed);\n } catch (err) {\n throw new LlmError({\n message: `DeepSeek structured output: response failed schema validation. ${String(err)}`,\n provider: PROVIDER,\n retryable: false,\n cause: err,\n });\n }\n\n return {\n data,\n usage: normalizeUsage(rawResponse.usage),\n latencyMs: Date.now() - start,\n };\n }\n\n return {\n config,\n complete,\n stream,\n structured,\n };\n}\n","/**\n * Google Gemini provider for @diabolicallabs/llm-client.\n *\n * Uses the @google/genai SDK (v1.x — not the deprecated @google/generative-ai).\n *\n * Implements: complete(), stream(), structured()\n *\n * Token normalization:\n * Gemini: usageMetadata.promptTokenCount / candidatesTokenCount / totalTokenCount\n * → LlmUsage: inputTokens / outputTokens / totalTokens\n *\n * Error mapping:\n * ApiError (public SDK class, status: number always defined):\n * retryable for 429 / 5xx\n * non-retryable for 4xx (except 429)\n * Other errors → normalizeThrownError (handles ECONNRESET / ETIMEDOUT as retryable)\n *\n * API notes:\n * - System instructions are passed via config.systemInstruction (not mixed into contents)\n * - Role mapping: 'user' → 'user', 'assistant' → 'model'\n * - Streaming via ai.models.generateContentStream() returns AsyncGenerator<GenerateContentResponse>\n * - Text is accessed via response.text getter on GenerateContentResponse\n * - Structured output: responseMimeType: 'application/json' in GenerateContentConfig\n *\n * SDK error class note:\n * The @google/genai public API exports only ApiError (lowercase 'a'), which has status: number.\n * Internal APIError / APIConnectionError classes (uppercase) are NOT exported from the package\n * root and must not be imported from internal dist paths.\n * Network errors (ECONNRESET, ETIMEDOUT) arrive as plain Error objects caught by normalizeThrownError.\n */\n\nimport {\n ApiError,\n type Content,\n type GenerateContentConfig,\n type GenerateContentResponse,\n type GenerateContentResponseUsageMetadata,\n GoogleGenAI,\n} from '@google/genai';\nimport { normalizeThrownError, withRetry } from '../retry.js';\nimport type {\n LlmCallOptions,\n LlmClient,\n LlmClientConfig,\n LlmMessage,\n LlmResponse,\n LlmStreamChunk,\n LlmStructuredResponse,\n LlmUsage,\n} from '../types.js';\nimport { LlmError } from '../types.js';\n\nconst PROVIDER = 'gemini';\n\n/** Normalize Gemini's usageMetadata to LlmUsage. */\nfunction normalizeUsage(meta: GenerateContentResponseUsageMetadata | undefined): LlmUsage {\n const inputTokens = meta?.promptTokenCount ?? 0;\n const outputTokens = meta?.candidatesTokenCount ?? 0;\n return {\n inputTokens,\n outputTokens,\n totalTokens: meta?.totalTokenCount ?? 
inputTokens + outputTokens,\n };\n}\n\n/**\n * Convert LlmMessages to Gemini's Content array format.\n * Extracts system message — Gemini treats system instructions separately from contents.\n * Role mapping: 'user' → 'user', 'assistant' → 'model' (Gemini API requires 'model').\n */\nfunction buildGeminiContents(messages: LlmMessage[]): {\n system: string | undefined;\n contents: Content[];\n} {\n const systemMessages = messages.filter((m) => m.role === 'system');\n const conversationMessages = messages.filter((m) => m.role !== 'system');\n\n const system =\n systemMessages.length > 0 ? systemMessages.map((m) => m.content).join('\\n') : undefined;\n\n const contents: Content[] = conversationMessages.map((m) => ({\n role: m.role === 'assistant' ? 'model' : 'user',\n parts: [{ text: m.content }],\n }));\n\n return { system, contents };\n}\n\n/**\n * Normalize any Gemini SDK error into LlmError.\n * Exported for direct unit testing of the normalization logic.\n *\n * ApiError (public SDK class) always has status: number, so there is no undefined-status branch.\n * Network errors (no HTTP status) arrive as plain Error objects; normalizeThrownError\n * handles retryable error codes (ECONNRESET, ETIMEDOUT, etc.).\n */\nexport function normalizeGeminiError(err: unknown): LlmError {\n if (err instanceof LlmError) return err;\n\n // ApiError is the only publicly-exported SDK error class.\n // status is always number (not undefined) per the ApiError type definition.\n if (err instanceof ApiError) {\n const retryable = err.status === 429 || err.status >= 500;\n return new LlmError({\n message: err.message,\n provider: PROVIDER,\n statusCode: err.status,\n retryable,\n cause: err,\n });\n }\n\n // Network errors (ECONNRESET, ETIMEDOUT, etc.) arrive as plain Error objects.\n // normalizeThrownError classifies retryable codes and handles the unknown-error case.\n return normalizeThrownError(err, PROVIDER);\n}\n\n/** Create the Gemini provider implementation. */\nexport function createGeminiProvider(config: LlmClientConfig): LlmClient {\n const ai = new GoogleGenAI({\n apiKey: config.apiKey,\n httpOptions: {\n timeout: config.timeoutMs ?? 30_000,\n },\n });\n\n const retryOpts = {\n maxRetries: config.maxRetries ?? 3,\n baseDelayMs: config.baseDelayMs ?? 1_000,\n provider: PROVIDER,\n };\n\n async function complete(messages: LlmMessage[], options?: LlmCallOptions): Promise<LlmResponse> {\n const model = options?.model ?? config.model;\n const { system, contents } = buildGeminiContents(messages);\n const start = Date.now();\n\n return withRetry(async () => {\n try {\n // Build config object — always passed (empty object is valid GenerateContentConfig)\n const geminiConfig: GenerateContentConfig = {};\n\n if (system !== undefined) geminiConfig.systemInstruction = system;\n const maxTokens = options?.maxTokens ?? config.maxTokens;\n if (maxTokens !== undefined) geminiConfig.maxOutputTokens = maxTokens;\n const temperature = options?.temperature ?? config.temperature;\n if (temperature !== undefined) geminiConfig.temperature = temperature;\n\n const response = await ai.models.generateContent({\n model,\n contents,\n config: geminiConfig,\n });\n\n return {\n content: response.text ?? '',\n model,\n usage: normalizeUsage(response.usageMetadata),\n latencyMs: Date.now() - start,\n };\n } catch (err) {\n throw normalizeGeminiError(err);\n }\n }, retryOpts);\n }\n\n async function* stream(\n messages: LlmMessage[],\n options?: LlmCallOptions\n ): AsyncGenerator<LlmStreamChunk> {\n const model = options?.model ?? 
config.model;\n const { system, contents } = buildGeminiContents(messages);\n\n // Build config — always passed (empty object is valid GenerateContentConfig)\n const geminiConfig: GenerateContentConfig = {};\n if (system !== undefined) geminiConfig.systemInstruction = system;\n const maxTokens = options?.maxTokens ?? config.maxTokens;\n if (maxTokens !== undefined) geminiConfig.maxOutputTokens = maxTokens;\n const temperature = options?.temperature ?? config.temperature;\n if (temperature !== undefined) geminiConfig.temperature = temperature;\n\n let sdkStream: AsyncGenerator<GenerateContentResponse>;\n\n try {\n sdkStream = await ai.models.generateContentStream({\n model,\n contents,\n config: geminiConfig,\n });\n } catch (err) {\n throw normalizeGeminiError(err);\n }\n\n let finalUsage: LlmUsage | undefined;\n\n try {\n for await (const chunk of sdkStream) {\n const text = chunk.text;\n if (text !== undefined && text.length > 0) {\n yield { token: text };\n }\n // Capture usage from each chunk — the final chunk has the complete totals\n if (chunk.usageMetadata !== undefined) {\n finalUsage = normalizeUsage(chunk.usageMetadata);\n }\n }\n } catch (err) {\n throw normalizeGeminiError(err);\n }\n\n if (finalUsage !== undefined) {\n yield { token: '', usage: finalUsage };\n }\n }\n\n async function structured<T>(\n messages: LlmMessage[],\n schema: { parse: (data: unknown) => T },\n options?: LlmCallOptions\n ): Promise<LlmStructuredResponse<T>> {\n const augmentedMessages: LlmMessage[] = [\n {\n role: 'system',\n content:\n 'You must respond with valid JSON only. No explanations, no markdown code fences, no extra text. Your entire response must be valid JSON that can be parsed with JSON.parse().',\n },\n ...messages,\n ];\n\n const model = options?.model ?? config.model;\n const { system, contents } = buildGeminiContents(augmentedMessages);\n const start = Date.now();\n\n const rawResponse = await withRetry(async () => {\n try {\n const geminiConfig: GenerateContentConfig = {\n // Instruct Gemini to return JSON directly\n responseMimeType: 'application/json',\n };\n\n if (system !== undefined) geminiConfig.systemInstruction = system;\n const maxTokens = options?.maxTokens ?? config.maxTokens;\n if (maxTokens !== undefined) geminiConfig.maxOutputTokens = maxTokens;\n const temperature = options?.temperature ?? config.temperature;\n if (temperature !== undefined) geminiConfig.temperature = temperature;\n\n return await ai.models.generateContent({\n model,\n contents,\n config: geminiConfig,\n });\n } catch (err) {\n throw normalizeGeminiError(err);\n }\n }, retryOpts);\n\n const rawContent = rawResponse.text ?? '';\n\n let parsed: unknown;\n try {\n // Strip markdown code fences if the model included them despite the instruction\n const cleaned = rawContent\n .replace(/^```(?:json)?\\s*/i, '')\n .replace(/\\s*```$/, '')\n .trim();\n parsed = JSON.parse(cleaned);\n } catch (err) {\n throw new LlmError({\n message: `Gemini structured output: response is not valid JSON. Raw: ${rawContent.slice(0, 200)}`,\n provider: PROVIDER,\n retryable: false,\n cause: err,\n });\n }\n\n let data: T;\n try {\n data = schema.parse(parsed);\n } catch (err) {\n throw new LlmError({\n message: `Gemini structured output: response failed schema validation. 
${String(err)}`,\n provider: PROVIDER,\n retryable: false,\n cause: err,\n });\n }\n\n return {\n data,\n usage: normalizeUsage(rawResponse.usageMetadata),\n latencyMs: Date.now() - start,\n };\n }\n\n return {\n config,\n complete,\n stream,\n structured,\n };\n}\n","/**\n * OpenAI provider for @diabolicallabs/llm-client.\n *\n * Implements: complete(), stream(), structured()\n *\n * Token normalization:\n * OpenAI: prompt_tokens / completion_tokens\n * → LlmUsage: inputTokens / outputTokens / totalTokens\n *\n * Error mapping:\n * APIStatusError.status → LlmError.statusCode + retryable flag\n * APIConnectionError → retryable: true\n *\n * Structured output uses OpenAI's response_format: { type: 'json_object' }.\n * For strict schema enforcement, the schema is described in the system prompt.\n */\n\nimport OpenAI from 'openai';\nimport { normalizeThrownError, withRetry } from '../retry.js';\nimport type {\n LlmCallOptions,\n LlmClient,\n LlmClientConfig,\n LlmMessage,\n LlmResponse,\n LlmStreamChunk,\n LlmStructuredResponse,\n LlmUsage,\n} from '../types.js';\nimport { LlmError } from '../types.js';\n\nconst PROVIDER = 'openai';\n\n/** Normalize OpenAI's usage object to LlmUsage. */\nfunction normalizeUsage(usage: OpenAI.CompletionUsage | undefined | null): LlmUsage {\n const inputTokens = usage?.prompt_tokens ?? 0;\n const outputTokens = usage?.completion_tokens ?? 0;\n return {\n inputTokens,\n outputTokens,\n totalTokens: usage?.total_tokens ?? inputTokens + outputTokens,\n };\n}\n\n/** Convert LlmMessages to OpenAI's chat message format. */\nfunction buildOpenAIMessages(messages: LlmMessage[]): OpenAI.Chat.ChatCompletionMessageParam[] {\n return messages.map((m) => ({\n role: m.role,\n content: m.content,\n }));\n}\n\n/**\n * Normalize any OpenAI SDK error into LlmError.\n * Exported for direct unit testing of the normalization logic.\n */\nexport function normalizeOpenAIError(err: unknown): LlmError {\n if (err instanceof LlmError) return err;\n\n // OpenAI SDK v6+: uses OpenAI.APIError as the base class with a `.status` field.\n // APIConnectionError is a subclass of APIError with status: undefined — check it first\n // so network failures are always retryable regardless of the missing status code.\n if (typeof OpenAI.APIConnectionError === 'function' && err instanceof OpenAI.APIConnectionError) {\n return new LlmError({\n message: err.message,\n provider: PROVIDER,\n retryable: true,\n cause: err,\n });\n }\n\n // Catch all other APIError subclasses: RateLimitError (429), AuthenticationError (401),\n // InternalServerError (500), etc. Retryability is determined by HTTP status code.\n if (typeof OpenAI.APIError === 'function' && err instanceof OpenAI.APIError) {\n const status: number | undefined = err.status;\n if (status !== undefined) {\n const retryable = [429, 502, 503, 504].includes(status) || status >= 500;\n return new LlmError({\n message: err.message,\n provider: PROVIDER,\n statusCode: status,\n retryable,\n cause: err,\n });\n }\n return new LlmError({ message: err.message, provider: PROVIDER, retryable: false, cause: err });\n }\n\n return normalizeThrownError(err, PROVIDER);\n}\n\n/** Create the OpenAI provider implementation. */\nexport function createOpenAIProvider(config: LlmClientConfig): LlmClient {\n const client = new OpenAI({\n apiKey: config.apiKey,\n timeout: config.timeoutMs ?? 30_000,\n maxRetries: 0, // We manage retries ourselves via withRetry\n });\n\n const retryOpts = {\n maxRetries: config.maxRetries ?? 3,\n baseDelayMs: config.baseDelayMs ?? 
1_000,\n provider: PROVIDER,\n };\n\n async function complete(messages: LlmMessage[], options?: LlmCallOptions): Promise<LlmResponse> {\n const model = options?.model ?? config.model;\n const openAIMessages = buildOpenAIMessages(messages);\n const start = Date.now();\n\n return withRetry(async () => {\n try {\n const params: OpenAI.Chat.ChatCompletionCreateParamsNonStreaming = {\n model,\n messages: openAIMessages,\n stream: false,\n };\n\n const maxTokens = options?.maxTokens ?? config.maxTokens;\n if (maxTokens !== undefined) params.max_tokens = maxTokens;\n\n const temperature = options?.temperature ?? config.temperature;\n if (temperature !== undefined) params.temperature = temperature;\n\n const response = await client.chat.completions.create(params);\n\n const content = response.choices.map((c) => c.message.content ?? '').join('');\n\n return {\n content,\n model: response.model,\n usage: normalizeUsage(response.usage),\n latencyMs: Date.now() - start,\n };\n } catch (err) {\n throw normalizeOpenAIError(err);\n }\n }, retryOpts);\n }\n\n async function* stream(\n messages: LlmMessage[],\n options?: LlmCallOptions\n ): AsyncGenerator<LlmStreamChunk> {\n const model = options?.model ?? config.model;\n const openAIMessages = buildOpenAIMessages(messages);\n\n const params: OpenAI.Chat.ChatCompletionCreateParamsStreaming = {\n model,\n messages: openAIMessages,\n stream: true,\n stream_options: { include_usage: true },\n };\n\n const maxTokens = options?.maxTokens ?? config.maxTokens;\n if (maxTokens !== undefined) params.max_tokens = maxTokens;\n\n const temperature = options?.temperature ?? config.temperature;\n if (temperature !== undefined) params.temperature = temperature;\n\n let sdkStream: Awaited<ReturnType<typeof client.chat.completions.create>>;\n\n try {\n sdkStream = await client.chat.completions.create(params);\n } catch (err) {\n throw normalizeOpenAIError(err);\n }\n\n let finalUsage: LlmUsage | undefined;\n\n try {\n for await (const chunk of sdkStream) {\n // Token chunks arrive in choices[0].delta.content\n const delta = chunk.choices[0]?.delta.content;\n if (delta !== undefined && delta !== null && delta.length > 0) {\n yield { token: delta };\n }\n\n // Usage arrives in the final chunk (stream_options.include_usage must be true)\n if (chunk.usage !== undefined && chunk.usage !== null) {\n finalUsage = normalizeUsage(chunk.usage);\n }\n }\n } catch (err) {\n throw normalizeOpenAIError(err);\n }\n\n // Yield usage on the final sentinel chunk\n if (finalUsage !== undefined) {\n yield { token: '', usage: finalUsage };\n }\n }\n\n async function structured<T>(\n messages: LlmMessage[],\n schema: { parse: (data: unknown) => T },\n options?: LlmCallOptions\n ): Promise<LlmStructuredResponse<T>> {\n // OpenAI JSON mode: response_format: { type: 'json_object' }\n // The system prompt must instruct the model to output JSON — OpenAI requires this.\n const jsonSystemInstruction: LlmMessage = {\n role: 'system',\n content:\n 'You must respond with valid JSON only. No explanations, no markdown code fences, no extra text. Your entire response must be valid JSON that can be parsed with JSON.parse().',\n };\n\n const augmentedMessages = [jsonSystemInstruction, ...messages];\n const model = options?.model ?? 
config.model;\n const openAIMessages = buildOpenAIMessages(augmentedMessages);\n const start = Date.now();\n\n const rawResponse = await withRetry(async () => {\n try {\n const params: OpenAI.Chat.ChatCompletionCreateParamsNonStreaming = {\n model,\n messages: openAIMessages,\n stream: false,\n response_format: { type: 'json_object' },\n };\n\n const maxTokens = options?.maxTokens ?? config.maxTokens;\n if (maxTokens !== undefined) params.max_tokens = maxTokens;\n\n const temperature = options?.temperature ?? config.temperature;\n if (temperature !== undefined) params.temperature = temperature;\n\n return await client.chat.completions.create(params);\n } catch (err) {\n throw normalizeOpenAIError(err);\n }\n }, retryOpts);\n\n const rawContent = rawResponse.choices[0]?.message.content ?? '';\n\n let parsed: unknown;\n try {\n parsed = JSON.parse(rawContent);\n } catch (err) {\n throw new LlmError({\n message: `OpenAI structured output: response is not valid JSON. Raw: ${rawContent.slice(0, 200)}`,\n provider: PROVIDER,\n retryable: false,\n cause: err,\n });\n }\n\n let data: T;\n try {\n data = schema.parse(parsed);\n } catch (err) {\n throw new LlmError({\n message: `OpenAI structured output: response failed schema validation. ${String(err)}`,\n provider: PROVIDER,\n retryable: false,\n cause: err,\n });\n }\n\n return {\n data,\n usage: normalizeUsage(rawResponse.usage),\n latencyMs: Date.now() - start,\n };\n }\n\n return {\n config,\n complete,\n stream,\n structured,\n };\n}\n","/**\n * Perplexity provider for @diabolicallabs/llm-client.\n *\n * Perplexity's chat completions API is OpenAI-compatible, so this provider\n * uses the OpenAI SDK pointed at Perplexity's base URL — same pattern as DeepSeek.\n *\n * API base URL: https://api.perplexity.ai\n * Docs: https://docs.perplexity.ai\n *\n * Implements: complete(), stream(), structured()\n *\n * Key Perplexity behaviors:\n * - Responses include a `citations` field: string[] of source URLs.\n * We map each URL to { url: string } and deduplicate by URL before returning.\n * - Citations are only available on non-streaming responses. The streaming API\n * does not include citations in individual chunks; consumers needing citations\n * must use complete(), not stream().\n * - Default model: 'sonar' — the lightweight search model (sonar-reasoning was\n * deprecated Dec 2025; sonar-reasoning-pro is its replacement).\n *\n * Model notes (confirmed against live API 2026-05-08):\n * - sonar — lightweight search, web-grounded\n * - sonar-pro — advanced search, more citations\n * - sonar-reasoning-pro — chain-of-thought reasoning (sonar-reasoning deprecated)\n * - sonar-deep-research — exhaustive research; supports async jobs. Perplexity's\n * docs note this model \"supports asynchronous jobs\" which\n * may mean a different response shape. 
We treat it as a\n * standard synchronous model; if the API returns an\n * incompatible shape, complete() will throw a clear LlmError\n * directing users to sonar-reasoning-pro or the async API.\n *\n * providerOptions (Wave 2 escape hatch):\n * The Perplexity API supports search-specific parameters not present on other providers.\n * Pass them via options.providerOptions:\n * search_recency_filter: 'month' | 'week' | 'day' | 'hour'\n * search_domain_filter: string[] — allowlist of domains to source from\n * Unknown fields are passed through unchanged to support future Perplexity API additions.\n *\n * structured() strategy:\n * Perplexity's response_format handling has limitations (especially on reasoning models\n * where reasoning tokens appear before JSON output). We use system-prompt JSON instruction\n * (same as DeepSeek) and strip both <think>...</think> reasoning blocks (sonar-reasoning-pro)\n * and markdown fences before JSON.parse().\n *\n * Token normalization:\n * Perplexity returns standard OpenAI-format usage: prompt_tokens / completion_tokens / total_tokens\n * → LlmUsage: inputTokens / outputTokens / totalTokens\n *\n * Error mapping:\n * APIConnectionError → retryable: true\n * APIError with status 429 / 5xx → retryable: true\n * Other APIErrors → non-retryable\n */\n\nimport OpenAI from 'openai';\nimport { normalizeThrownError, withRetry } from '../retry.js';\nimport type {\n LlmCallOptions,\n LlmClient,\n LlmClientConfig,\n LlmMessage,\n LlmResponse,\n LlmStreamChunk,\n LlmStructuredResponse,\n LlmUsage,\n} from '../types.js';\nimport { LlmError } from '../types.js';\n\nconst PROVIDER = 'perplexity';\nconst PERPLEXITY_BASE_URL = 'https://api.perplexity.ai';\n\n/**\n * Perplexity-specific fields that may appear on the OpenAI-compatible response object.\n * The SDK types don't include these; we cast and extract them safely.\n */\ninterface PerplexityResponseExtensions {\n citations?: string[];\n}\n\n/** Normalize OpenAI-format usage object to LlmUsage. */\nfunction normalizeUsage(usage: OpenAI.CompletionUsage | undefined | null): LlmUsage {\n const inputTokens = usage?.prompt_tokens ?? 0;\n const outputTokens = usage?.completion_tokens ?? 0;\n return {\n inputTokens,\n outputTokens,\n totalTokens: usage?.total_tokens ?? inputTokens + outputTokens,\n };\n}\n\n/** Convert LlmMessages to OpenAI-format chat message params. */\nfunction buildMessages(messages: LlmMessage[]): OpenAI.Chat.ChatCompletionMessageParam[] {\n return messages.map((m) => ({\n role: m.role,\n content: m.content,\n }));\n}\n\n/**\n * Extract and deduplicate citations from a Perplexity response.\n *\n * Perplexity returns citations as string[] of URLs on the response object\n * (not in the OpenAI SDK types — accessed via cast). Deduplication is by URL.\n * Returns undefined if no citations are present or the array is empty.\n */\nfunction extractCitations(\n response: OpenAI.Chat.ChatCompletion & PerplexityResponseExtensions\n): LlmResponse['citations'] {\n const rawCitations = response.citations;\n if (rawCitations === undefined || rawCitations.length === 0) return undefined;\n\n const seen = new Set<string>();\n const deduped: Array<{ url: string; title?: string }> = [];\n\n for (const url of rawCitations) {\n if (!seen.has(url)) {\n seen.add(url);\n deduped.push({ url });\n }\n }\n\n return deduped.length > 0 ? 
deduped : undefined;\n}\n\n/**\n * Extract known Perplexity search filter fields from providerOptions.\n * Unknown fields are passed through to the API params unchanged.\n *\n * Known fields at time of implementation (2026-05-08):\n * search_recency_filter: 'month' | 'week' | 'day' | 'hour'\n * search_domain_filter: string[]\n */\nfunction extractProviderOptions(\n providerOptions: Record<string, unknown> | undefined\n): Record<string, unknown> {\n if (providerOptions === undefined) return {};\n // Pass all fields through — Perplexity may add new filters; unknown fields\n // are forwarded unchanged so consumers don't need a toolkit update to use them.\n return { ...providerOptions };\n}\n\n/**\n * Normalize any Perplexity / OpenAI SDK error into LlmError.\n * Exported for direct unit testing of the normalization logic.\n *\n * Uses the same OpenAI SDK error hierarchy since the client is an OpenAI\n * instance pointed at Perplexity's API.\n */\nexport function normalizePerplexityError(err: unknown): LlmError {\n if (err instanceof LlmError) return err;\n\n // APIConnectionError is a subclass of APIError with status: undefined —\n // check it first so network failures are always retryable.\n if (typeof OpenAI.APIConnectionError === 'function' && err instanceof OpenAI.APIConnectionError) {\n return new LlmError({\n message: err.message,\n provider: PROVIDER,\n retryable: true,\n cause: err,\n });\n }\n\n // Catch all other APIError subclasses: RateLimitError (429), AuthenticationError (401), etc.\n if (typeof OpenAI.APIError === 'function' && err instanceof OpenAI.APIError) {\n const status: number | undefined = err.status;\n if (status !== undefined) {\n const retryable = [429, 502, 503, 504].includes(status) || status >= 500;\n return new LlmError({\n message: err.message,\n provider: PROVIDER,\n statusCode: status,\n retryable,\n cause: err,\n });\n }\n return new LlmError({ message: err.message, provider: PROVIDER, retryable: false, cause: err });\n }\n\n return normalizeThrownError(err, PROVIDER);\n}\n\n/** Create the Perplexity provider implementation. */\nexport function createPerplexityProvider(config: LlmClientConfig): LlmClient {\n // OpenAI SDK pointed at Perplexity's OpenAI-compatible endpoint\n const client = new OpenAI({\n apiKey: config.apiKey,\n baseURL: PERPLEXITY_BASE_URL,\n timeout: config.timeoutMs ?? 30_000,\n maxRetries: 0, // Retries managed by withRetry\n });\n\n const retryOpts = {\n maxRetries: config.maxRetries ?? 3,\n baseDelayMs: config.baseDelayMs ?? 1_000,\n provider: PROVIDER,\n };\n\n async function complete(messages: LlmMessage[], options?: LlmCallOptions): Promise<LlmResponse> {\n const model = options?.model ?? config.model;\n const chatMessages = buildMessages(messages);\n const start = Date.now();\n const extraParams = extractProviderOptions(options?.providerOptions);\n\n return withRetry(async () => {\n try {\n const params: OpenAI.Chat.ChatCompletionCreateParamsNonStreaming & Record<string, unknown> =\n {\n model,\n messages: chatMessages,\n stream: false,\n ...extraParams,\n };\n\n const maxTokens = options?.maxTokens ?? config.maxTokens;\n if (maxTokens !== undefined) params.max_tokens = maxTokens;\n\n const temperature = options?.temperature ?? 
config.temperature;\n if (temperature !== undefined) params.temperature = temperature;\n\n const rawResponse = await client.chat.completions.create(\n params as OpenAI.Chat.ChatCompletionCreateParamsNonStreaming\n );\n\n // Cast to access Perplexity-specific extensions not present in OpenAI SDK types\n const response = rawResponse as OpenAI.Chat.ChatCompletion & PerplexityResponseExtensions;\n\n const content = response.choices.map((c) => c.message.content ?? '').join('');\n\n const result: LlmResponse = {\n content,\n model: response.model,\n usage: normalizeUsage(response.usage),\n latencyMs: Date.now() - start,\n };\n\n const citations = extractCitations(response);\n if (citations !== undefined) result.citations = citations;\n\n return result;\n } catch (err) {\n throw normalizePerplexityError(err);\n }\n }, retryOpts);\n }\n\n async function* stream(\n messages: LlmMessage[],\n options?: LlmCallOptions\n ): AsyncGenerator<LlmStreamChunk> {\n const model = options?.model ?? config.model;\n const chatMessages = buildMessages(messages);\n const extraParams = extractProviderOptions(options?.providerOptions);\n\n const params: OpenAI.Chat.ChatCompletionCreateParamsStreaming & Record<string, unknown> = {\n model,\n messages: chatMessages,\n stream: true,\n stream_options: { include_usage: true },\n ...extraParams,\n };\n\n const maxTokens = options?.maxTokens ?? config.maxTokens;\n if (maxTokens !== undefined) params.max_tokens = maxTokens;\n\n const temperature = options?.temperature ?? config.temperature;\n if (temperature !== undefined) params.temperature = temperature;\n\n let sdkStream: Awaited<ReturnType<typeof client.chat.completions.create>>;\n\n try {\n sdkStream = await client.chat.completions.create(\n params as OpenAI.Chat.ChatCompletionCreateParamsStreaming\n );\n } catch (err) {\n throw normalizePerplexityError(err);\n }\n\n let finalUsage: LlmUsage | undefined;\n\n try {\n for await (const chunk of sdkStream) {\n const delta = chunk.choices[0]?.delta.content;\n if (delta !== undefined && delta !== null && delta.length > 0) {\n yield { token: delta };\n }\n\n // Usage arrives in the final chunk when stream_options.include_usage is true\n if (chunk.usage !== undefined && chunk.usage !== null) {\n finalUsage = normalizeUsage(chunk.usage);\n }\n }\n } catch (err) {\n throw normalizePerplexityError(err);\n }\n\n // Note: citations are NOT available in streaming mode. Perplexity's streaming\n // API does not include citations in the chunk stream. Use complete() if citations\n // are required for your use case.\n if (finalUsage !== undefined) {\n yield { token: '', usage: finalUsage };\n }\n }\n\n async function structured<T>(\n messages: LlmMessage[],\n schema: { parse: (data: unknown) => T },\n options?: LlmCallOptions\n ): Promise<LlmStructuredResponse<T>> {\n // Perplexity's response_format has limitations with reasoning models (reasoning tokens\n // appear before JSON output). Use system-prompt JSON instruction + fence stripping,\n // same as DeepSeek.\n const jsonSystemInstruction: LlmMessage = {\n role: 'system',\n content:\n 'You must respond with valid JSON only. No explanations, no markdown code fences, no extra text. Your entire response must be valid JSON that can be parsed with JSON.parse().',\n };\n\n const augmentedMessages = [jsonSystemInstruction, ...messages];\n const model = options?.model ?? 
config.model;\n const chatMessages = buildMessages(augmentedMessages);\n const start = Date.now();\n const extraParams = extractProviderOptions(options?.providerOptions);\n\n const rawResponse = await withRetry(async () => {\n try {\n const params: OpenAI.Chat.ChatCompletionCreateParamsNonStreaming & Record<string, unknown> =\n {\n model,\n messages: chatMessages,\n stream: false,\n ...extraParams,\n };\n\n const maxTokens = options?.maxTokens ?? config.maxTokens;\n if (maxTokens !== undefined) params.max_tokens = maxTokens;\n\n const temperature = options?.temperature ?? config.temperature;\n if (temperature !== undefined) params.temperature = temperature;\n\n return await client.chat.completions.create(\n params as OpenAI.Chat.ChatCompletionCreateParamsNonStreaming\n );\n } catch (err) {\n throw normalizePerplexityError(err);\n }\n }, retryOpts);\n\n const rawContent = rawResponse.choices[0]?.message.content ?? '';\n\n let parsed: unknown;\n try {\n // sonar-reasoning-pro emits reasoning tokens inside <think>...</think> before the JSON.\n // Strip them first, then strip any markdown fences.\n const cleaned = rawContent\n .replace(/<think>[\\s\\S]*?<\\/think>/i, '') // strip reasoning block (sonar-reasoning-pro)\n .replace(/^```(?:json)?\\s*/i, '') // strip opening fence\n .replace(/\\s*```$/, '') // strip closing fence\n .trim();\n parsed = JSON.parse(cleaned);\n } catch (err) {\n throw new LlmError({\n message: `Perplexity structured output: response is not valid JSON. Raw: ${rawContent.slice(0, 200)}`,\n provider: PROVIDER,\n retryable: false,\n cause: err,\n });\n }\n\n let data: T;\n try {\n data = schema.parse(parsed);\n } catch (err) {\n throw new LlmError({\n message: `Perplexity structured output: response failed schema validation. ${String(err)}`,\n provider: PROVIDER,\n retryable: false,\n cause: err,\n });\n }\n\n return {\n data,\n usage: normalizeUsage(rawResponse.usage),\n latencyMs: Date.now() - start,\n };\n }\n\n return {\n config,\n complete,\n stream,\n structured,\n };\n}\n","/**\n * Factory functions for LlmClient.\n *\n * createClient — dispatches to the correct provider implementation.\n * createClientFromEnv — convenience wrapper that reads API keys from env vars.\n *\n * Provider dispatch:\n * 'anthropic' → fully implemented (Week 2)\n * 'openai' → fully implemented (Week 2)\n * 'gemini' → fully implemented (Week 3)\n * 'deepseek' → fully implemented (Week 3)\n * 'perplexity' → fully implemented (Week 5) — search-grounded, citations, providerOptions\n */\n\nimport { createAnthropicProvider } from './providers/anthropic.js';\nimport { createDeepSeekProvider } from './providers/deepseek.js';\nimport { createGeminiProvider } from './providers/gemini.js';\nimport { createOpenAIProvider } from './providers/openai.js';\nimport { createPerplexityProvider } from './providers/perplexity.js';\nimport type { LlmClient, LlmClientConfig } from './types.js';\nimport { LlmError } from './types.js';\n\n/**\n * Create an LlmClient for the given provider and config.\n * Dispatches to the provider-specific implementation.\n * All five providers are fully implemented.\n */\nexport function createClient(config: LlmClientConfig): LlmClient {\n switch (config.provider) {\n case 'anthropic':\n return createAnthropicProvider(config);\n\n case 'openai':\n return createOpenAIProvider(config);\n\n case 'gemini':\n return createGeminiProvider(config);\n\n case 'deepseek':\n return createDeepSeekProvider(config);\n\n case 'perplexity':\n return createPerplexityProvider(config);\n\n default: 
{\n // TypeScript exhaustiveness check — if a new provider is added to the union\n // without a case here, this will be a compile-time error.\n const _exhaustive: never = config.provider;\n throw new LlmError({\n message: `[dlabs-toolkit] Unknown provider: ${String(_exhaustive)}`,\n provider: String(_exhaustive),\n retryable: false,\n });\n }\n }\n}\n\n/**\n * Convenience: create an LlmClient from environment variables.\n *\n * Reads API keys from the environment based on provider:\n * anthropic → ANTHROPIC_API_KEY\n * openai → OPENAI_API_KEY\n * gemini → GOOGLE_AI_API_KEY\n * deepseek → DEEPSEEK_API_KEY\n * perplexity → PERPLEXITY_API_KEY — recommended default model: 'sonar'\n *\n * Throws LlmError if the required env var is not set.\n */\nexport function createClientFromEnv(\n provider: LlmClientConfig['provider'],\n model: string,\n overrides?: Partial<Omit<LlmClientConfig, 'provider' | 'model' | 'apiKey'>>\n): LlmClient {\n const apiKey = resolveApiKey(provider);\n return createClient({ provider, model, apiKey, ...overrides });\n}\n\n/** Read the API key for a given provider from environment variables. */\nfunction resolveApiKey(provider: LlmClientConfig['provider']): string {\n const envVarMap: Record<LlmClientConfig['provider'], string> = {\n anthropic: 'ANTHROPIC_API_KEY',\n openai: 'OPENAI_API_KEY',\n gemini: 'GOOGLE_AI_API_KEY',\n deepseek: 'DEEPSEEK_API_KEY',\n perplexity: 'PERPLEXITY_API_KEY',\n };\n\n const envVar = envVarMap[provider];\n const apiKey = process.env[envVar];\n\n if (apiKey === undefined || apiKey.trim() === '') {\n throw new LlmError({\n message: `[dlabs-toolkit] ${envVar} is not set. Set this environment variable to use the ${provider} provider.`,\n provider,\n retryable: false,\n });\n }\n\n return apiKey;\n}\n"],"mappings":"[generated base64-VLQ source-map mappings omitted]","names":["PROVIDER","normalizeUsage","PROVIDER","normalizeUsage","OpenAI","PROVIDER","normalizeUsage","OpenAI","OpenAI","PROVIDER","normalizeUsage","buildMessages","OpenAI"]}
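The bundled sources above share one streaming contract across the providers: `stream()` yields `{ token }` chunks and, when the provider reports usage, a final sentinel chunk whose token is empty and whose `usage` carries the normalized totals. They also show that only `complete()` and `structured()` are wrapped in `withRetry`; `stream()` normalizes failures into `LlmError` but does not retry them. A minimal consumer-side sketch, assuming `createClientFromEnv` and `LlmError` are re-exported from the package root; the model id is a placeholder:

```typescript
import { createClientFromEnv, LlmError } from '@diabolicallabs/llm-client';

// Placeholder model id — substitute any chat model available to your account.
const client = createClientFromEnv('openai', 'gpt-4o-mini');

try {
  for await (const chunk of client.stream([
    { role: 'user', content: 'Write one sentence about exponential backoff.' },
  ])) {
    if (chunk.usage !== undefined) {
      // Final sentinel chunk: empty token plus normalized usage
      // (inputTokens / outputTokens / totalTokens).
      console.log(`\n[${chunk.usage.totalTokens} tokens]`);
    } else {
      process.stdout.write(chunk.token);
    }
  }
} catch (err) {
  if (err instanceof LlmError) {
    // Normalized error shape: provider, optional statusCode, retryable flag.
    console.error(`${err.provider} error (${err.statusCode ?? 'network'}): retryable=${err.retryable}`);
  } else {
    throw err;
  }
}
```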
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@diabolicallabs/llm-client",
- "version": "0.1.1",
+ "version": "0.2.0",
  "description": "Unified LLM API for Anthropic, OpenAI, Google, and DeepSeek. Streaming, retry/backoff, structured output, token normalization. © Diabolical Labs",
  "author": "Diana Ismail <diana@deeismail.com> (https://deeismail.com)",
  "publisher": "Diabolical Labs",