llm-models 1.0.0 → 1.2.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,7 +2,7 @@
 
  Fetch latest LLM models from [OpenRouter](https://openrouter.ai) and [models.dev](https://models.dev) APIs.
 
- Validates API responses with [Zod](https://zod.dev) schemas to detect breaking changes.
+ Validates API responses with [Zod](https://zod.dev) schemas to detect breaking changes. Provides smart wrapper functions for model discovery, comparison, cost estimation, recommendations, statistics, and provider info. Features local caching, MCP server integration, multiple output formats, and a fluent query builder API.
 
  ## Installation
 
@@ -32,60 +32,315 @@ Download the latest binary from [GitHub Releases](https://github.com/maxgfr/llm-
 
  ## CLI Usage
 
- ### OpenRouter
+ ### Smart Commands
+
+ #### Find models
+
+ Discover models across both APIs with smart filtering:
+
+ ```bash
+ # Find cheapest reasoning models
+ llm-models find -C reasoning --sort cost_input -n 10
+
+ # Find models accepting images, sorted by cost
+ llm-models find -m image --sort cost_input
+
+ # Find OpenAI models with large context
+ llm-models find -p openai --min-context 200000
+
+ # Filter by family
+ llm-models find -f gpt --sort cost_input
+
+ # Filter by status
+ llm-models find --status active -C reasoning
+
+ # Search by name
+ llm-models find -s "claude" --sort cost_input
+
+ # Sort by cost-effectiveness (context per dollar)
+ llm-models find --sort value --desc -n 10
+
+ # Count matching models
+ llm-models find -C reasoning --count
+
+ # Output IDs only (for piping)
+ llm-models find -p openai --ids-only
+
+ # Output as CSV or Markdown
+ llm-models find -p anthropic --format csv
+ llm-models find -p anthropic --format markdown
+
+ # Output as JSON
+ llm-models find -p google --json
+ ```
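The `value` sort ranks models by context per dollar. The exact formula is internal to the CLI; a minimal sketch of the idea, assuming `value = context_length / cost_input` and a hypothetical model shape:

```typescript
// Hypothetical sketch of the "value" (context per dollar) sort.
// The model shape and formula are assumptions, not the package's internals.
interface ModelLite {
  id: string;
  context_length: number;
  cost_input: number; // $ per million tokens
}

function sortByValue(models: ModelLite[]): ModelLite[] {
  const value = (m: ModelLite) =>
    m.cost_input > 0 ? m.context_length / m.cost_input : 0;
  // Descending: highest context-per-dollar first (matches --desc above)
  return [...models].sort((a, b) => value(b) - value(a));
}

const sample: ModelLite[] = [
  { id: "a", context_length: 128_000, cost_input: 2.5 },  // 51,200 ctx/$
  { id: "b", context_length: 200_000, cost_input: 1.0 },  // 200,000 ctx/$
  { id: "c", context_length: 1_000_000, cost_input: 10 }, // 100,000 ctx/$
];
console.log(sortByValue(sample).map((m) => m.id)); // → ["b", "c", "a"]
```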
+
+ #### Compare models
+
+ Side-by-side model comparison with context, output limit, costs, knowledge cutoff, and capabilities:
+
+ ```bash
+ llm-models compare openai/gpt-4o anthropic/claude-sonnet-4 google/gemini-2.0-flash
+ ```
+
+ #### Model info
+
+ Detailed information about a single model (description, costs, capabilities, parameters, tokenizer):
+
+ ```bash
+ llm-models info openai/gpt-4o
+ ```
+
+ #### Cost estimation
+
+ Estimate costs for specific token volumes:
+
+ ```bash
+ # Basic cost estimation
+ llm-models cost openai/gpt-4o anthropic/claude-sonnet-4 -i 1M -o 100K
+
+ # With daily/monthly projections
+ llm-models cost openai/gpt-4o -i 1M -o 100K --daily 1000
+
+ # Using a workload profile
+ llm-models cost openai/gpt-4o -i 1K -o 1K -P chatbot --daily 5000
+ ```
+
+ #### Cheapest models
+
+ Find the cheapest models matching your criteria:
+
+ ```bash
+ llm-models cheapest -C reasoning --min-context 100000
+ ```
+
+ #### Recommendations
+
+ Get model recommendations for a specific use case:
+
+ ```bash
+ # List available use cases
+ llm-models recommend --list
+
+ # Get recommendations
+ llm-models recommend reasoning
+ llm-models recommend code-gen --max-cost 5
+ llm-models recommend vision -n 5
+ ```
+
+ Available use cases: `code-gen`, `vision`, `cheap-chatbot`, `reasoning`, `long-context`, `open-source`, `audio`, `tool-use`.
+
+ #### Statistics
+
+ Get aggregate statistics about the LLM model landscape:
+
+ ```bash
+ llm-models stats
+ llm-models stats --json
+ ```
+
+ Shows: total models/providers, top providers, capabilities distribution, modality distribution, cost percentiles, context length stats, newest models.
+
+ #### Diff
+
+ Show changes since last cached fetch:
 
  ```bash
- # List all models
- llm-models openrouter
+ llm-models diff
+ ```
 
- # Filter by provider
- llm-models openrouter --provider google
+ Shows: new models added, models removed, price changes, status changes.
 
- # Search by name or ID
- llm-models openrouter --search "claude"
+ #### Provider info
+
+ Get provider details (SDK package, env vars, API endpoint, models):
+
+ ```bash
+ # List all providers
+ llm-models provider --list
 
- # Show model count
- llm-models openrouter --count
+ # Get details for a specific provider
+ llm-models provider openai
  ```
 
- ### models.dev
+ ### Raw API Commands
+
+ #### OpenRouter
+
+ ```bash
+ llm-models openrouter                    # All models (JSON)
+ llm-models openrouter --provider google  # Filter by provider
+ llm-models openrouter --search "claude"  # Search
+ llm-models openrouter --count            # Count only
+ ```
+
+ #### models.dev
 
  ```bash
- # List all providers and models
- llm-models models-dev
+ llm-models models-dev                    # All providers (JSON)
+ llm-models models-dev --provider openai  # Filter by provider
+ llm-models models-dev --count            # Count only
+ ```
 
- # Filter by provider
- llm-models models-dev --provider openai
+ ### Utility Commands
 
- # Search models
- llm-models models-dev --search "gpt-4"
+ #### Cache management
 
- # Show count
- llm-models models-dev --count
+ ```bash
+ llm-models cache        # Show cache info
+ llm-models cache clear  # Clear cache
+ ```
+
+ API responses are cached for 1 hour in `~/.cache/llm-models/`. Use `--no-cache` to bypass.
+
+ #### Configuration
+
+ ```bash
+ llm-models config           # Show current config
+ llm-models config init      # Create default config file
+ llm-models config path      # Show config file path
+ llm-models config profiles  # List workload profiles
  ```
 
- ### Options
+ Config file locations: `.llm-modelsrc` (project) or `~/.config/llm-models/config.json` (global).
+
+ #### Shell completions
+
+ ```bash
+ # Bash
+ llm-models completion bash >> ~/.bashrc
+
+ # Zsh
+ llm-models completion zsh >> ~/.zshrc
+
+ # Fish
+ llm-models completion fish > ~/.config/fish/completions/llm-models.fish
+ ```
+
+ #### MCP Server
+
+ Start an MCP (Model Context Protocol) server for AI agent integration:
+
+ ```bash
+ llm-models mcp
+ ```
+
+ Exposes 8 tools: `find_models`, `compare_models`, `estimate_cost`, `cheapest_models`, `get_provider`, `list_providers`, `get_stats`, `recommend_models`.
+
+ ### Global Options
 
  | Option | Description |
  |--------|-------------|
- | `-p, --provider <id>` | Filter by provider prefix/ID |
- | `-s, --search <term>` | Search models by name or ID |
- | `-c, --count` | Show count only |
+ | `--no-cache` | Disable cache for this request |
+ | `--quiet` | Suppress informational messages |
+ | `--verbose` | Show debug info (timing, cache status) |
  | `-V, --version` | Show version |
  | `-h, --help` | Show help |
 
+ ### Find Options
+
+ | Option | Description |
+ |--------|-------------|
+ | `-p, --provider <id>` | Filter by provider |
+ | `-C, --capability <cap>` | Filter: reasoning, tool_call, structured_output, open_weights, attachment |
+ | `-m, --modality <mod>` | Filter by input modality (image, audio, video, pdf) |
+ | `--max-cost <n>` | Max input cost ($/million tokens) |
+ | `--max-cost-output <n>` | Max output cost ($/million tokens) |
+ | `--min-context <n>` | Min context window |
+ | `-s, --search <term>` | Search by name or ID |
+ | `--status <status>` | Filter: active, beta, deprecated |
+ | `-f, --family <name>` | Filter by model family (e.g. gpt, claude, gemini) |
+ | `--sort <field>` | Sort: cost_input, cost_output, context_length, release_date, name, knowledge_cutoff, value |
+ | `--desc` | Sort descending |
+ | `-n, --limit <n>` | Max results (default: 20) |
+ | `-c, --count` | Show model count only |
+ | `--ids-only` | Output model IDs only, one per line |
+ | `--json` | Raw JSON output |
+ | `--format <fmt>` | Output format: table, json, csv, markdown |
+
+ ### Cost Options
+
+ | Option | Description |
+ |--------|-------------|
+ | `-i, --input <n>` | Input tokens (supports K/M suffix) |
+ | `-o, --output <n>` | Output tokens (supports K/M suffix) |
+ | `--daily <n>` | Number of daily requests for cost projection |
+ | `--monthly <n>` | Number of monthly requests for cost projection |
+ | `-P, --profile <name>` | Use workload profile (chatbot, code-gen, rag, summarization, translation) |
+
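The K/M token suffixes accepted by `-i`/`-o` can be parsed in a few lines. A hypothetical sketch, assuming K = 1,000 and M = 1,000,000; the CLI's own parser may differ:

```typescript
// Hypothetical token-count parser for values like "1M", "100K", "500".
// The K/M multipliers are assumptions about the CLI's behavior.
function parseTokens(input: string): number {
  const match = /^(\d+(?:\.\d+)?)([kKmM]?)$/.exec(input.trim());
  if (!match) throw new Error(`Invalid token count: ${input}`);
  const mult: Record<string, number> = { "": 1, k: 1_000, m: 1_000_000 };
  return Number(match[1]) * mult[match[2].toLowerCase()];
}

console.log(parseTokens("1M"));   // → 1000000
console.log(parseTokens("100K")); // → 100000
console.log(parseTokens("500"));  // → 500
```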
  ## Library Usage
 
+ ```typescript
+ import {
+   fetchUnifiedModels,
+   findModels,
+   compareModels,
+   getProvider,
+   listProviders,
+   estimateCost,
+   cheapestModels,
+   getStats,
+   recommendModels,
+   listUseCases,
+   diffModels,
+   query,
+   setCacheEnabled,
+   clearCache,
+ } from "llm-models";
+
+ // Get all models (unified from both APIs)
+ const models = await fetchUnifiedModels();
+
+ // Find cheapest reasoning models
+ const cheap = await findModels({
+   filter: { capability: "reasoning" },
+   sort: "cost_input",
+   limit: 5,
+ });
+
+ // Fluent query builder
+ const results = await query()
+   .provider("openai")
+   .capability("reasoning")
+   .maxCost(5)
+   .sortBy("cost_input")
+   .limit(10)
+   .execute();
+
+ // Compare models side by side
+ const comparison = await compareModels(["openai/gpt-4o", "anthropic/claude-sonnet-4"]);
+
+ // Get provider info (SDK, env vars, docs, models)
+ const openai = await getProvider("openai");
+ console.log(openai.npm); // "@ai-sdk/openai"
+
+ // Estimate costs
+ const costs = await estimateCost({
+   modelIds: ["openai/gpt-4o"],
+   inputTokens: 1_000_000,
+   outputTokens: 100_000,
+ });
+
+ // Get recommendations for a use case
+ const recs = await recommendModels("code-gen", { maxCost: 5, limit: 5 });
+
+ // Get market statistics
+ const stats = await getStats();
+
+ // Detect changes since last fetch
+ const diff = await diffModels();
+
+ // Cache control
+ setCacheEnabled(false); // Disable caching
+ clearCache();           // Clear cache directory
+ ```
+
+ ### Raw API Access
+
  ```typescript
  import { fetchOpenRouterModels, fetchModelsDevModels } from "llm-models";
 
- // Fetch from OpenRouter
  const openrouter = await fetchOpenRouterModels();
- console.log(openrouter.data.length, "models");
-
- // Fetch from models.dev
  const modelsDev = await fetchModelsDevModels();
- console.log(Object.keys(modelsDev).length, "providers");
  ```
 
  ### Zod Schemas
@@ -93,12 +348,7 @@ console.log(Object.keys(modelsDev).length, "providers");
  All schemas are exported for custom validation:
 
  ```typescript
- import { OpenRouterResponseSchema, ModelsDevResponseSchema } from "llm-models";
-
- const result = OpenRouterResponseSchema.safeParse(myData);
- if (!result.success) {
-   console.error(result.error.issues);
- }
+ import { UnifiedModelSchema, OpenRouterResponseSchema, ModelsDevResponseSchema } from "llm-models";
  ```
 
  ### Types
@@ -106,7 +356,29 @@ if (!result.success) {
  All types are inferred from Zod schemas:
 
  ```typescript
- import type { OpenRouterModel, ModelsDevModel, ModelsDevProvider } from "llm-models";
+ import type {
+   UnifiedModel,
+   NormalizedCost,
+   Capabilities,
+   ProviderInfo,
+   ModelFilter,
+   ModelSortField,
+   CostEstimate,
+   ModelComparison,
+ } from "llm-models";
+ ```
+
+ ## GitHub Action
+
+ Use llm-models in CI/CD to check model costs and status:
+
+ ```yaml
+ - uses: maxgfr/llm-models@v1
+   with:
+     command: cost
+     args: "openai/gpt-4o anthropic/claude-sonnet-4 -i 1M -o 100K"
+     max-budget: "10"
+     fail-on-deprecated: "true"
  ```
 
  ## Data Sources
@@ -0,0 +1,14 @@
+ export declare function setCacheEnabled(enabled: boolean): void;
+ export declare function isCacheEnabled(): boolean;
+ export declare function readCache<T>(key: string, ttlMs?: number): T | null;
+ export declare function writeCache<T>(key: string, data: T): void;
+ export declare function readSnapshot<T>(key: string): {
+   timestamp: number;
+   data: T;
+ } | null;
+ export declare function readFallbackCache<T>(key: string): T | null;
+ export declare function clearCache(): void;
+ export declare function getCacheInfo(): {
+   files: number;
+   sizeBytes: number;
+ };
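These declarations suggest `readCache` treats an entry older than `ttlMs` as a miss. A minimal in-memory sketch of that logic; the real implementation persists to `~/.cache/llm-models/`, and the `Map`-backed store here is an illustration only:

```typescript
// Minimal in-memory sketch of a TTL-checked cache read/write pair.
// The real package writes to disk; this only illustrates the TTL logic.
interface Snapshot<T> { timestamp: number; data: T; }
const store = new Map<string, Snapshot<unknown>>();

function writeCache<T>(key: string, data: T): void {
  store.set(key, { timestamp: Date.now(), data });
}

function readCache<T>(key: string, ttlMs = 60 * 60 * 1000): T | null {
  const snap = store.get(key) as Snapshot<T> | undefined;
  if (!snap) return null;
  // Expired entries behave as cache misses
  return Date.now() - snap.timestamp < ttlMs ? snap.data : null;
}

writeCache("models", [{ id: "openai/gpt-4o" }]);
console.log(readCache("models") !== null); // → true (fresh entry)
console.log(readCache("models", 0));       // → null (TTL of 0 ms)
```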
@@ -0,0 +1,10 @@
+ export declare const bold: (text: string) => string;
+ export declare const dim: (text: string) => string;
+ export declare const green: (text: string) => string;
+ export declare const yellow: (text: string) => string;
+ export declare const red: (text: string) => string;
+ export declare const cyan: (text: string) => string;
+ export declare const gray: (text: string) => string;
+ export declare function colorizeCost(cost: number | undefined | null, formatted: string): string;
+ export declare function stripAnsi(str: string): number;
+ export declare function isColorEnabled(): boolean;
@@ -0,0 +1,3 @@
+ export declare function generateBashCompletion(): string;
+ export declare function generateZshCompletion(): string;
+ export declare function generateFishCompletion(): string;
@@ -0,0 +1,22 @@
+ export interface WorkloadProfile {
+   input: string;
+   output: string;
+ }
+ export interface LlmModelsConfig {
+   cache?: {
+     enabled?: boolean;
+     ttl?: number;
+   };
+   defaults?: {
+     format?: "table" | "json" | "csv" | "markdown";
+     limit?: number;
+     sort?: string;
+     desc?: boolean;
+   };
+   profiles?: Record<string, WorkloadProfile>;
+ }
+ export declare function loadConfig(): LlmModelsConfig;
+ export declare function getProfile(name: string): WorkloadProfile | undefined;
+ export declare function listProfiles(): Record<string, WorkloadProfile>;
+ export declare function initConfig(): void;
+ export declare function getConfigPath(): string;
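Matching the `LlmModelsConfig` interface above, a `.llm-modelsrc` might look like this; values are illustrative, and `ttl` is assumed to be milliseconds:

```json
{
  "cache": { "enabled": true, "ttl": 3600000 },
  "defaults": { "format": "table", "limit": 20, "sort": "cost_input", "desc": false },
  "profiles": {
    "chatbot": { "input": "1K", "output": "500" }
  }
}
```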
@@ -0,0 +1,11 @@
+ export declare function formatNumber(n: number): string;
+ export declare function formatCost(cost: number | undefined | null): string;
+ export declare function formatCostRaw(cost: number | undefined | null): string;
+ export declare function formatContext(n: number): string;
+ export declare function parseTokenCount(input: string): number;
+ export declare function formatCapabilities(caps: Record<string, boolean | undefined>): string;
+ export declare function printTable(headers: string[], rows: string[][]): void;
+ export type OutputFormat = "table" | "json" | "csv" | "markdown";
+ export declare function formatAsCSV(headers: string[], rows: string[][]): string;
+ export declare function formatAsMarkdown(headers: string[], rows: string[][]): string;
+ export declare function outputFormatted(format: OutputFormat, headers: string[], rows: string[][], jsonData?: unknown): void;
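`formatAsCSV` presumably joins headers and rows into comma-separated lines. A naive sketch without the quoting or escaping a real CSV writer needs; `toCSV` is a hypothetical name, not the package's implementation:

```typescript
// Naive CSV formatter sketch: one header line plus one line per row.
// Fields containing commas or quotes would need escaping; omitted for brevity.
function toCSV(headers: string[], rows: string[][]): string {
  return [headers, ...rows].map((cols) => cols.join(",")).join("\n");
}

console.log(toCSV(["id", "cost"], [["openai/gpt-4o", "2.5"]]));
// → id,cost
// → openai/gpt-4o,2.5
```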
@@ -0,0 +1,2 @@
+ import type { ModelComparison } from "../types";
+ export declare function compareModels(modelIds: string[]): Promise<ModelComparison>;
@@ -0,0 +1,11 @@
+ import type { Capabilities, CostEstimate, UnifiedModel } from "../types";
+ export declare function estimateCost(options: {
+   modelIds: string[];
+   inputTokens: number;
+   outputTokens: number;
+ }): Promise<CostEstimate[]>;
+ export declare function cheapestModels(options?: {
+   minContext?: number;
+   capability?: keyof Capabilities;
+   limit?: number;
+ }): Promise<UnifiedModel[]>;
@@ -0,0 +1,18 @@
+ import type { UnifiedModel } from "../types";
+ export interface ModelChange {
+   model_id: string;
+   field: string;
+   old_value: string | number | null;
+   new_value: string | number | null;
+ }
+ export interface ModelDiff {
+   timestamp_previous: number | null;
+   timestamp_current: number;
+   added: UnifiedModel[];
+   removed: string[];
+   price_changes: ModelChange[];
+   status_changes: ModelChange[];
+   total_before: number;
+   total_after: number;
+ }
+ export declare function diffModels(): Promise<ModelDiff>;
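The `ModelDiff` shape implies a set comparison between the previous snapshot and the current fetch. A sketch of the added/removed part under that assumption; `diffIds` is hypothetical, and price/status change detection would follow the same pattern per field:

```typescript
// Sketch: compute added and removed model IDs between two snapshots.
type Model = { model_id: string };

function diffIds(before: Model[], after: Model[]) {
  const prev = new Set(before.map((m) => m.model_id));
  const next = new Set(after.map((m) => m.model_id));
  return {
    // New models are present now but not in the previous snapshot
    added: after.filter((m) => !prev.has(m.model_id)),
    // Removed models are previous IDs no longer present
    removed: before.map((m) => m.model_id).filter((id) => !next.has(id)),
  };
}

const { added, removed } = diffIds(
  [{ model_id: "a" }, { model_id: "b" }],
  [{ model_id: "b" }, { model_id: "c" }],
);
console.log(added.map((m) => m.model_id)); // → ["c"]
console.log(removed);                      // → ["a"]
```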
@@ -0,0 +1,9 @@
+ export { compareModels } from "./compare";
+ export { cheapestModels, estimateCost } from "./cost";
+ export { diffModels } from "./diff";
+ export { fetchUnifiedData, fetchUnifiedModels } from "./normalize";
+ export { getProvider, listProviders } from "./provider";
+ export { QueryBuilder, query } from "./query";
+ export { listUseCases, recommendModels } from "./recommend";
+ export { filterModels, findModels, sortModels } from "./search";
+ export { getStats } from "./stats";
@@ -0,0 +1,9 @@
+ import type { ModelsDevModel, ModelsDevResponse, OpenRouterModel, UnifiedModel } from "../types";
+ export declare function openRouterPriceToPerMillion(perToken: string): number;
+ export declare function normalizeOpenRouterModel(model: OpenRouterModel): UnifiedModel;
+ export declare function normalizeModelsDevModel(model: ModelsDevModel, providerId: string): UnifiedModel;
+ export declare function fetchUnifiedData(): Promise<{
+   models: UnifiedModel[];
+   modelsDevData: ModelsDevResponse;
+ }>;
+ export declare function fetchUnifiedModels(): Promise<UnifiedModel[]>;
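OpenRouter reports prices as per-token strings while the unified schema appears to use $/million tokens, so `openRouterPriceToPerMillion` presumably scales by 10^6. A sketch under that assumption; `perTokenToPerMillion` is a hypothetical stand-in:

```typescript
// Sketch: convert an OpenRouter per-token price string to $/million tokens.
function perTokenToPerMillion(perToken: string): number {
  return Number(perToken) * 1_000_000;
}

// "0.0000025" $/token ≈ 2.5 $/M tokens (floating point, so compare with tolerance)
const price = perTokenToPerMillion("0.0000025");
console.log(Math.abs(price - 2.5) < 1e-9); // → true
```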
@@ -0,0 +1,3 @@
+ import type { ProviderInfo } from "../types";
+ export declare function getProvider(providerId: string): Promise<ProviderInfo>;
+ export declare function listProviders(): Promise<ProviderInfo[]>;
@@ -0,0 +1,20 @@
+ import type { Capabilities, ModelSortField, UnifiedModel } from "../types";
+ export declare class QueryBuilder {
+   private filter;
+   private sortField?;
+   private sortDesc;
+   private maxResults?;
+   provider(id: string): this;
+   capability(cap: keyof Capabilities): this;
+   modality(mod: string): this;
+   maxCost(input: number): this;
+   maxCostOutput(output: number): this;
+   minContext(ctx: number): this;
+   search(term: string): this;
+   status(s: "active" | "deprecated" | "beta"): this;
+   family(f: string): this;
+   sortBy(field: ModelSortField, descending?: boolean): this;
+   limit(n: number): this;
+   execute(): Promise<UnifiedModel[]>;
+ }
+ export declare function query(): QueryBuilder;
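The builder accumulates filters and applies them on `execute()`. A minimal synchronous sketch of the fluent pattern over an in-memory array; `MiniQuery` and the model shape are hypothetical, and the real `execute()` fetches unified models and returns a promise:

```typescript
// Minimal fluent-builder sketch: chainable filters applied on execute().
interface M { id: string; provider: string; cost_input: number; }

class MiniQuery {
  private preds: Array<(m: M) => boolean> = [];
  private max?: number;
  provider(id: string): this { this.preds.push((m) => m.provider === id); return this; }
  maxCost(n: number): this { this.preds.push((m) => m.cost_input <= n); return this; }
  limit(n: number): this { this.max = n; return this; }
  execute(models: M[]): M[] {
    const out = models.filter((m) => this.preds.every((p) => p(m)));
    return this.max === undefined ? out : out.slice(0, this.max);
  }
}

const models: M[] = [
  { id: "openai/gpt-4o", provider: "openai", cost_input: 2.5 },
  { id: "openai/gpt-4o-mini", provider: "openai", cost_input: 0.15 },
  { id: "anthropic/claude-sonnet-4", provider: "anthropic", cost_input: 3 },
];
const out = new MiniQuery().provider("openai").maxCost(1).execute(models);
console.log(out.map((m) => m.id)); // → ["openai/gpt-4o-mini"]
```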
@@ -0,0 +1,16 @@
+ import type { ModelFilter, ModelSortField, UnifiedModel } from "../types";
+ export interface UseCasePreset {
+   description: string;
+   filter: ModelFilter;
+   sort: ModelSortField;
+   descending?: boolean;
+ }
+ export declare function listUseCases(): Record<string, UseCasePreset>;
+ export declare function recommendModels(useCase: string, options?: {
+   limit?: number;
+   maxCost?: number;
+   minContext?: number;
+ }): Promise<{
+   preset: UseCasePreset;
+   models: UnifiedModel[];
+ }>;
@@ -0,0 +1,9 @@
+ import type { ModelFilter, ModelSortField, UnifiedModel } from "../types";
+ export declare function filterModels(models: UnifiedModel[], filter: ModelFilter): UnifiedModel[];
+ export declare function sortModels(models: UnifiedModel[], field: ModelSortField, descending?: boolean): UnifiedModel[];
+ export declare function findModels(options: {
+   filter?: ModelFilter;
+   sort?: ModelSortField;
+   descending?: boolean;
+   limit?: number;
+ }): Promise<UnifiedModel[]>;
@@ -0,0 +1,38 @@
+ export interface ModelStats {
+   total_models: number;
+   total_providers: number;
+   by_provider: {
+     provider: string;
+     count: number;
+   }[];
+   by_capability: Record<string, number>;
+   by_modality: Record<string, number>;
+   cost: {
+     input: {
+       min: number;
+       max: number;
+       median: number;
+       p25: number;
+       p75: number;
+       p90: number;
+     };
+     output: {
+       min: number;
+       max: number;
+       median: number;
+       p25: number;
+       p75: number;
+       p90: number;
+     };
+   } | null;
+   newest_models: {
+     id: string;
+     release_date: string;
+   }[];
+   context_length: {
+     min: number;
+     max: number;
+     median: number;
+   };
+ }
+ export declare function getStats(): Promise<ModelStats>;
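The p25/median/p75/p90 fields can be computed with a simple nearest-rank percentile; whether the package interpolates between ranks is not visible from the declaration. A sketch using nearest rank:

```typescript
// Nearest-rank percentile sketch (no interpolation between values).
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const costs = [0.1, 0.5, 1, 2.5, 10]; // $ per million tokens (sample data)
console.log(percentile(costs, 50)); // → 1
console.log(percentile(costs, 90)); // → 10
```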
package/build/index.d.ts CHANGED
@@ -1,6 +1,10 @@
  #!/usr/bin/env node
- export { fetchOpenRouterModels } from "./clients/openrouter";
+ export { clearCache, setCacheEnabled } from "./cache";
  export { fetchModelsDevModels } from "./clients/models-dev";
- export { OpenRouterModelSchema, OpenRouterResponseSchema, OpenRouterArchitectureSchema, OpenRouterPricingSchema, OpenRouterTopProviderSchema, OpenRouterDefaultParametersSchema, OpenRouterLinksSchema, } from "./schemas/openrouter";
- export { ModelsDevModelSchema, ModelsDevProviderSchema, ModelsDevResponseSchema, ModelsDevCostSchema, ModelsDevLimitSchema, ModelsDevModalitiesSchema, } from "./schemas/models-dev";
- export type { OpenRouterModel, OpenRouterResponse, ModelsDevModel, ModelsDevProvider, ModelsDevResponse, } from "./types";
+ export { fetchOpenRouterModels } from "./clients/openrouter";
+ export { getProfile, listProfiles, loadConfig } from "./config";
+ export { cheapestModels, compareModels, diffModels, estimateCost, fetchUnifiedData, fetchUnifiedModels, filterModels, findModels, getProvider, getStats, listProviders, listUseCases, QueryBuilder, query, recommendModels, sortModels, } from "./functions/index";
+ export { CapabilitiesSchema, CostEstimateSchema, ModelComparisonSchema, ModelFilterSchema, ModelSortFieldSchema, NormalizedCostSchema, ProviderInfoSchema, UnifiedModelSchema, } from "./schemas/functions";
+ export { ModelsDevCostSchema, ModelsDevLimitSchema, ModelsDevModalitiesSchema, ModelsDevModelSchema, ModelsDevProviderSchema, ModelsDevResponseSchema, } from "./schemas/models-dev";
+ export { OpenRouterArchitectureSchema, OpenRouterDefaultParametersSchema, OpenRouterLinksSchema, OpenRouterModelSchema, OpenRouterPricingSchema, OpenRouterResponseSchema, OpenRouterTopProviderSchema, } from "./schemas/openrouter";
+ export type { Capabilities, CostEstimate, ModelComparison, ModelFilter, ModelSortField, ModelsDevModel, ModelsDevProvider, ModelsDevResponse, NormalizedCost, OpenRouterModel, OpenRouterResponse, ProviderInfo, UnifiedModel, } from "./types";