@funkai/models 0.3.2 → 0.3.3

@@ -0,0 +1,302 @@
# Model Catalog

The model catalog is an auto-generated, readonly collection of `ModelDefinition` objects sourced from [models.dev](https://models.dev). It provides lookup functions, type-safe IDs with autocomplete, and per-provider subpath exports.

## Architecture

```mermaid
%%{init: {
  'theme': 'base',
  'themeVariables': {
    'primaryColor': '#313244',
    'primaryTextColor': '#cdd6f4',
    'primaryBorderColor': '#6c7086',
    'lineColor': '#89b4fa',
    'secondaryColor': '#45475a',
    'tertiaryColor': '#1e1e2e',
    'background': '#1e1e2e',
    'mainBkg': '#313244',
    'clusterBkg': '#1e1e2e',
    'clusterBorder': '#45475a'
  },
  'flowchart': { 'curve': 'basis', 'padding': 15 }
}}%%

flowchart LR
  source["models.dev API"]:::external

  subgraph generation [" "]
    script["generate:models script"]:::core
    providers["Per-provider .ts files"]:::core
  end

  subgraph catalog [" "]
    MODELS["MODELS constant"]:::core
    modelFn["model(id)"]:::core
    modelsFn["models(filter?)"]:::core
  end

  source --> script
  script --> providers
  providers --> MODELS
  MODELS --> modelFn
  MODELS --> modelsFn

  classDef external fill:#313244,stroke:#f5c2e7,stroke-width:2px,color:#cdd6f4
  classDef core fill:#313244,stroke:#89b4fa,stroke-width:2px,color:#cdd6f4

  style generation fill:#181825,stroke:#fab387,stroke-width:2px
  style catalog fill:#181825,stroke:#89b4fa,stroke-width:2px
```

## ModelDefinition

Each model has the following fields:

| Field | Type | Description |
| --------------- | ------------------- | ---------------------------------------------- |
| `id` | `string` | Provider-native identifier (e.g. `"gpt-4.1"`) |
| `name` | `string` | Human-readable display name |
| `provider` | `string` | Provider slug (e.g. `"openai"`) |
| `family` | `string` | Model family (e.g. `"gpt"`, `"claude-sonnet"`) |
| `pricing` | `ModelPricing` | Per-token pricing rates in USD |
| `contextWindow` | `number` | Maximum context window in tokens |
| `maxOutput` | `number` | Maximum output tokens |
| `modalities` | `ModelModalities` | Supported input/output modalities |
| `capabilities` | `ModelCapabilities` | Boolean capability flags |

### ModelPricing

| Field | Type | Description |
| ------------ | --------------------- | ----------------------------------- |
| `input` | `number` | Cost per input token |
| `output` | `number` | Cost per output token |
| `cacheRead` | `number \| undefined` | Cost per cached input token (read) |
| `cacheWrite` | `number \| undefined` | Cost per cached input token (write) |

### ModelCapabilities

| Field | Type | Description |
| ------------------ | --------- | -------------------------------- |
| `reasoning` | `boolean` | Supports chain-of-thought |
| `toolCall` | `boolean` | Supports tool (function) calling |
| `attachment` | `boolean` | Supports file/image attachments |
| `structuredOutput` | `boolean` | Supports structured JSON output |

### ModelModalities

| Field | Type | Description |
| -------- | ------------------- | ---------------------------------------------------- |
| `input` | `readonly string[]` | Accepted input modalities (e.g. `"text"`, `"image"`) |
| `output` | `readonly string[]` | Produced output modalities |

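Putting the tables together, a single catalog entry looks like this (a hand-written sketch with placeholder values, not actual catalog data):

```typescript
// Illustrative ModelDefinition-shaped literal. Field names follow the tables
// above; every value is a made-up placeholder, not real pricing.
const exampleModel = {
  id: "example-model-1",
  name: "Example Model 1",
  provider: "example-provider",
  family: "example",
  pricing: {
    input: 0.000002,   // per-token rate, i.e. $2 per million input tokens
    output: 0.000008,  // $8 per million output tokens
    cacheRead: 0.0000005,
    cacheWrite: undefined,
  },
  contextWindow: 128_000,
  maxOutput: 16_000,
  modalities: {
    input: ["text", "image"],
    output: ["text"],
  },
  capabilities: {
    reasoning: true,
    toolCall: true,
    attachment: false,
    structuredOutput: true,
  },
} as const;
```

Note that pricing values are per token, matching the `ModelPricing` table.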
## Lookup API

### Look Up a Single Model

`model(id)` returns the matching `ModelDefinition` or `null`:

```ts
import { model } from "@funkai/models";

const m = model("openai/gpt-4.1");
if (m) {
  console.log(m.name);
  console.log(m.pricing.input);
  console.log(m.capabilities.reasoning);
}
```

### Get All Models

`models()` returns the full catalog. Pass a predicate to filter:

```ts
import { models } from "@funkai/models";

const all = models();
const withTools = models((m) => m.capabilities.toolCall);
```

### Access the Raw Catalog

`MODELS` is the complete readonly array, useful when you need direct iteration:

```ts
import { MODELS } from "@funkai/models";

const providers = new Set(MODELS.map((m) => m.provider));
```

## ModelId Type

`ModelId` provides autocomplete for known model IDs while accepting arbitrary strings for new or custom models:

```ts
import type { ModelId } from "@funkai/models";

const id: ModelId = "openai/gpt-4.1";
```

## Filtering Patterns

`models()` accepts an optional predicate function `(m: ModelDefinition) => boolean`. When provided, only models where the predicate returns `true` are included.

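As a mental model, the predicate behavior can be sketched over a stubbed two-entry catalog (hypothetical stand-ins, not the generated `MODELS` data):

```typescript
// Minimal stand-in for the catalog entries used by the filtering examples.
type StubEntry = {
  id: string;
  provider: string;
  capabilities: { toolCall: boolean };
};

const STUB_MODELS: readonly StubEntry[] = [
  { id: "a-model", provider: "openai", capabilities: { toolCall: true } },
  { id: "b-model", provider: "anthropic", capabilities: { toolCall: false } },
];

// No predicate: return everything. With a predicate: keep only matches.
function stubModels(filter?: (m: StubEntry) => boolean): readonly StubEntry[] {
  return filter ? STUB_MODELS.filter(filter) : STUB_MODELS;
}
```

Here `stubModels((m) => m.capabilities.toolCall)` keeps only the first entry, mirroring how the real `models()` applies its predicate.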
### Filter by Capability

```ts
const reasoning = models((m) => m.capabilities.reasoning);
const withTools = models((m) => m.capabilities.toolCall);
const structured = models((m) => m.capabilities.structuredOutput);
```

### Filter by Provider

```ts
const openai = models((m) => m.provider === "openai");
const anthropic = models((m) => m.provider === "anthropic");
```

### Filter by Modality

```ts
const vision = models((m) => m.modalities.input.includes("image"));
const audio = models((m) => m.modalities.input.includes("audio"));
const multimodal = models((m) => m.modalities.input.length > 1);
```

### Filter by Context Window

```ts
const largeContext = models((m) => m.contextWindow >= 128_000);
const longOutput = models((m) => m.maxOutput >= 16_000);
```

### Filter by Pricing

```ts
// Rates are per token, so 0.000001 means under $1 per million input tokens.
const cheapInput = models((m) => m.pricing.input < 0.000001);
const withCache = models((m) => m.pricing.cacheRead != null);
```

### Filter by Family

```ts
const gpt = models((m) => m.family === "gpt");
const claude = models((m) => m.family.startsWith("claude"));
```

### Combine Multiple Conditions

```ts
const ideal = models(
  (m) =>
    m.capabilities.reasoning &&
    m.capabilities.toolCall &&
    m.contextWindow >= 128_000 &&
    m.pricing.input < 0.00001,
);
```

### Sort by Price

```ts
const cheapest = models((m) => m.capabilities.reasoning).toSorted(
  (a, b) => a.pricing.input - b.pricing.input,
);

const pick = cheapest[0];
```

### Extract Unique Values

```ts
const providers = [...new Set(models().map((m) => m.provider))];
const families = [...new Set(models().map((m) => m.family))];
```

### Per-Provider Filtering

Use subpath exports for provider-scoped operations:

```ts
import { openAIModels } from "@funkai/models/openai";

const reasoningGpt = openAIModels.filter((m) => m.capabilities.reasoning);
```

## Supported Providers

The model catalog includes models from 21 providers. Each provider has a dedicated subpath export and a prefix used in model IDs.

| Provider | Prefix | Subpath Import |
| -------------- | ---------------- | ------------------------------- |
| OpenAI | `openai` | `@funkai/models/openai` |
| Anthropic | `anthropic` | `@funkai/models/anthropic` |
| Google | `google` | `@funkai/models/google` |
| Google Vertex | `google-vertex` | `@funkai/models/google-vertex` |
| Mistral | `mistral` | `@funkai/models/mistral` |
| Amazon Bedrock | `amazon-bedrock` | `@funkai/models/amazon-bedrock` |
| Groq | `groq` | `@funkai/models/groq` |
| DeepSeek | `deepseek` | `@funkai/models/deepseek` |
| xAI | `xai` | `@funkai/models/xai` |
| Cohere | `cohere` | `@funkai/models/cohere` |
| Fireworks AI | `fireworks-ai` | `@funkai/models/fireworks-ai` |
| Together AI | `togetherai` | `@funkai/models/togetherai` |
| DeepInfra | `deepinfra` | `@funkai/models/deepinfra` |
| Cerebras | `cerebras` | `@funkai/models/cerebras` |
| Perplexity | `perplexity` | `@funkai/models/perplexity` |
| OpenRouter | `openrouter` | `@funkai/models/openrouter` |
| Llama | `llama` | `@funkai/models/llama` |
| Alibaba | `alibaba` | `@funkai/models/alibaba` |
| NVIDIA | `nvidia` | `@funkai/models/nvidia` |
| Hugging Face | `huggingface` | `@funkai/models/huggingface` |
| Inception | `inception` | `@funkai/models/inception` |

## Per-Provider Subpath Exports

Each provider subpath exports three members following a consistent naming pattern:

| Export | Type | Description |
| ------------------- | ---------- | ------------------------------------------------ |
| `<provider>Models` | `const` | Readonly array of `ModelDefinition` for provider |
| `<provider>Model` | `function` | Look up a model by ID, returns `null` if missing |
| `<Provider>ModelId` | `type` | Union type of known model IDs for the provider |

```ts
import { anthropicModels, anthropicModel } from "@funkai/models/anthropic";
import type { AnthropicModelId } from "@funkai/models/anthropic";

const id: AnthropicModelId = "claude-sonnet-4-20250514";

const m = anthropicModel(id);
if (m) {
  console.log(m.name, m.pricing.input);
}

const withReasoning = anthropicModels.filter((m) => m.capabilities.reasoning);
```

Model IDs in the catalog use the format `<provider-native-id>` (e.g. `"gpt-4.1"`, `"claude-sonnet-4-20250514"`). When used with `createProviderRegistry()`, prefix them with the provider slug: `"openai/gpt-4.1"`, `"anthropic/claude-sonnet-4-20250514"`.

## Updating the Catalog

Regenerate the catalog from models.dev:

```bash
pnpm --filter=@funkai/models generate:models
```

Force-regenerate (ignoring staleness cache):

```bash
pnpm --filter=@funkai/models generate:models --force
```

The generation script requires `OPENROUTER_API_KEY` to be set in the environment.

## References

- [Provider Resolution](provider-resolution.md)
- [Cost Tracking](cost-tracking.md)
- [Troubleshooting](troubleshooting.md)
@@ -0,0 +1,144 @@
# Cost Tracking

`calculateCost()` computes the USD cost of a model invocation by multiplying token counts against per-token pricing rates from the catalog.

## calculateCost() API

```ts
import { calculateCost, model } from "@funkai/models";
import type { TokenUsage } from "@funkai/models";

// Token counts captured from a prior model invocation
declare const usage: TokenUsage;

const m = model("openai/gpt-4.1");
if (m) {
  const cost = calculateCost(usage, m.pricing);
}
```

## Types

### TokenUsage

Token counts from a model invocation:

| Field | Type | Description |
| ------------------ | -------- | ------------------------------------- |
| `inputTokens` | `number` | Number of input (prompt) tokens |
| `outputTokens` | `number` | Number of output (completion) tokens |
| `totalTokens` | `number` | Total tokens (input + output) |
| `cacheReadTokens` | `number` | Tokens served from prompt cache |
| `cacheWriteTokens` | `number` | Tokens written into prompt cache |
| `reasoningTokens` | `number` | Tokens consumed by internal reasoning |

### ModelPricing

Per-token pricing rates from the model catalog:

| Field | Type | Description |
| ------------ | --------------------- | --------------------------------- |
| `input` | `number` | Cost per input token (USD) |
| `output` | `number` | Cost per output token (USD) |
| `cacheRead` | `number \| undefined` | Cost per cached read token (USD) |
| `cacheWrite` | `number \| undefined` | Cost per cached write token (USD) |

Pricing rates are stored per-token in the catalog (converted from per-million at generation time). No runtime conversion is needed.

### UsageCost

The output of `calculateCost()`:

| Field | Type | Description |
| ------------ | -------- | ---------------------------- |
| `input` | `number` | Cost for input tokens |
| `output` | `number` | Cost for output tokens |
| `cacheRead` | `number` | Cost for cached read tokens |
| `cacheWrite` | `number` | Cost for cached write tokens |
| `total` | `number` | Sum of all cost fields |

All fields are non-negative. Fields that don't apply are `0`.

## Basic Usage

```ts
const m = model("openai/gpt-4.1");
if (!m) {
  throw new Error("Model not found in catalog");
}

const usage: TokenUsage = {
  inputTokens: 1500,
  outputTokens: 800,
  totalTokens: 2300,
  cacheReadTokens: 500,
  cacheWriteTokens: 0,
  reasoningTokens: 0,
};

const cost = calculateCost(usage, m.pricing);
console.log(`Total: $${cost.total.toFixed(6)}`);
```

## Cost Breakdown

```ts
const cost = calculateCost(usage, m.pricing);

console.log(`Input: $${cost.input.toFixed(6)}`);
console.log(`Output: $${cost.output.toFixed(6)}`);
console.log(`Cache read: $${cost.cacheRead.toFixed(6)}`);
console.log(`Cache write: $${cost.cacheWrite.toFixed(6)}`);
console.log(`Total: $${cost.total.toFixed(6)}`);
```

## Accumulating Costs Across Calls

```ts
const totalCost = runs.reduce((sum, run) => {
  const runModel = model(run.modelId);
  if (!runModel) return sum;
  return sum + calculateCost(run.usage, runModel.pricing).total;
}, 0);

console.log(`Session total: $${totalCost.toFixed(6)}`);
```

## Comparing Model Costs

Estimate the cost of a workload across different models:

```ts
import { models } from "@funkai/models";

const usage: TokenUsage = {
  inputTokens: 10_000,
  outputTokens: 2_000,
  totalTokens: 12_000,
  cacheReadTokens: 0,
  cacheWriteTokens: 0,
  reasoningTokens: 0,
};

const candidates = models((m) => m.capabilities.reasoning);

const costs = candidates.map((m) => ({
  id: m.id,
  total: calculateCost(usage, m.pricing).total,
}));

const sorted = costs.toSorted((a, b) => a.total - b.total);
```

## Calculation Formula

```text
input      = inputTokens * pricing.input
output     = outputTokens * pricing.output
cacheRead  = cacheReadTokens * (pricing.cacheRead ?? 0)
cacheWrite = cacheWriteTokens * (pricing.cacheWrite ?? 0)
total      = input + output + cacheRead + cacheWrite
```

Optional pricing fields (`cacheRead`, `cacheWrite`) default to `0` when absent.

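The formula maps directly to code. The following is an illustrative reimplementation (not the package source), assuming the `TokenUsage` and `ModelPricing` shapes documented above:

```typescript
interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
  cacheReadTokens: number;
  cacheWriteTokens: number;
  reasoningTokens: number;
}

interface ModelPricing {
  input: number;
  output: number;
  cacheRead?: number;
  cacheWrite?: number;
}

interface UsageCost {
  input: number;
  output: number;
  cacheRead: number;
  cacheWrite: number;
  total: number;
}

// Each component is tokens * per-token rate; optional cache rates default to 0.
function calculateCostSketch(usage: TokenUsage, pricing: ModelPricing): UsageCost {
  const input = usage.inputTokens * pricing.input;
  const output = usage.outputTokens * pricing.output;
  const cacheRead = usage.cacheReadTokens * (pricing.cacheRead ?? 0);
  const cacheWrite = usage.cacheWriteTokens * (pricing.cacheWrite ?? 0);
  return { input, output, cacheRead, cacheWrite, total: input + output + cacheRead + cacheWrite };
}
```

For example, 1,000 input tokens at $2/M plus 500 output tokens at $8/M plus 200 cached reads at $0.50/M comes to $0.0061.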
## References

- [Model Catalog](catalog.md)
- [Provider Resolution](provider-resolution.md)
- [Troubleshooting](troubleshooting.md)
@@ -1,12 +1,11 @@
- # Set Up a Model Resolver
+ # Set Up a Provider Registry

- Configure `createModelResolver()` with multiple providers and an OpenRouter fallback.
+ Configure `createProviderRegistry()` with multiple providers.

  ## Prerequisites

  - `@funkai/models` installed
  - API keys for your providers (OpenAI, Anthropic, etc.)
- - `OPENROUTER_API_KEY` set in the environment (for fallback)

  ## Steps

@@ -18,55 +17,52 @@ Install the AI SDK providers you want to use directly:
  pnpm add @ai-sdk/openai @ai-sdk/anthropic
  ```

- ### 2. Create the Resolver
+ ### 2. Create the Registry

  ```ts
- import { createModelResolver, openrouter } from "@funkai/models";
+ import { createProviderRegistry } from "@funkai/models";
  import { createOpenAI } from "@ai-sdk/openai";
  import { createAnthropic } from "@ai-sdk/anthropic";

- const resolve = createModelResolver({
+ const registry = createProviderRegistry({
    providers: {
      openai: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
      anthropic: createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY }),
    },
-   fallback: openrouter,
  });
  ```

  ### 3. Resolve Models

  ```ts
- const gpt = resolve("openai/gpt-4.1");
- const claude = resolve("anthropic/claude-sonnet-4");
- const mistral = resolve("mistral/mistral-large-latest");
+ const gpt = registry("openai/gpt-4.1");
+ const claude = registry("anthropic/claude-sonnet-4");
  ```

  - `"openai/gpt-4.1"` routes through `@ai-sdk/openai` directly
  - `"anthropic/claude-sonnet-4"` routes through `@ai-sdk/anthropic` directly
- - `"mistral/mistral-large-latest"` has no mapped provider, so it routes through OpenRouter

  ### 4. Use with Agents

- Pass the resolver to `@funkai/agents` by resolving the model before creating the agent:
+ Pass the registry to `@funkai/agents` by resolving the model before creating the agent:

  ```ts
  import { agent } from "@funkai/agents";

  const summarizer = agent({
    name: "summarizer",
-   model: resolve("openai/gpt-4.1"),
+   model: registry("openai/gpt-4.1"),
    prompt: ({ input }) => `Summarize:\n\n${input.text}`,
  });
  ```

  ## Verification

- Verify the resolver works by resolving each configured provider:
+ Verify the registry works by resolving each configured provider:

  ```ts
- const gpt = resolve("openai/gpt-4.1");
- const claude = resolve("anthropic/claude-sonnet-4");
+ const gpt = registry("openai/gpt-4.1");
+ const claude = registry("anthropic/claude-sonnet-4");

  console.log(gpt.modelId);
  console.log(claude.modelId);
@@ -76,29 +72,18 @@ console.log(claude.modelId);

  ### Cannot resolve model: no provider mapped

- **Issue:** The model ID prefix does not match any key in `providers` and no `fallback` is configured.
+ **Issue:** The model ID prefix does not match any key in `providers`.

- **Fix:** Add the provider to the `providers` map or configure a `fallback`:
+ **Fix:** Add the provider to the `providers` map:

  ```ts
- const resolve = createModelResolver({
+ const registry = createProviderRegistry({
    providers: {
      openai: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
    },
-   fallback: openrouter,
  });
  ```

- ### OPENROUTER_API_KEY environment variable is required
-
- **Issue:** Using `openrouter` as the fallback but `OPENROUTER_API_KEY` is not set.
-
- **Fix:** Set the environment variable:
-
- ```bash
- export OPENROUTER_API_KEY=sk-or-...
- ```
-
  ## References

  - [Provider Resolution](../provider/overview.md)
package/docs/overview.md CHANGED
@@ -34,9 +34,8 @@ flowchart LR

  subgraph resolver [" "]
    direction TB
-   createResolver["createModelResolver()"]:::core
+   createRegistry["createProviderRegistry()"]:::core
    providers["Provider map"]:::gateway
-   fallback["Fallback (OpenRouter)"]:::gateway
  end

  subgraph cost [" "]
@@ -49,11 +48,9 @@ flowchart LR
  ModelId --> lookup
  MODELS --> lookup
  lookup --> filter
- ModelId --> createResolver
- createResolver --> providers
- createResolver --> fallback
+ ModelId --> createRegistry
+ createRegistry --> providers
  providers --> LanguageModel["LanguageModel"]:::external
- fallback --> LanguageModel
  usage --> calcCost
  pricing --> calcCost
  calcCost --> UsageCost["UsageCost"]:::external
@@ -68,72 +65,19 @@ flowchart LR
  style cost fill:#181825,stroke:#a6e3a1,stroke-width:2px
  ```

- The package has three domains:
+ ## Three Domains

- | Domain | Purpose | Key Exports |
- | ------------ | -------------------------------------------- | ------------------------------------- |
- | **Catalog** | Generated model metadata from models.dev | `model()`, `models()`, `MODELS` |
- | **Provider** | Resolve model IDs to AI SDK `LanguageModel`s | `createModelResolver()`, `openrouter` |
- | **Cost** | Calculate USD costs from token usage | `calculateCost()` |
+ | Domain | Purpose | Key Exports |
+ | ------------ | -------------------------------------------- | ------------------------------- |
+ | **Catalog** | Generated model metadata from models.dev | `model()`, `models()`, `MODELS` |
+ | **Provider** | Resolve model IDs to AI SDK `LanguageModel`s | `createProviderRegistry()` |
+ | **Cost** | Calculate USD costs from token usage | `calculateCost()` |

- ## Key Concepts
+ ## Documentation

- ### Model Definitions
-
- Every model in the catalog is a `ModelDefinition` with pricing, capabilities, modalities, and context window metadata. The catalog is auto-generated from [models.dev](https://models.dev) and updated via `pnpm --filter=@funkai/models generate:models`.
-
- ### Provider Resolution
-
- `createModelResolver()` maps model ID prefixes (e.g. `"openai"` from `"openai/gpt-4.1"`) to AI SDK provider factories. Unmapped prefixes fall through to an optional fallback (typically OpenRouter).
-
- ### Cost Calculation
-
- `calculateCost()` multiplies token counts by per-token pricing rates. Pricing is stored per-token in the catalog (converted from per-million at generation time), so no runtime conversion is needed.
-
- ## Usage
-
- ### Look Up a Model
-
- ```ts
- const m = model("openai/gpt-4.1");
- if (m) {
-   console.log(m.name, m.contextWindow, m.capabilities.reasoning);
- }
- ```
-
- ### Filter Models
-
- ```ts
- const reasoning = models((m) => m.capabilities.reasoning);
- const multimodal = models((m) => m.modalities.input.includes("image"));
- ```
-
- ### Resolve a Model
-
- ```ts
- const resolve = createModelResolver({
-   fallback: openrouter,
- });
- const lm = resolve("openai/gpt-4.1");
- ```
-
- ### Calculate Cost
-
- ```ts
- const cost = calculateCost(usage, m.pricing);
- console.log(`Total: $${cost.total.toFixed(6)}`);
- ```
-
- ## References
-
- - [Model Catalog](catalog/overview.md)
- - [Filtering](catalog/filtering.md)
- - [Providers](catalog/providers.md)
- - [Provider Resolution](provider/overview.md)
- - [Configuration](provider/configuration.md)
- - [OpenRouter](provider/openrouter.md)
- - [Cost Calculation](cost/overview.md)
- - [Setup Resolver Guide](guides/setup-resolver.md)
- - [Filter Models Guide](guides/filter-models.md)
- - [Track Costs Guide](guides/track-costs.md)
- - [Troubleshooting](troubleshooting.md)
+ | Topic | Description |
+ | --------------------------------------------- | --------------------------------------------------------------------------- |
+ | [Model Catalog](catalog.md) | Model definitions, lookup API, filtering patterns, provider subpath exports |
+ | [Provider Resolution](provider-resolution.md) | Resolution algorithm, registry configuration, OpenRouter integration |
+ | [Cost Tracking](cost-tracking.md) | calculateCost() API, types, formula, usage patterns |
+ | [Troubleshooting](troubleshooting.md) | Common errors and fixes |