@oai2lmapi/opencode-provider 0.2.1 → 0.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,70 +1,141 @@
  # @oai2lmapi/opencode-provider
 
- AI SDK Provider for **OpenAI-compatible APIs** with **automatic model discovery**.
+ AI SDK Provider and OpenCode Plugin for **OpenAI-compatible APIs** with **automatic model discovery**.
 
- Use this provider in [OpenCode](https://opencode.ai) to connect to any OpenAI-compatible API endpoint. Unlike static configurations, this provider automatically discovers all available models from your API's `/models` endpoint.
+ Use this package to connect [OpenCode](https://opencode.ai) or any Vercel AI SDK application to any OpenAI-compatible API endpoint. It automatically discovers all available models from your API's `/models` endpoint.
 
  ## Features
 
- - **AI SDK Provider**: Native provider for OpenCode and Vercel AI SDK
- - **Auto-discovery**: Automatically fetches models from `$baseURL/models`
- - **Zero model configuration**: No need to manually list each model
- - **Metadata enrichment**: Merges API-returned metadata with `@oai2lmapi/model-metadata` registry
- - **Wildcard overrides**: Apply settings to multiple models using patterns like `gpt-4*`
- - **Config file support**: Optional `oai2lm.json` for persistent configuration
+ - **CLI Tool**: Generate `opencode.json` model configuration automatically
+ - **OpenCode Plugin**: Adds `oai2lm_discover` tool inside OpenCode
+ - **AI SDK Provider**: Native provider for Vercel AI SDK and OpenCode
+ - **Auto-discovery**: Fetches models from `$baseURL/models`
+ - **Metadata enrichment**: Merges API metadata with `@oai2lmapi/model-metadata` registry
 
- ## Installation
+ ## Quick Start
+
+ ### Option 1: CLI Tool (Recommended)
+
+ Generate model configuration for your `opencode.json`:
 
  ```bash
- npm install @oai2lmapi/opencode-provider
- # or
- pnpm add @oai2lmapi/opencode-provider
+ # Discover models and output opencode.json config
+ npx @oai2lmapi/opencode-provider --baseURL https://api.example.com/v1 --apiKey sk-xxx --provider my-api
  ```
 
- ## Usage with OpenCode
-
- Add to your `opencode.json`:
+ This outputs ready-to-use configuration:
 
  ```json
  {
- "$schema": "https://opencode.ai/config.json",
  "provider": {
  "my-api": {
+ "name": "my-api",
  "npm": "@oai2lmapi/opencode-provider",
  "options": {
- "baseURL": "https://api.example.com/v1",
- "apiKey": "your-api-key"
+ "baseURL": "YOUR_API_BASE_URL",
+ "apiKey": "{env:YOUR_API_KEY_ENV}"
+ },
+ "models": {
+ "gpt-4o": {
+ "name": "gpt-4o",
+ "tool_call": true,
+ "attachment": true,
+ "limit": { "context": 128000, "output": 16384 }
+ }
  }
  }
  }
  }
  ```
 
- That's it! OpenCode will automatically discover all available models from your API.
+ Copy this into your `opencode.json` and adjust the `baseURL` and `apiKey`.
+
+ ### Option 2: OpenCode Plugin
+
+ Add as a plugin to get the `oai2lm_discover` tool inside OpenCode:
+
+ ```json
+ {
+ "$schema": "https://opencode.ai/config.json",
+ "plugin": ["@oai2lmapi/opencode-provider"]
+ }
+ ```
+
+ Then in OpenCode, use the tool:
 
- ### Using environment variables
+ ```
+ /tool oai2lm_discover baseURL=https://api.example.com/v1 apiKey=sk-xxx
+ ```
+
+ The tool will output the configuration you need.
+
+ ### Option 3: Manual Configuration with npm Provider
+
+ Once you have your model list (from CLI or Plugin), add to `opencode.json`:
 
  ```json
  {
+ "$schema": "https://opencode.ai/config.json",
  "provider": {
  "my-api": {
  "npm": "@oai2lmapi/opencode-provider",
- "env": ["MY_API_KEY"],
  "options": {
- "baseURL": "https://api.example.com/v1"
+ "baseURL": "https://api.example.com/v1",
+ "apiKey": "{env:MY_API_KEY}"
+ },
+ "models": {
+ "gpt-4o": {
+ "name": "GPT-4o",
+ "tool_call": true,
+ "limit": { "context": 128000, "output": 16384 }
+ },
+ "claude-sonnet-4-20250514": {
+ "name": "Claude Sonnet 4",
+ "tool_call": true,
+ "limit": { "context": 200000, "output": 64000 }
+ }
  }
  }
  }
  }
  ```
 
- Set `MY_API_KEY` in your environment:
+ > **Note**: OpenCode requires the `models` field to know which models are available. The CLI and Plugin tools help you generate this automatically.
+
+ ## CLI Reference
 
  ```bash
- export MY_API_KEY=your-api-key
+ oai2lm-discover [options]
+
+ OPTIONS:
+ -b, --baseURL <url>     Base URL of the API (e.g., https://api.example.com/v1)
+ -k, --apiKey <key>      API key for authentication
+ -p, --provider <name>   Provider name for config (default: custom-provider)
+ -f, --filter <regex>    Filter models by regex pattern
+ -o, --output <format>   Output format: json, table, or config (default: config)
+ -c, --config            Load settings from oai2lm.json
+ -h, --help              Show help
  ```
 
- ## Programmatic Usage
+ ### Examples
+
+ ```bash
+ # Discover all models and output config
+ npx @oai2lmapi/opencode-provider -b https://api.example.com/v1 -k sk-xxx -p my-api
+
+ # Filter to specific models
+ npx @oai2lmapi/opencode-provider -b https://api.example.com/v1 -k sk-xxx -f "gpt-4|claude"
+
+ # Output as table for review
+ npx @oai2lmapi/opencode-provider -b https://api.example.com/v1 -k sk-xxx -o table
+
+ # Use settings from oai2lm.json config file
+ npx @oai2lmapi/opencode-provider --config
+ ```
+
+ ## Programmatic Usage (AI SDK)
+
+ For use in your own applications with Vercel AI SDK:
 
  ```typescript
  import { createOai2lm } from "@oai2lmapi/opencode-provider";
@@ -88,151 +159,169 @@ const result = await streamText({
  });
  ```
 
- ## Provider Options
-
- | Option           | Type      | Required | Description                                                 |
- | ---------------- | --------- | -------- | ----------------------------------------------------------- |
- | `baseURL`        | `string`  | Yes      | Base URL for API calls (e.g., `https://api.example.com/v1`) |
- | `apiKey`         | `string`  | No       | API key for authentication                                  |
- | `name`           | `string`  | No       | Provider name (default: `"oai2lm"`)                         |
- | `headers`        | `object`  | No       | Custom headers for all requests                             |
- | `modelFilter`    | `string`  | No       | Regex pattern to filter models                              |
- | `modelOverrides` | `object`  | No       | Per-model configuration overrides (supports wildcards)      |
- | `useConfigFile`  | `boolean` | No       | Merge settings from `oai2lm.json` (default: `true`)         |
+ ## Config File (oai2lm.json)
 
- ## Config File
+ For persistent configuration, create `oai2lm.json` in:
 
- For persistent configuration, create `oai2lm.json` in one of these locations:
+ - `~/.local/share/opencode/oai2lm.json` (Linux)
+ - `~/.config/opencode/oai2lm.json` (Linux/macOS)
 
- 1. `~/.local/share/opencode/oai2lm.json`
- 2. `~/.config/opencode/oai2lm.json`
-
- ```jsonc
+ ```json
  {
- // Base URL for your OpenAI-compatible API
+ "$schema": "https://raw.githubusercontent.com/hugefiver/OAI2LMApi/main/packages/opencode-provider/oai2lm.schema.json",
  "baseURL": "https://api.example.com/v1",
-
- // API key (supports variable substitution)
  "apiKey": "{env:MY_API_KEY}",
-
- // Provider ID
- "name": "myapi",
-
- // Display name
- "displayName": "My API",
-
- // Custom headers
- "headers": {
- "X-Custom-Header": "value",
- },
-
- // Filter models by regex
+ "name": "my-api",
  "modelFilter": "^(gpt-|claude-)",
-
- // Override model metadata (supports wildcards)
  "modelOverrides": {
  "gpt-4*": {
  "maxInputTokens": 128000,
- "supportsImageInput": true,
- },
- },
+ "supportsImageInput": true
+ }
+ }
  }
  ```
 
- ### Variable substitution
+ Then just run:
 
- The `apiKey` field supports:
+ ```bash
+ npx @oai2lmapi/opencode-provider --config
+ ```
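
The `{env:MY_API_KEY}` syntax above (and the `{file:/path}` form documented in earlier versions of this README) substitutes the real secret at load time. As a rough illustration only — this is a hypothetical helper, not the package's actual resolver — such a placeholder could be resolved like this:

```typescript
// Hypothetical sketch of "{env:VAR}" / "{file:/path}" placeholder resolution
// for the apiKey option; not the package's actual implementation.
import { readFileSync } from "node:fs";

function resolvePlaceholder(value: string): string {
  const match = /^\{(env|file):(.+)\}$/.exec(value);
  if (!match) return value; // plain literal key, use as-is
  const [, kind, arg] = match;
  if (kind === "env") return process.env[arg] ?? ""; // read from environment
  return readFileSync(arg, "utf8").trim(); // "{file:/path}" reads the key from disk
}

process.env.MY_API_KEY = "sk-test";
console.log(resolvePlaceholder("{env:MY_API_KEY}")); // → "sk-test"
console.log(resolvePlaceholder("sk-literal")); // → "sk-literal"
```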
 
- - `{env:VAR_NAME}` - Read from environment variable
- - `{file:/path/to/file}` - Read from file
+ ## Provider Options
 
- ## Extended API
+ | Option           | Type      | Required | Description                                            |
+ | ---------------- | --------- | -------- | ------------------------------------------------------ |
+ | `baseURL`        | `string`  | Yes      | Base URL for API calls                                 |
+ | `apiKey`         | `string`  | No       | API key for authentication                             |
+ | `name`           | `string`  | No       | Provider name (default: `"oai2lm"`)                    |
+ | `headers`        | `object`  | No       | Custom headers for all requests                        |
+ | `modelFilter`    | `string`  | No       | Regex pattern to filter models                         |
+ | `modelOverrides` | `object`  | No       | Per-model configuration overrides (supports wildcards) |
+ | `useConfigFile`  | `boolean` | No       | Merge settings from `oai2lm.json` (default: `true`)    |
 
- ### `provider.listModels()`
+ ### Model Override Options
 
- Returns a list of discovered models with metadata:
+ The `modelOverrides` object supports wildcard patterns (`*` and `?`) to match model IDs:
 
- ```typescript
- const models = await provider.listModels();
- // [{ id: "gpt-4", name: "GPT-4", object: "model", ... }]
- ```
+ | Option                           | Type                | Description                          |
+ | -------------------------------- | ------------------- | ------------------------------------ |
+ | `maxInputTokens`                 | `number`            | Maximum input tokens                 |
+ | `maxOutputTokens`                | `number`            | Maximum output tokens                |
+ | `supportsToolCalling`            | `boolean`           | Native tool/function calling support |
+ | `supportsImageInput`             | `boolean`           | Image/vision input support           |
+ | `temperature`                    | `number`            | Default temperature (0.0-2.0)        |
+ | `thinkingLevel`                  | `string` / `number` | CoT thinking level (see below)       |
+ | `suppressChainOfThought`         | `boolean`           | Hide thinking content in response    |
+ | `usePromptBasedToolCalling`      | `boolean`           | Use XML tools in system prompt       |
+ | `trimXmlToolParameterWhitespace` | `boolean`           | Trim XML parameter whitespace        |
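
The wildcard matching described above can be sketched as follows. This is an illustrative reimplementation for clarity, not the package's actual code (the package exports `findModelOverride` for this purpose):

```typescript
// Sketch of glob-style override matching ("*" = any run, "?" = one character),
// in the spirit of the exported findModelOverride; not the actual implementation.
function globToRegExp(pattern: string): RegExp {
  // Escape regex metacharacters except the glob wildcards themselves
  const escaped = pattern.replace(/[.+^${}()|[\]\\]/g, "\\$&");
  return new RegExp("^" + escaped.replace(/\*/g, ".*").replace(/\?/g, ".") + "$");
}

function matchOverride<T>(modelId: string, overrides: Record<string, T>): T | undefined {
  for (const [pattern, override] of Object.entries(overrides)) {
    if (globToRegExp(pattern).test(modelId)) return override; // first match wins
  }
  return undefined;
}

const override = matchOverride("qwq-32b", {
  "gpt-4*": { maxInputTokens: 128000 },
  "qwq-*": { usePromptBasedToolCalling: true },
});
console.log(override); // → { usePromptBasedToolCalling: true }
```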
 
- ### `provider.getModelMetadata(modelId)`
+ #### Thinking Level
 
- Returns enriched metadata for a specific model:
+ For models that support chain-of-thought reasoning (Claude 3.7, DeepSeek-R1, o1, etc.):
 
- ```typescript
- const metadata = await provider.getModelMetadata("gpt-4");
- // { maxInputTokens: 128000, maxOutputTokens: 4096, supportsToolCalling: true, ... }
- ```
+ - `"none"` - Disable thinking
+ - `"low"` / `"medium"` / `"high"` - Preset token budgets
+ - `"auto"` - Let the model decide
+ - `number` - Explicit token budget (e.g., `8000`)
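
Combining these with the override options above, a preset level and an explicit budget might look like this (a sketch based on the documented options; the wildcard model IDs are illustrative):

```json
{
  "modelOverrides": {
    "deepseek-r1*": {
      "thinkingLevel": "high",
      "suppressChainOfThought": true
    },
    "o1*": {
      "thinkingLevel": 8000
    }
  }
}
```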
 
- ### `provider.refreshModels()`
+ #### Prompt-Based Tool Calling
 
- Force refresh the model list from the API:
+ For models without native function calling support:
 
- ```typescript
- await provider.refreshModels();
+ ```json
+ {
+ "modelOverrides": {
+ "qwq-*": {
+ "usePromptBasedToolCalling": true,
+ "trimXmlToolParameterWhitespace": true,
+ "supportsToolCalling": false
+ }
+ }
+ }
  ```
 
+ This converts tools to XML format in the system prompt, allowing models to use structured tool calls.
+
  ## Example Configurations
 
  ### OpenRouter
 
- ```json
- {
- "provider": {
- "openrouter": {
- "npm": "@oai2lmapi/opencode-provider",
- "env": ["OPENROUTER_API_KEY"],
- "options": {
- "baseURL": "https://openrouter.ai/api/v1"
- }
- }
- }
- }
+ ```bash
+ npx @oai2lmapi/opencode-provider -b https://openrouter.ai/api/v1 -k $OPENROUTER_API_KEY -p openrouter
+ ```
+
+ ### Together AI
+
+ ```bash
+ npx @oai2lmapi/opencode-provider -b https://api.together.xyz/v1 -k $TOGETHER_API_KEY -p together
  ```
 
  ### Local Ollama
 
- ```json
- {
- "provider": {
- "ollama": {
- "npm": "@oai2lmapi/opencode-provider",
- "options": {
- "baseURL": "http://localhost:11434/v1",
- "apiKey": "ollama"
- }
- }
- }
- }
+ ```bash
+ npx @oai2lmapi/opencode-provider -b http://localhost:11434/v1 -k ollama -p ollama
  ```
 
- ### Together AI
+ ## Why Must Models Be Configured?
 
- ```json
- {
- "provider": {
- "together": {
- "npm": "@oai2lmapi/opencode-provider",
- "env": ["TOGETHER_API_KEY"],
- "options": {
- "baseURL": "https://api.together.xyz/v1"
- }
- }
- }
- }
+ OpenCode loads provider configurations at startup and needs to know:
+
+ - Which models are available
+ - Token limits for each model
+ - Model capabilities (tool calling, vision, etc.)
+
+ Since OpenCode doesn't invoke our provider's dynamic model discovery at startup, the CLI and Plugin tools let you generate this configuration once and add it to your `opencode.json`.
+
+ ## XML Tool Utilities
+
+ For advanced use cases like building custom middleware, this package exports XML tool utilities:
+
+ ```typescript
+ import {
+ generateXmlToolPrompt,
+ parseXmlToolCalls,
+ formatToolCallAsXml,
+ formatToolResultAsText,
+ findModelOverride,
+ } from "@oai2lmapi/opencode-provider";
+
+ // Generate XML tool prompt from tool definitions
+ const xmlPrompt = generateXmlToolPrompt([
+ {
+ type: "function",
+ name: "search",
+ description: "Search the web",
+ parameters: {
+ type: "object",
+ properties: { query: { type: "string" } },
+ required: ["query"],
+ },
+ },
+ ]);
+
+ // Parse XML tool calls from model response
+ const toolCalls = parseXmlToolCalls(responseText, ["search", "read_file"], {
+ trimParameterWhitespace: true,
+ });
+
+ // Format a tool call as XML
+ const xml = formatToolCallAsXml("search", { query: "hello world" });
+
+ // Format a tool result as plain text
+ const result = formatToolResultAsText("search", "Found 10 results...");
+
+ // Find model override by pattern matching
+ const override = findModelOverride("qwq-32b", {
+ "qwq-*": { usePromptBasedToolCalling: true },
+ });
  ```
 
- ## How It Works
+ For models without native function calling, consider using:
 
- 1. When loaded, the provider creates an `@ai-sdk/openai-compatible` instance
- 2. On first model request or `listModels()` call, it fetches `/models` from your API
- 3. Each model is enriched with metadata from:
-    - API response (context_length, max_tokens, etc.)
-    - `@oai2lmapi/model-metadata` pattern matching
-    - Your config file overrides
- 4. The provider then works like any standard AI SDK provider
+ - **[@ai-sdk-tool/parser](https://github.com/minpeter/ai-sdk-tool-call-middleware)**: Community middleware for AI SDK
+ - **hermesToolMiddleware**: For Hermes & Qwen format function calls
+ - **gemmaToolMiddleware**: For Gemma 3 model series
 
  ## License
 
package/dist/cli.d.ts ADDED
@@ -0,0 +1,11 @@
+ #!/usr/bin/env node
+ /**
+ * CLI tool to discover models from an OpenAI-compatible API
+ * and generate opencode.json configuration.
+ *
+ * Usage:
+ * npx oai2lm-discover --baseURL https://api.example.com/v1 --apiKey sk-xxx
+ * npx oai2lm-discover --config # Use settings from oai2lm.json
+ */
+ export {};
+ //# sourceMappingURL=cli.d.ts.map
package/dist/cli.d.ts.map ADDED
@@ -0,0 +1 @@
+ {"version":3,"file":"cli.d.ts","sourceRoot":"","sources":["../src/cli.ts"],"names":[],"mappings":";AACA;;;;;;;GAOG"}