@aigne/open-router 0.1.0
- package/CHANGELOG.md +18 -0
- package/LICENSE.md +93 -0
- package/README.md +133 -0
- package/README.zh.md +133 -0
- package/lib/cjs/index.d.ts +1 -0
- package/lib/cjs/index.js +17 -0
- package/lib/cjs/open-router-chat-model.d.ts +22 -0
- package/lib/cjs/open-router-chat-model.js +34 -0
- package/lib/cjs/package.json +1 -0
- package/lib/dts/index.d.ts +1 -0
- package/lib/dts/open-router-chat-model.d.ts +22 -0
- package/lib/esm/index.d.ts +1 -0
- package/lib/esm/index.js +1 -0
- package/lib/esm/open-router-chat-model.d.ts +22 -0
- package/lib/esm/open-router-chat-model.js +30 -0
- package/lib/esm/package.json +1 -0
- package/package.json +54 -0
package/CHANGELOG.md
ADDED
@@ -0,0 +1,18 @@
# Changelog

## [0.1.0](https://github.com/AIGNE-io/aigne-framework/compare/open-router-v0.0.1...open-router-v0.1.0) (2025-05-23)


### Features

* **models:** publish model adapters as standalone packages ([#126](https://github.com/AIGNE-io/aigne-framework/issues/126)) ([588b8ae](https://github.com/AIGNE-io/aigne-framework/commit/588b8aea6abcee5fa87def1358bf51f84021c6ef))


### Dependencies

* The following workspace dependencies were updated
  * dependencies
    * @aigne/openai bumped to 0.1.0
  * devDependencies
    * @aigne/core bumped to 1.16.0
    * @aigne/test-utils bumped to 0.3.0
package/LICENSE.md
ADDED
@@ -0,0 +1,93 @@
Elastic License 2.0

URL: https://www.elastic.co/licensing/elastic-license

## Acceptance

By using the software, you agree to all of the terms and conditions below.

## Copyright License

The licensor grants you a non-exclusive, royalty-free, worldwide,
non-sublicensable, non-transferable license to use, copy, distribute, make
available, and prepare derivative works of the software, in each case subject to
the limitations and conditions below.

## Limitations

You may not provide the software to third parties as a hosted or managed
service, where the service provides users with access to any substantial set of
the features or functionality of the software.

You may not move, change, disable, or circumvent the license key functionality
in the software, and you may not remove or obscure any functionality in the
software that is protected by the license key.

You may not alter, remove, or obscure any licensing, copyright, or other notices
of the licensor in the software. Any use of the licensor’s trademarks is subject
to applicable law.

## Patents

The licensor grants you a license, under any patent claims the licensor can
license, or becomes able to license, to make, have made, use, sell, offer for
sale, import and have imported the software, in each case subject to the
limitations and conditions in this license. This license does not cover any
patent claims that you cause to be infringed by modifications or additions to
the software. If you or your company make any written claim that the software
infringes or contributes to infringement of any patent, your patent license for
the software granted under these terms ends immediately. If your company makes
such a claim, your patent license ends immediately for work on behalf of your
company.

## Notices

You must ensure that anyone who gets a copy of any part of the software from you
also gets a copy of these terms.

If you modify the software, you must include in any modified copies of the
software prominent notices stating that you have modified the software.

## No Other Rights

These terms do not imply any licenses other than those expressly granted in
these terms.

## Termination

If you use the software in violation of these terms, such use is not licensed,
and your licenses will automatically terminate. If the licensor provides you
with a notice of your violation, and you cease all violation of this license no
later than 30 days after you receive that notice, your licenses will be
reinstated retroactively. However, if you violate these terms after such
reinstatement, any additional violation of these terms will cause your licenses
to terminate automatically and permanently.

## No Liability

*As far as the law allows, the software comes as is, without any warranty or
condition, and the licensor will not be liable to you for any damages arising
out of these terms or the use or nature of the software, under any kind of
legal claim.*

## Definitions

The **licensor** is the entity offering these terms, and the **software** is the
software the licensor makes available under these terms, including any portion
of it.

**you** refers to the individual or entity agreeing to these terms.

**your company** is any legal entity, sole proprietorship, or other kind of
organization that you work for, plus all organizations that have control over,
are under the control of, or are under common control with that
organization. **control** means ownership of substantially all the assets of an
entity, or the power to direct its management and policies by vote, contract, or
otherwise. Control can be direct or indirect.

**your licenses** are all the licenses granted to you for the software under
these terms.

**use** means anything you do with the software requiring one of your licenses.

**trademark** means trademarks, service marks, and similar rights.
package/README.md
ADDED
@@ -0,0 +1,133 @@
# @aigne/open-router

[](https://star-history.com/#AIGNE-io/aigne-framework)
[](https://github.com/AIGNE-io/aigne-framework/issues)
[](https://codecov.io/gh/AIGNE-io/aigne-framework)
[](https://www.npmjs.com/package/@aigne/open-router)
[](https://github.com/AIGNE-io/aigne-framework/blob/main/LICENSE.md)

**English** | [中文](README.zh.md)

AIGNE OpenRouter SDK for accessing multiple AI models through a unified API within the [AIGNE Framework](https://github.com/AIGNE-io/aigne-framework).

## Introduction

`@aigne/open-router` provides seamless integration between the AIGNE Framework and OpenRouter's unified API for accessing a wide variety of AI models. This package enables developers to leverage models from multiple providers (including OpenAI, Anthropic, Google, and more) through a single consistent interface, allowing for flexible model selection and fallback options.

## Features

* **OpenRouter API Integration**: Direct connection to OpenRouter's API services
* **Multi-Provider Access**: Access to models from OpenAI, Anthropic, Google, and many other providers
* **Unified Interface**: Consistent interface for all models regardless of their origin
* **Model Fallbacks**: Easily configure fallback options between different models
* **Chat Completions**: Support for the chat completions API with all available models
* **Streaming Responses**: Support for streaming responses for more responsive applications
* **Type-Safe**: Comprehensive TypeScript typings for all APIs and models
* **Consistent Interface**: Compatible with the AIGNE Framework's model interface
* **Error Handling**: Robust error handling and retry mechanisms
* **Full Configuration**: Extensive configuration options for fine-tuning behavior

## Installation

### Using npm

```bash
npm install @aigne/open-router @aigne/core
```

### Using yarn

```bash
yarn add @aigne/open-router @aigne/core
```

### Using pnpm

```bash
pnpm add @aigne/open-router @aigne/core
```

## Basic Usage

```typescript file="test/open-router-chat-model.test.ts" region="example-openrouter-chat-model"
import { OpenRouterChatModel } from "@aigne/open-router";

const model = new OpenRouterChatModel({
  // Provide API key directly or use environment variable OPEN_ROUTER_API_KEY
  apiKey: "your-api-key", // Optional if set in env variables
  // Specify model (defaults to 'openai/gpt-4o')
  model: "anthropic/claude-3-opus",
  modelOptions: {
    temperature: 0.7,
  },
});

const result = await model.invoke({
  messages: [{ role: "user", content: "Which model are you using?" }],
});

console.log(result);
/* Output:
{
  text: "I'm powered by OpenRouter, using the Claude 3 Opus model from Anthropic.",
  model: "anthropic/claude-3-opus",
  usage: {
    inputTokens: 5,
    outputTokens: 14
  }
}
*/
```

## Using Multiple Models with Fallbacks

```typescript
const modelWithFallbacks = new OpenRouterChatModel({
  apiKey: "your-api-key",
  model: "openai/gpt-4o",
  fallbackModels: ["anthropic/claude-3-opus", "google/gemini-1.5-pro"], // Fallback order
  modelOptions: {
    temperature: 0.7,
  },
});

// Will try gpt-4o first, then claude-3-opus if that fails, then gemini-1.5-pro
const fallbackResult = await modelWithFallbacks.invoke({
  messages: [{ role: "user", content: "Which model are you using?" }],
});
```

## Streaming Responses

```typescript file="test/open-router-chat-model.test.ts" region="example-openrouter-chat-model-streaming"
import { OpenRouterChatModel } from "@aigne/open-router";

const model = new OpenRouterChatModel({
  apiKey: "your-api-key",
  model: "anthropic/claude-3-opus",
});

const stream = await model.invoke(
  {
    messages: [{ role: "user", content: "Which model are you using?" }],
  },
  undefined,
  { streaming: true },
);

let fullText = "";
const json = {};

for await (const chunk of stream) {
  const text = chunk.delta.text?.text;
  if (text) fullText += text;
  if (chunk.delta.json) Object.assign(json, chunk.delta.json);
}

console.log(fullText); // Output: "I'm powered by OpenRouter, using the Claude 3 Opus model from Anthropic."
console.log(json); // { model: "anthropic/claude-3-opus", usage: { inputTokens: 5, outputTokens: 14 } }
```

## License

Elastic-2.0
package/README.zh.md
ADDED
@@ -0,0 +1,133 @@
# @aigne/open-router

[](https://star-history.com/#AIGNE-io/aigne-framework)
[](https://github.com/AIGNE-io/aigne-framework/issues)
[](https://codecov.io/gh/AIGNE-io/aigne-framework)
[](https://www.npmjs.com/package/@aigne/open-router)
[](https://github.com/AIGNE-io/aigne-framework/blob/main/LICENSE.md)

[English](README.md) | **中文**

AIGNE OpenRouter SDK,用于在 [AIGNE 框架](https://github.com/AIGNE-io/aigne-framework) 中通过统一 API 访问多种 AI 模型。

## 简介

`@aigne/open-router` 提供了 AIGNE 框架与 OpenRouter 统一 API 之间的无缝集成,用于访问各种 AI 模型。该包使开发者能够通过单一一致的接口轻松使用来自多个提供商(包括 OpenAI、Anthropic、Google 等)的模型,允许灵活的模型选择和备选方案。

## 特性

* **OpenRouter API 集成**:直接连接到 OpenRouter 的 API 服务
* **多提供商访问**:可访问来自 OpenAI、Anthropic、Google 等多家提供商的模型
* **统一接口**:为所有模型提供一致的接口,无论其来源
* **模型备选**:轻松配置不同模型之间的备选选项
* **聊天完成**:支持所有可用模型的聊天完成 API
* **流式响应**:支持流式响应,提供更高响应性的应用程序体验
* **类型安全**:为所有 API 和模型提供全面的 TypeScript 类型定义
* **一致接口**:兼容 AIGNE 框架的模型接口
* **错误处理**:健壮的错误处理和重试机制
* **完整配置**:丰富的配置选项用于微调行为

## 安装

### 使用 npm

```bash
npm install @aigne/open-router @aigne/core
```

### 使用 yarn

```bash
yarn add @aigne/open-router @aigne/core
```

### 使用 pnpm

```bash
pnpm add @aigne/open-router @aigne/core
```

## 基本用法

```typescript file="test/open-router-chat-model.test.ts" region="example-openrouter-chat-model"
import { OpenRouterChatModel } from "@aigne/open-router";

const model = new OpenRouterChatModel({
  // Provide API key directly or use environment variable OPEN_ROUTER_API_KEY
  apiKey: "your-api-key", // Optional if set in env variables
  // Specify model (defaults to 'openai/gpt-4o')
  model: "anthropic/claude-3-opus",
  modelOptions: {
    temperature: 0.7,
  },
});

const result = await model.invoke({
  messages: [{ role: "user", content: "Which model are you using?" }],
});

console.log(result);
/* Output:
{
  text: "I'm powered by OpenRouter, using the Claude 3 Opus model from Anthropic.",
  model: "anthropic/claude-3-opus",
  usage: {
    inputTokens: 5,
    outputTokens: 14
  }
}
*/
```

## 使用多模型备选

```typescript
const modelWithFallbacks = new OpenRouterChatModel({
  apiKey: "your-api-key",
  model: "openai/gpt-4o",
  fallbackModels: ["anthropic/claude-3-opus", "google/gemini-1.5-pro"], // 备选顺序
  modelOptions: {
    temperature: 0.7,
  },
});

// 将首先尝试 gpt-4o,如果失败则尝试 claude-3-opus,如果再失败则尝试 gemini-1.5-pro
const fallbackResult = await modelWithFallbacks.invoke({
  messages: [{ role: "user", content: "Which model are you using?" }],
});
```

## 流式响应

```typescript file="test/open-router-chat-model.test.ts" region="example-openrouter-chat-model-streaming"
import { OpenRouterChatModel } from "@aigne/open-router";

const model = new OpenRouterChatModel({
  apiKey: "your-api-key",
  model: "anthropic/claude-3-opus",
});

const stream = await model.invoke(
  {
    messages: [{ role: "user", content: "Which model are you using?" }],
  },
  undefined,
  { streaming: true },
);

let fullText = "";
const json = {};

for await (const chunk of stream) {
  const text = chunk.delta.text?.text;
  if (text) fullText += text;
  if (chunk.delta.json) Object.assign(json, chunk.delta.json);
}

console.log(fullText); // Output: "I'm powered by OpenRouter, using the Claude 3 Opus model from Anthropic."
console.log(json); // { model: "anthropic/claude-3-opus", usage: { inputTokens: 5, outputTokens: 14 } }
```

## 许可证

Elastic-2.0
package/lib/cjs/index.d.ts
ADDED
@@ -0,0 +1 @@
export * from "./open-router-chat-model.js";
package/lib/cjs/index.js
ADDED
@@ -0,0 +1,17 @@
"use strict";
var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
    if (k2 === undefined) k2 = k;
    var desc = Object.getOwnPropertyDescriptor(m, k);
    if (!desc || ("get" in desc ? !m.__esModule : desc.writable || desc.configurable)) {
      desc = { enumerable: true, get: function() { return m[k]; } };
    }
    Object.defineProperty(o, k2, desc);
}) : (function(o, m, k, k2) {
    if (k2 === undefined) k2 = k;
    o[k2] = m[k];
}));
var __exportStar = (this && this.__exportStar) || function(m, exports) {
    for (var p in m) if (p !== "default" && !Object.prototype.hasOwnProperty.call(exports, p)) __createBinding(exports, m, p);
};
Object.defineProperty(exports, "__esModule", { value: true });
__exportStar(require("./open-router-chat-model.js"), exports);
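The `__exportStar` helper emitted above installs a live getter on `exports` for every named binding except `default`. A standalone sketch of that behavior (the `sourceModule`/`barrelExports` names here are illustrative, not from the package):

```typescript
// A source module's exports, and the barrel object that re-exports them.
const sourceModule: Record<string, unknown> = { foo: 1, default: 2 };
const barrelExports: Record<string, unknown> = {};

// Same shape as the emitted __exportStar loop: skip "default" and any name
// the barrel already owns, then expose each binding as a live getter.
for (const p in sourceModule) {
  if (p !== "default" && !Object.prototype.hasOwnProperty.call(barrelExports, p)) {
    Object.defineProperty(barrelExports, p, {
      enumerable: true,
      get: () => sourceModule[p],
    });
  }
}

console.log(barrelExports.foo); // 1
console.log("default" in barrelExports); // false
sourceModule.foo = 42;
console.log(barrelExports.foo); // 42 — the getter reads through to the source module
```

The getter (rather than a plain copy) is what preserves ES-module "live binding" semantics when TypeScript compiles `export *` down to CommonJS.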
package/lib/cjs/open-router-chat-model.d.ts
ADDED
@@ -0,0 +1,22 @@
import { OpenAIChatModel, type OpenAIChatModelOptions } from "@aigne/openai";
/**
 * Implementation of the ChatModel interface for OpenRouter service
 *
 * OpenRouter provides access to a variety of large language models through a unified API.
 * This implementation uses the OpenAI-compatible interface to connect to OpenRouter's service.
 *
 * Default model: 'openai/gpt-4o'
 *
 * @example
 * Here's how to create and use an OpenRouter chat model:
 * {@includeCode ../test/open-router-chat-model.test.ts#example-openrouter-chat-model}
 *
 * @example
 * Here's an example with streaming response:
 * {@includeCode ../test/open-router-chat-model.test.ts#example-openrouter-chat-model-streaming}
 */
export declare class OpenRouterChatModel extends OpenAIChatModel {
    constructor(options?: OpenAIChatModelOptions);
    protected apiKeyEnvName: string;
    protected supportsParallelToolCalls: boolean;
}
package/lib/cjs/open-router-chat-model.js
ADDED
@@ -0,0 +1,34 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.OpenRouterChatModel = void 0;
const openai_1 = require("@aigne/openai");
const OPEN_ROUTER_DEFAULT_CHAT_MODEL = "openai/gpt-4o";
const OPEN_ROUTER_BASE_URL = "https://openrouter.ai/api/v1";
/**
 * Implementation of the ChatModel interface for OpenRouter service
 *
 * OpenRouter provides access to a variety of large language models through a unified API.
 * This implementation uses the OpenAI-compatible interface to connect to OpenRouter's service.
 *
 * Default model: 'openai/gpt-4o'
 *
 * @example
 * Here's how to create and use an OpenRouter chat model:
 * {@includeCode ../test/open-router-chat-model.test.ts#example-openrouter-chat-model}
 *
 * @example
 * Here's an example with streaming response:
 * {@includeCode ../test/open-router-chat-model.test.ts#example-openrouter-chat-model-streaming}
 */
class OpenRouterChatModel extends openai_1.OpenAIChatModel {
    constructor(options) {
        super({
            ...options,
            model: options?.model || OPEN_ROUTER_DEFAULT_CHAT_MODEL,
            baseURL: options?.baseURL || OPEN_ROUTER_BASE_URL,
        });
    }
    apiKeyEnvName = "OPEN_ROUTER_API_KEY";
    supportsParallelToolCalls = false;
}
exports.OpenRouterChatModel = OpenRouterChatModel;
package/lib/cjs/package.json
ADDED
@@ -0,0 +1 @@
{"type": "commonjs"}
package/lib/dts/index.d.ts
ADDED
@@ -0,0 +1 @@
export * from "./open-router-chat-model.js";
package/lib/dts/open-router-chat-model.d.ts
ADDED
@@ -0,0 +1,22 @@
import { OpenAIChatModel, type OpenAIChatModelOptions } from "@aigne/openai";
/**
 * Implementation of the ChatModel interface for OpenRouter service
 *
 * OpenRouter provides access to a variety of large language models through a unified API.
 * This implementation uses the OpenAI-compatible interface to connect to OpenRouter's service.
 *
 * Default model: 'openai/gpt-4o'
 *
 * @example
 * Here's how to create and use an OpenRouter chat model:
 * {@includeCode ../test/open-router-chat-model.test.ts#example-openrouter-chat-model}
 *
 * @example
 * Here's an example with streaming response:
 * {@includeCode ../test/open-router-chat-model.test.ts#example-openrouter-chat-model-streaming}
 */
export declare class OpenRouterChatModel extends OpenAIChatModel {
    constructor(options?: OpenAIChatModelOptions);
    protected apiKeyEnvName: string;
    protected supportsParallelToolCalls: boolean;
}
package/lib/esm/index.d.ts
ADDED
@@ -0,0 +1 @@
export * from "./open-router-chat-model.js";
package/lib/esm/index.js
ADDED
@@ -0,0 +1 @@
export * from "./open-router-chat-model.js";
package/lib/esm/open-router-chat-model.d.ts
ADDED
@@ -0,0 +1,22 @@
import { OpenAIChatModel, type OpenAIChatModelOptions } from "@aigne/openai";
/**
 * Implementation of the ChatModel interface for OpenRouter service
 *
 * OpenRouter provides access to a variety of large language models through a unified API.
 * This implementation uses the OpenAI-compatible interface to connect to OpenRouter's service.
 *
 * Default model: 'openai/gpt-4o'
 *
 * @example
 * Here's how to create and use an OpenRouter chat model:
 * {@includeCode ../test/open-router-chat-model.test.ts#example-openrouter-chat-model}
 *
 * @example
 * Here's an example with streaming response:
 * {@includeCode ../test/open-router-chat-model.test.ts#example-openrouter-chat-model-streaming}
 */
export declare class OpenRouterChatModel extends OpenAIChatModel {
    constructor(options?: OpenAIChatModelOptions);
    protected apiKeyEnvName: string;
    protected supportsParallelToolCalls: boolean;
}
package/lib/esm/open-router-chat-model.js
ADDED
@@ -0,0 +1,30 @@
import { OpenAIChatModel } from "@aigne/openai";
const OPEN_ROUTER_DEFAULT_CHAT_MODEL = "openai/gpt-4o";
const OPEN_ROUTER_BASE_URL = "https://openrouter.ai/api/v1";
/**
 * Implementation of the ChatModel interface for OpenRouter service
 *
 * OpenRouter provides access to a variety of large language models through a unified API.
 * This implementation uses the OpenAI-compatible interface to connect to OpenRouter's service.
 *
 * Default model: 'openai/gpt-4o'
 *
 * @example
 * Here's how to create and use an OpenRouter chat model:
 * {@includeCode ../test/open-router-chat-model.test.ts#example-openrouter-chat-model}
 *
 * @example
 * Here's an example with streaming response:
 * {@includeCode ../test/open-router-chat-model.test.ts#example-openrouter-chat-model-streaming}
 */
export class OpenRouterChatModel extends OpenAIChatModel {
    constructor(options) {
        super({
            ...options,
            model: options?.model || OPEN_ROUTER_DEFAULT_CHAT_MODEL,
            baseURL: options?.baseURL || OPEN_ROUTER_BASE_URL,
        });
    }
    apiKeyEnvName = "OPEN_ROUTER_API_KEY";
    supportsParallelToolCalls = false;
}
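The subclass above only supplies defaults: any `model` or `baseURL` the caller passes wins over the OpenRouter constants. A minimal sketch of that merge, using the same constants as the source (the `withOpenRouterDefaults` function and simplified `Options` type are illustrative, not part of the package API):

```typescript
// Mirrors the option defaulting done in OpenRouterChatModel's constructor.
const OPEN_ROUTER_DEFAULT_CHAT_MODEL = "openai/gpt-4o";
const OPEN_ROUTER_BASE_URL = "https://openrouter.ai/api/v1";

interface Options {
  apiKey?: string;
  model?: string;
  baseURL?: string;
}

function withOpenRouterDefaults(options?: Options): Options {
  return {
    ...options,
    // Caller-supplied values win; undefined/empty values fall back to the constants.
    model: options?.model || OPEN_ROUTER_DEFAULT_CHAT_MODEL,
    baseURL: options?.baseURL || OPEN_ROUTER_BASE_URL,
  };
}

console.log(withOpenRouterDefaults().model); // "openai/gpt-4o"
console.log(withOpenRouterDefaults({ model: "anthropic/claude-3-opus" }).model); // "anthropic/claude-3-opus"
```

Spreading `options` first and then overwriting `model`/`baseURL` is what lets the two defaulted keys coexist with any other pass-through options.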
package/lib/esm/package.json
ADDED
@@ -0,0 +1 @@
{"type": "module"}
package/package.json
ADDED
@@ -0,0 +1,54 @@
{
  "name": "@aigne/open-router",
  "version": "0.1.0",
  "description": "AIGNE OpenRouter SDK for accessing multiple AI models through a unified API",
  "publishConfig": {
    "access": "public"
  },
  "author": "Arcblock <blocklet@arcblock.io> https://github.com/blocklet",
  "homepage": "https://github.com/AIGNE-io/aigne-framework",
  "license": "Elastic-2.0",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/AIGNE-io/aigne-framework"
  },
  "files": [
    "lib/cjs",
    "lib/dts",
    "lib/esm",
    "LICENSE",
    "README.md",
    "CHANGELOG.md"
  ],
  "type": "module",
  "main": "./lib/cjs/index.js",
  "module": "./lib/esm/index.js",
  "types": "./lib/dts/index.d.ts",
  "exports": {
    ".": {
      "import": "./lib/esm/index.js",
      "require": "./lib/cjs/index.js",
      "types": "./lib/dts/index.d.ts"
    }
  },
  "dependencies": {
    "@aigne/openai": "^0.1.0"
  },
  "devDependencies": {
    "@types/bun": "^1.2.12",
    "@types/node": "^22.15.15",
    "npm-run-all": "^4.1.5",
    "rimraf": "^6.0.1",
    "typescript": "^5.8.3",
    "@aigne/core": "^1.16.0",
    "@aigne/test-utils": "^0.3.0"
  },
  "scripts": {
    "lint": "tsc --noEmit",
    "build": "tsc --build scripts/tsconfig.build.json",
    "clean": "rimraf lib test/coverage",
    "test": "bun test",
    "test:coverage": "bun test --coverage --coverage-reporter=lcov --coverage-reporter=text",
    "postbuild": "echo '{\"type\": \"module\"}' > lib/esm/package.json && echo '{\"type\": \"commonjs\"}' > lib/cjs/package.json"
  }
}
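The `postbuild` script explains the one-line `package.json` files under `lib/esm` and `lib/cjs`: the root manifest declares `"type": "module"`, so without the nested `{"type": "commonjs"}` stamp Node would parse the `.js` files in `lib/cjs` as ES modules and the `require` entry point would break. A sketch of the same step with Node APIs (the `/tmp/aigne-demo` scratch path is hypothetical):

```typescript
// Reproduce the postbuild stamping step in a scratch directory.
import { mkdirSync, writeFileSync, readFileSync } from "node:fs";

const root = "/tmp/aigne-demo";
mkdirSync(`${root}/lib/esm`, { recursive: true });
mkdirSync(`${root}/lib/cjs`, { recursive: true });

// Nearest package.json wins: these override the root "type": "module".
writeFileSync(`${root}/lib/esm/package.json`, '{"type": "module"}\n');
writeFileSync(`${root}/lib/cjs/package.json`, '{"type": "commonjs"}\n');

console.log(readFileSync(`${root}/lib/cjs/package.json`, "utf8").trim());
// {"type": "commonjs"}
```

Node resolves a file's module format from the nearest enclosing `package.json`, which is why stamping each output directory is enough to ship both formats from one package.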