@inference-gateway/sdk 0.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md ADDED
@@ -0,0 +1,56 @@
## [0.1.4](https://github.com/inference-gateway/typescript-sdk/compare/v0.1.3...v0.1.4) (2025-01-23)

### 🔧 Miscellaneous

* **release:** bump version to 0.1.2 in package.json ([ae64176](https://github.com/inference-gateway/typescript-sdk/commit/ae641767f3ba44edef0e9073d42421c2df05f36b))

## [0.1.3](https://github.com/inference-gateway/typescript-sdk/compare/v0.1.2...v0.1.3) (2025-01-23)

### 🐛 Bug Fixes

* Update release workflow environment variable and package.json repository URL format ([8ea1290](https://github.com/inference-gateway/typescript-sdk/commit/8ea1290ed6e2c122cbce7c311478e9814d09e36d))

## [0.1.2](https://github.com/inference-gateway/typescript-sdk/compare/v0.1.1...v0.1.2) (2025-01-23)

### 👷 CI

* Update permissions in release workflow for issues and pull requests ([ae1a835](https://github.com/inference-gateway/typescript-sdk/commit/ae1a83586b211a7b468fa2fc1b07f30eb02effb2))

## [0.1.1](https://github.com/inference-gateway/typescript-sdk/compare/v0.1.0...v0.1.1) (2025-01-23)

### ♻️ Improvements

* Refactor imports and update TypeScript configuration for improved module resolution and testing ([f74b6b1](https://github.com/inference-gateway/typescript-sdk/commit/f74b6b1dbc7371da01991ba832120c92b36d9c91))

### 🐛 Bug Fixes

* Update tag format in .releaserc.yaml and add npm plugin ([6e55661](https://github.com/inference-gateway/typescript-sdk/commit/6e5566147c05e5ace4306197cc5250cca0e5a948))

### 👷 CI

* Update Node.js version in CI workflow from 20.x to 22.x ([1ecf62a](https://github.com/inference-gateway/typescript-sdk/commit/1ecf62ab2af9787cbf9ca02fb84377d5c1a08255))
* Update Node.js version to 22.x and add global dependencies for semantic release ([0888fae](https://github.com/inference-gateway/typescript-sdk/commit/0888fae0c4a98a879808dc367e83e15d236dabab))

### 🔧 Miscellaneous

* Add @semantic-release/npm to Dockerfile and release workflow ([8b94e8c](https://github.com/inference-gateway/typescript-sdk/commit/8b94e8c59f705d3c7e79e29275854dbd1ad21010))
* Add job names to CI and release workflows ([e053535](https://github.com/inference-gateway/typescript-sdk/commit/e05353554c1eb62b7f0fd6b20ac8f8c75ec0685b))
* Bump OS version to the latest with updated version of NodeJS and NPM ([8739216](https://github.com/inference-gateway/typescript-sdk/commit/8739216acfbf26eba724fabf68103ed59cf73439))
* **release:** 0.1.1 [skip ci] ([1c340b4](https://github.com/inference-gateway/typescript-sdk/commit/1c340b47fedd8f78220dc49b08acb72ba7f760fe))
* **release:** bump version to 0.1.1 in package.json ([bd9fbb2](https://github.com/inference-gateway/typescript-sdk/commit/bd9fbb2346adcb89e6377a1212c3f9c257d25c0a))
* Standardize quotes in .releaserc.yaml configuration ([b4c4f5b](https://github.com/inference-gateway/typescript-sdk/commit/b4c4f5bb31721dac3355b70bf3e04398c0f8491b))

## [0.1.1](https://github.com/inference-gateway/typescript-sdk/compare/v0.1.0...0.1.1) (2025-01-23)

### ♻️ Improvements

* Refactor imports and update TypeScript configuration for improved module resolution and testing ([f74b6b1](https://github.com/inference-gateway/typescript-sdk/commit/f74b6b1dbc7371da01991ba832120c92b36d9c91))

### 👷 CI

* Update Node.js version in CI workflow from 20.x to 22.x ([1ecf62a](https://github.com/inference-gateway/typescript-sdk/commit/1ecf62ab2af9787cbf9ca02fb84377d5c1a08255))
* Update Node.js version to 22.x and add global dependencies for semantic release ([0888fae](https://github.com/inference-gateway/typescript-sdk/commit/0888fae0c4a98a879808dc367e83e15d236dabab))

### 🔧 Miscellaneous

* Bump OS version to the latest with updated version of NodeJS and NPM ([8739216](https://github.com/inference-gateway/typescript-sdk/commit/8739216acfbf26eba724fabf68103ed59cf73439))
package/LICENSE ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2025 Inference Gateway

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,127 @@
# Inference Gateway TypeScript SDK

An SDK written in TypeScript for the [Inference Gateway](https://github.com/edenreich/inference-gateway).

- [Inference Gateway TypeScript SDK](#inference-gateway-typescript-sdk)
  - [Installation](#installation)
  - [Usage](#usage)
    - [Creating a Client](#creating-a-client)
    - [Listing Models](#listing-models)
    - [Generating Content](#generating-content)
    - [Health Check](#health-check)
  - [License](#license)

## Installation

Run `npm i @inference-gateway/sdk`.

## Usage

### Creating a Client

```typescript
import { InferenceGatewayClient, Provider } from '@inference-gateway/sdk';

async function main() {
  const client = new InferenceGatewayClient('http://localhost:8080');

  try {
    // List available models
    const models = await client.listModels();
    models.forEach((providerModels) => {
      console.log(`Provider: ${providerModels.provider}`);
      providerModels.models.forEach((model) => {
        console.log(`Model: ${model.id}`);
      });
    });

    const response = await client.generateContent({
      provider: Provider.Ollama,
      model: 'llama2',
      messages: [
        {
          role: 'system',
          content: 'You are a helpful llama',
        },
        {
          role: 'user',
          content: 'Tell me a joke',
        },
      ],
    });

    console.log('Response:', response);
  } catch (error) {
    console.error('Error:', error);
  }
}

main();
```
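The constructor also accepts an optional bearer token as a second argument. As a sketch of what the client then does internally (mirroring the bundled `dist/client.js`; the two helper names below are illustrative, not part of the public API): the base URL has a single trailing slash stripped, and an `Authorization` header is attached only when a token was provided.

```typescript
// Sketch of the client's internal request plumbing (mirrors the
// bundled dist/client.js; helper names here are illustrative only).
function normalizeBaseUrl(baseUrl: string): string {
  // A single trailing slash is stripped so paths can be appended safely.
  return baseUrl.replace(/\/$/, '');
}

function buildHeaders(authToken?: string): Headers {
  const headers = new Headers({ 'Content-Type': 'application/json' });
  if (authToken) {
    // Only set when a token was passed to the constructor.
    headers.set('Authorization', `Bearer ${authToken}`);
  }
  return headers;
}

console.log(normalizeBaseUrl('http://localhost:8080/')); // "http://localhost:8080"
console.log(buildHeaders('secret').get('Authorization')); // "Bearer secret"
```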
### Listing Models

To list available models, use the `listModels` method:

```typescript
try {
  const models = await client.listModels();
  models.forEach((providerModels) => {
    console.log(`Provider: ${providerModels.provider}`);
    providerModels.models.forEach((model) => {
      console.log(`Model: ${model.id}`);
    });
  });
} catch (error) {
  console.error('Error:', error);
}
```

### Generating Content

To generate content using a model, use the `generateContent` method:

```typescript
try {
  const response = await client.generateContent({
    provider: Provider.Ollama,
    model: 'llama2',
    messages: [
      {
        role: 'system',
        content: 'You are a helpful llama',
      },
      {
        role: 'user',
        content: 'Tell me a joke',
      },
    ],
  });

  console.log('Provider:', response.provider);
  console.log('Response:', response.response);
} catch (error) {
  console.error('Error:', error);
}
```
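When the gateway responds with a non-2xx status, the client rejects with an `Error` built from the response body. A small sketch of that message selection, based on the bundled implementation (the `errorMessageFrom` helper name is illustrative):

```typescript
// How the client derives an Error message from a failed response
// (mirrors dist/client.js; `errorMessageFrom` is an illustrative name).
function errorMessageFrom(body: { error?: string }, status: number): string {
  // Prefer the gateway's `error` field; fall back to the HTTP status.
  return body.error || `HTTP error! status: ${status}`;
}

console.log(errorMessageFrom({ error: 'Bad Request' }, 400)); // "Bad Request"
console.log(errorMessageFrom({}, 500)); // "HTTP error! status: 500"
```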
### Health Check

To check if the Inference Gateway is running, use the `healthCheck` method. It resolves to `true` or `false` rather than throwing, so no error handling is needed:

```typescript
const isHealthy = await client.healthCheck();
console.log('API is healthy:', isHealthy);
```
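Because `healthCheck` resolves to a boolean, it composes well with a simple readiness poll. A hypothetical helper (not part of the SDK) that retries until the gateway reports healthy:

```typescript
// Hypothetical readiness poll (not part of the SDK): retry a boolean
// check a fixed number of times with a delay between attempts.
async function waitUntilHealthy(
  check: () => Promise<boolean>,
  attempts = 5,
  delayMs = 1000
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    if (await check()) return true;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return false;
}

// Usage with the SDK:
//   const ready = await waitUntilHealthy(() => client.healthCheck());
```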
## License

This SDK is distributed under the MIT License; see [LICENSE](LICENSE) for more information.
@@ -0,0 +1,10 @@
import { GenerateContentRequest, GenerateContentResponse, ProviderModels } from './types';
export declare class InferenceGatewayClient {
    private baseUrl;
    private authToken?;
    constructor(baseUrl: string, authToken?: string);
    private request;
    listModels(): Promise<ProviderModels[]>;
    generateContent(params: GenerateContentRequest): Promise<GenerateContentResponse>;
    healthCheck(): Promise<boolean>;
}
@@ -0,0 +1,51 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.InferenceGatewayClient = void 0;
class InferenceGatewayClient {
    baseUrl;
    authToken;
    constructor(baseUrl, authToken) {
        this.baseUrl = baseUrl.replace(/\/$/, '');
        this.authToken = authToken;
    }
    async request(path, options = {}) {
        const headers = new Headers({
            'Content-Type': 'application/json',
            ...options.headers,
        });
        if (this.authToken) {
            headers.set('Authorization', `Bearer ${this.authToken}`);
        }
        const response = await fetch(`${this.baseUrl}${path}`, {
            ...options,
            headers,
        });
        if (!response.ok) {
            const error = await response.json();
            throw new Error(error.error || `HTTP error! status: ${response.status}`);
        }
        return response.json();
    }
    async listModels() {
        return this.request('/llms');
    }
    async generateContent(params) {
        return this.request(`/llms/${params.provider}/generate`, {
            method: 'POST',
            body: JSON.stringify({
                model: params.model,
                messages: params.messages,
            }),
        });
    }
    async healthCheck() {
        try {
            await this.request('/health');
            return true;
        }
        catch {
            return false;
        }
    }
}
exports.InferenceGatewayClient = InferenceGatewayClient;
@@ -0,0 +1,2 @@
export * from './client';
export * from './types';
@@ -0,0 +1,18 @@
"use strict";
var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
    if (k2 === undefined) k2 = k;
    var desc = Object.getOwnPropertyDescriptor(m, k);
    if (!desc || ("get" in desc ? !m.__esModule : desc.writable || desc.configurable)) {
      desc = { enumerable: true, get: function() { return m[k]; } };
    }
    Object.defineProperty(o, k2, desc);
}) : (function(o, m, k, k2) {
    if (k2 === undefined) k2 = k;
    o[k2] = m[k];
}));
var __exportStar = (this && this.__exportStar) || function(m, exports) {
    for (var p in m) if (p !== "default" && !Object.prototype.hasOwnProperty.call(exports, p)) __createBinding(exports, m, p);
};
Object.defineProperty(exports, "__esModule", { value: true });
__exportStar(require("./client"), exports);
__exportStar(require("./types"), exports);
@@ -0,0 +1,36 @@
export declare enum Provider {
    Ollama = "ollama",
    Groq = "groq",
    OpenAI = "openai",
    Google = "google",
    Cloudflare = "cloudflare",
    Cohere = "cohere",
    Anthropic = "anthropic"
}
export interface Message {
    role: 'system' | 'user' | 'assistant';
    content: string;
}
export interface Model {
    id: string;
    object: string;
    owned_by: string;
    created: number;
}
export interface ProviderModels {
    provider: Provider;
    models: Model[];
}
export interface GenerateContentRequest {
    provider: Provider;
    model: string;
    messages: Message[];
}
export interface GenerateContentResponse {
    provider: string;
    response: {
        role: 'assistant';
        model: string;
        content: string;
    };
}
@@ -0,0 +1,13 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.Provider = void 0;
var Provider;
(function (Provider) {
    Provider["Ollama"] = "ollama";
    Provider["Groq"] = "groq";
    Provider["OpenAI"] = "openai";
    Provider["Google"] = "google";
    Provider["Cloudflare"] = "cloudflare";
    Provider["Cohere"] = "cohere";
    Provider["Anthropic"] = "anthropic";
})(Provider || (exports.Provider = Provider = {}));
@@ -0,0 +1 @@
export {};
@@ -0,0 +1,98 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
const client_1 = require("@/client");
const types_1 = require("@/types");
describe('InferenceGatewayClient', () => {
    let client;
    const mockBaseUrl = 'http://localhost:8080';
    beforeEach(() => {
        client = new client_1.InferenceGatewayClient(mockBaseUrl);
        global.fetch = jest.fn();
    });
    describe('listModels', () => {
        it('should fetch available models', async () => {
            const mockResponse = [
                {
                    provider: types_1.Provider.Ollama,
                    models: [
                        {
                            id: 'llama2',
                            object: 'model',
                            owned_by: 'ollama',
                            created: 1234567890,
                        },
                    ],
                },
            ];
            global.fetch.mockResolvedValueOnce({
                ok: true,
                json: () => Promise.resolve(mockResponse),
            });
            const result = await client.listModels();
            expect(result).toEqual(mockResponse);
            expect(global.fetch).toHaveBeenCalledWith(`${mockBaseUrl}/llms`, expect.objectContaining({
                headers: expect.any(Headers),
            }));
        });
    });
    describe('generateContent', () => {
        it('should generate content with the specified provider', async () => {
            const mockRequest = {
                provider: types_1.Provider.Ollama,
                model: 'llama2',
                messages: [
                    { role: 'system', content: 'You are a helpful assistant' },
                    { role: 'user', content: 'Hello' },
                ],
            };
            const mockResponse = {
                provider: types_1.Provider.Ollama,
                response: {
                    role: 'assistant',
                    model: 'llama2',
                    content: 'Hi there!',
                },
            };
            global.fetch.mockResolvedValueOnce({
                ok: true,
                json: () => Promise.resolve(mockResponse),
            });
            const result = await client.generateContent(mockRequest);
            expect(result).toEqual(mockResponse);
            expect(global.fetch).toHaveBeenCalledWith(`${mockBaseUrl}/llms/${mockRequest.provider}/generate`, expect.objectContaining({
                method: 'POST',
                body: JSON.stringify({
                    model: mockRequest.model,
                    messages: mockRequest.messages,
                }),
            }));
        });
    });
    describe('healthCheck', () => {
        it('should return true when API is healthy', async () => {
            global.fetch.mockResolvedValueOnce({
                ok: true,
                json: () => Promise.resolve({}),
            });
            const result = await client.healthCheck();
            expect(result).toBe(true);
            expect(global.fetch).toHaveBeenCalledWith(`${mockBaseUrl}/health`, expect.any(Object));
        });
        it('should return false when API is unhealthy', async () => {
            global.fetch.mockRejectedValueOnce(new Error('API error'));
            const result = await client.healthCheck();
            expect(result).toBe(false);
        });
    });
    describe('error handling', () => {
        it('should throw error when API request fails', async () => {
            const errorMessage = 'Bad Request';
            global.fetch.mockResolvedValueOnce({
                ok: false,
                status: 400,
                json: () => Promise.resolve({ error: errorMessage }),
            });
            await expect(client.listModels()).rejects.toThrow(errorMessage);
        });
    });
});
package/package.json ADDED
@@ -0,0 +1,74 @@
{
  "name": "@inference-gateway/sdk",
  "version": "0.1.4",
  "description": "An SDK written in Typescript for the [Inference Gateway](https://github.com/inference-gateway/inference-gateway).",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "type": "commonjs",
  "private": false,
  "keywords": [
    "inference",
    "inference-gateway",
    "gateway",
    "sdk",
    "llm",
    "ai",
    "openai",
    "anthropic",
    "ollama",
    "cloudflare",
    "cohere",
    "typescript"
  ],
  "author": "Eden Reich <eden.reich@gmail.com>",
  "license": "MIT",
  "files": [
    "dist",
    "README.md",
    "LICENSE",
    "CHANGELOG.md"
  ],
  "repository": {
    "type": "git",
    "url": "git+https://github.com/inference-gateway/typescript-sdk.git"
  },
  "bugs": {
    "url": "https://github.com/inference-gateway/typescript-sdk/issues"
  },
  "homepage": "https://github.com/inference-gateway/typescript-sdk#README.md",
  "peerDependencies": {
    "node-fetch": "^2.7.0"
  },
  "peerDependenciesMeta": {
    "node-fetch": {
      "optional": true
    }
  },
  "scripts": {
    "build": "tsc",
    "test": "jest",
    "lint": "eslint src/**/*.ts",
    "prepare": "npm run build"
  },
  "devDependencies": {
    "@eslint/js": "^9.18.0",
    "@types/jest": "^29.5.14",
    "@types/node": "^22.10.9",
    "@typescript-eslint/eslint-plugin": "^8.21.0",
    "@typescript-eslint/parser": "^8.21.0",
    "eslint": "^9.18.0",
    "eslint-plugin-prettier": "^5.2.3",
    "jest": "^29.7.0",
    "ts-jest": "^29.2.5",
    "typescript": "^5.7.3"
  },
  "engines": {
    "node": ">=22.12.0",
    "npm": ">=10.9.0"
  },
  "publishConfig": {
    "registry": "https://registry.npmjs.org/",
    "tag": "latest",
    "access": "public"
  }
}