@node-llm/core 1.4.0 → 1.4.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -3,9 +3,13 @@
 </p>
 
 # NodeLLM
-**An opinionated architectural layer for using Large Language Models in Node.js.**
 
-Build chatbots, autonomous agents, and RAG pipelines without the SDK fatigue. NodeLLM provides a unified, production-oriented API for interacting with over **540+ models** across multiple providers (OpenAI, Gemini, Anthropic, DeepSeek, OpenRouter, Ollama, etc.) without coupling your application to any single SDK.
+
+**An opinionated architectural layer for integrating Large Language Models in Node.js.**
+
+**Provider-agnostic by design.**
+
+Most LLM SDKs **tightly couple** your application to vendors, APIs, and churn. NodeLLM provides a unified, production-oriented API for interacting with over 540+ models across multiple providers (OpenAI, Gemini, Anthropic, DeepSeek, OpenRouter, Ollama, etc.) without the SDK fatigue.
 
 <br/>
 
@@ -36,7 +40,31 @@ Build chatbots, autonomous agents, and RAG pipelines without the SDK fatigue. No
 
 ---
 
-## The Golden Path
+## 🛑 What NodeLLM is NOT
+
+NodeLLM represents a clear architectural boundary between your system and LLM vendors.
+
+NodeLLM is **NOT**:
+- A wrapper around a single provider SDK (like `openai` or `@google/generative-ai`)
+- A prompt-engineering framework
+- An agent playground or experimental toy
+
+---
+
+## 🏗️ Why NodeLLM?
+
+Most AI integrations today are provider-specific, SDK-driven, and leaky at abstraction boundaries. This creates long-term architectural risk. **LLMs should be treated as infrastructure**, and NodeLLM exists to help you integrate them without vendor lock-in.
+
+NodeLLM exists to solve **architectural problems**, not just provide API access. It is the core architectural layer for LLMs in the Node.js ecosystem.
+
+### Strategic Goals
+- **Provider Isolation**: Decouple your services from vendor SDKs.
+- **Production-Ready**: Native support for streaming, retries, and unified error handling.
+- **Predictable API**: Consistent behavior for Tools, Vision, and Structured Outputs across all models.
+
+---
+
+## ⚡ The Architectural Path
 
 ```ts
 import { NodeLLM } from "@node-llm/core";
@@ -55,26 +83,6 @@ for await (const chunk of chat.stream("Explain event-driven architecture")) {
 }
 ```
 
----
-
-## Why`NodeLLM`?
-
-Most AI integrations today are provider-specific, SDK-driven, and leaky at abstraction boundaries. This creates long-term architectural risk. Switching models should not mean a total rewrite of your business logic.
-
-NodeLLMexists to solve **architectural problems**, not just provide API access.
-
-### Strategic Goals
-
-- **Provider Isolation**: Your application logic never touches a provider-specific SDK.
-- **Unified Mental Model**: Chat, streaming, tools, and structured outputs feel identical across providers.
-- **Production-Ready**: Streaming, retries, and errors are first-class concerns.
-- **The "Standard Library" Voice**: It provides a beautiful, native-feeling API for modern Node.js.
-
-### Non-Goals
-
-- It is **not** a thin wrapper that mirrors every provider's unique API knobs.
-- It is **not** a UI framework or a simple chatbot builder.
-- It prioritizes **architectural clarity** over raw SDK convenience.
 
 ---
 
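For orientation, the README's quick-start example is only partially visible in this diff: the hunk headers above expose its first line (`import { NodeLLM } from "@node-llm/core";`) and its streaming loop (`for await (const chunk of chat.stream(...))`), while the middle is elided. A hypothetical reconstruction follows; the constructor call, model name, and chunk handling are assumptions for illustration, not confirmed API:

```ts
import { NodeLLM } from "@node-llm/core";

// Hypothetical middle section: only the import above and the for-await loop
// below actually appear in this diff. The construction API and model name
// are assumptions.
const llm = new NodeLLM({ provider: "openai" });
const chat = llm.chat({ model: "gpt-4o" });

for await (const chunk of chat.stream("Explain event-driven architecture")) {
  process.stdout.write(String(chunk)); // chunk shape is not shown in the diff
}
```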
@@ -0,0 +1,132 @@
+{
+  "chatgpt-4o": {
+    "openai": "chatgpt-4o-latest",
+    "openrouter": "openai/chatgpt-4o-latest"
+  },
+  "claude-3-5-haiku": {
+    "anthropic": "claude-3-5-haiku-20241022",
+    "openrouter": "anthropic/claude-3.5-haiku",
+    "bedrock": "anthropic.claude-3-5-haiku-20241022-v1:0"
+  },
+  "claude-3-5-sonnet": {
+    "anthropic": "claude-3-5-sonnet-20240620",
+    "openrouter": "anthropic/claude-3.5-sonnet",
+    "bedrock": "anthropic.claude-3-5-sonnet-20240620-v1:0"
+  },
+  "claude-sonnet-4-5": {
+    "anthropic": "claude-sonnet-4-5-20250929"
+  },
+  "claude-sonnet-4": {
+    "anthropic": "claude-sonnet-4-20250514"
+  },
+  "claude-3-7-sonnet": {
+    "anthropic": "claude-3-7-sonnet-20250219",
+    "openrouter": "anthropic/claude-3.7-sonnet",
+    "bedrock": "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+  },
+  "claude-3-haiku": {
+    "anthropic": "claude-3-haiku-20240307",
+    "openrouter": "anthropic/claude-3-haiku",
+    "bedrock": "anthropic.claude-3-haiku-20240307-v1:0:200k"
+  },
+  "claude-3-opus": {
+    "anthropic": "claude-3-opus-20240229",
+    "openrouter": "anthropic/claude-3-opus",
+    "bedrock": "anthropic.claude-3-opus-20240229-v1:0:200k"
+  },
+  "claude-3-sonnet": {
+    "bedrock": "anthropic.claude-3-sonnet-20240229-v1:0"
+  },
+  "deepseek-chat": {
+    "deepseek": "deepseek-chat",
+    "openrouter": "deepseek/deepseek-chat"
+  },
+  "gemini-flash": {
+    "gemini": "gemini-flash-latest",
+    "vertexai": "gemini-flash-latest",
+    "openrouter": "google/gemini-flash-latest"
+  },
+  "gemini-pro": {
+    "gemini": "gemini-1.5-pro-001",
+    "vertexai": "gemini-1.5-pro-001",
+    "openrouter": "google/gemini-1.5-pro-001"
+  },
+  "gemini-1.5-flash": {
+    "gemini": "gemini-1.5-flash-001",
+    "vertexai": "gemini-1.5-flash-001",
+    "openrouter": "google/gemini-1.5-flash-001"
+  },
+  "gemini-1.5-pro": {
+    "gemini": "gemini-1.5-pro-001",
+    "vertexai": "gemini-1.5-pro-001",
+    "openrouter": "google/gemini-1.5-pro-001"
+  },
+  "gemini-2.0-flash": {
+    "gemini": "gemini-2.0-flash",
+    "vertexai": "gemini-2.0-flash"
+  },
+  "gemini-2.0-flash-001": {
+    "gemini": "gemini-2.0-flash-001",
+    "openrouter": "google/gemini-2.0-flash-001",
+    "vertexai": "gemini-2.0-flash-001"
+  },
+  "gpt-3.5-turbo": {
+    "openai": "gpt-3.5-turbo",
+    "openrouter": "openai/gpt-3.5-turbo"
+  },
+  "gpt-4": {
+    "openai": "gpt-4",
+    "openrouter": "openai/gpt-4"
+  },
+  "gpt-4-turbo": {
+    "openai": "gpt-4-turbo",
+    "openrouter": "openai/gpt-4-turbo"
+  },
+  "gpt-4o": {
+    "openai": "gpt-4o",
+    "openrouter": "openai/gpt-4o"
+  },
+  "gpt-4o-mini": {
+    "openai": "gpt-4o-mini",
+    "openrouter": "openai/gpt-4o-mini"
+  },
+  "llama-3-1-405b": {
+    "openrouter": "meta-llama/llama-3.1-405b"
+  },
+  "llama-3-1-405b-instruct": {
+    "openrouter": "meta-llama/llama-3.1-405b-instruct"
+  },
+  "llama-3-1-70b": {
+    "openrouter": "meta-llama/llama-3.1-70b"
+  },
+  "llama-3-1-70b-instruct": {
+    "openrouter": "meta-llama/llama-3.1-70b-instruct"
+  },
+  "llama-3-1-8b": {
+    "openrouter": "meta-llama/llama-3.1-8b"
+  },
+  "llama-3-1-8b-instruct": {
+    "openrouter": "meta-llama/llama-3.1-8b-instruct"
+  },
+  "llama-3-2-1b-instruct": {
+    "openrouter": "meta-llama/llama-3.2-1b-instruct"
+  },
+  "llama-3-2-3b-instruct": {
+    "openrouter": "meta-llama/llama-3.2-3b-instruct"
+  },
+  "llama-3-3-70b-instruct": {
+    "openrouter": "meta-llama/llama-3.3-70b-instruct"
+  },
+  "mistral-large": {
+    "mistral": "mistral-large-latest",
+    "openrouter": "mistralai/mistral-large"
+  },
+  "mistral-medium": {
+    "mistral": "mistral-medium-latest",
+    "openrouter": "mistralai/mistral-medium"
+  },
+  "mistral-small": {
+    "mistral": "mistral-small-latest",
+    "openrouter": "mistralai/mistral-small"
+  }
+}
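The new JSON file above (its path within `dist/` is not shown in this diff) is a model-alias table: each top-level key is a provider-neutral alias, and each value maps a provider name to that provider's concrete model ID, which is how one alias like `claude-3-5-sonnet` can target Anthropic, OpenRouter, or Bedrock. A minimal resolver sketch over this table; `resolveModel` and the file location are illustrative assumptions, not the package's actual API:

```ts
import { readFileSync } from "node:fs";

type AliasTable = Record<string, Record<string, string>>;

// Assumed location: wherever the package ships this JSON inside dist/.
const aliases: AliasTable = JSON.parse(
  readFileSync(new URL("./aliases.json", import.meta.url), "utf8"),
);

// Resolve a friendly alias to a provider-specific model ID, passing through
// names that are already provider-native.
function resolveModel(alias: string, provider: string): string {
  return aliases[alias]?.[provider] ?? alias;
}

resolveModel("claude-3-5-sonnet", "anthropic"); // "claude-3-5-sonnet-20240620"
resolveModel("claude-3-5-sonnet", "bedrock");   // "anthropic.claude-3-5-sonnet-20240620-v1:0"
resolveModel("gpt-4o", "openrouter");           // "openai/gpt-4o"
```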
package/dist/utils/sanitize.d.ts ADDED
@@ -0,0 +1,21 @@
+/**
+ * Utility functions for sanitizing sensitive data from error messages and objects
+ */
+/**
+ * Sanitizes API keys and other sensitive data from strings
+ * @param text - The text to sanitize
+ * @returns Sanitized text with sensitive data redacted
+ */
+export declare function sanitizeText(text: string): string;
+/**
+ * Sanitizes an error object by redacting sensitive data
+ * @param error - The error object to sanitize
+ * @returns Sanitized error object
+ */
+export declare function sanitizeError(error: any): any;
+/**
+ * Logs an error with sanitized output
+ * @param error - The error to log
+ */
+export declare function logSanitizedError(error: any): void;
+//# sourceMappingURL=sanitize.d.ts.map
package/dist/utils/sanitize.d.ts.map ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"sanitize.d.ts","sourceRoot":"","sources":["../../src/utils/sanitize.ts"],"names":[],"mappings":"AAAA;;GAEG;AAEH;;;;GAIG;AACH,wBAAgB,YAAY,CAAC,IAAI,EAAE,MAAM,GAAG,MAAM,CAiCjD;AAED;;;;GAIG;AACH,wBAAgB,aAAa,CAAC,KAAK,EAAE,GAAG,GAAG,GAAG,CA+B7C;AAiBD;;;GAGG;AACH,wBAAgB,iBAAiB,CAAC,KAAK,EAAE,GAAG,GAAG,IAAI,CAGlD"}
package/dist/utils/sanitize.js ADDED
@@ -0,0 +1,76 @@
+/**
+ * Utility functions for sanitizing sensitive data from error messages and objects
+ */
+/**
+ * Sanitizes API keys and other sensitive data from strings
+ * @param text - The text to sanitize
+ * @returns Sanitized text with sensitive data redacted
+ */
+export function sanitizeText(text) {
+    if (!text)
+        return text;
+    // Sanitize API keys (various formats)
+    // OpenAI: sk-...
+    // Anthropic: sk-ant-...
+    // Generic patterns
+    let sanitized = text;
+    // Pattern: sk-ant-api03-... or sk-...
+    sanitized = sanitized.replace(/sk-ant-[a-zA-Z0-9_-]{8,}/g, (match) => `sk-ant-***${match.slice(-4)}`);
+    sanitized = sanitized.replace(/sk-[a-zA-Z0-9_-]{20,}/g, (match) => `sk-***${match.slice(-4)}`);
+    // Pattern: Bearer tokens
+    sanitized = sanitized.replace(/Bearer\s+[a-zA-Z0-9_-]{20,}/gi, (match) => `Bearer ***${match.slice(-4)}`);
+    // Pattern: API keys in error messages like "Incorrect API key provided: 8TAXX2TT***..."
+    sanitized = sanitized.replace(/API key[^:]*:\s*[a-zA-Z0-9]{4,}\**/gi, 'API key: [REDACTED]');
+    return sanitized;
+}
+/**
+ * Sanitizes an error object by redacting sensitive data
+ * @param error - The error object to sanitize
+ * @returns Sanitized error object
+ */
+export function sanitizeError(error) {
+    if (!error)
+        return error;
+    // Handle Error instances
+    if (error instanceof Error) {
+        const sanitized = new Error(sanitizeText(error.message));
+        sanitized.name = error.name;
+        sanitized.stack = error.stack ? sanitizeText(error.stack) : undefined;
+        // Copy other properties
+        Object.keys(error).forEach(key => {
+            if (key !== 'message' && key !== 'stack' && key !== 'name') {
+                sanitized[key] = sanitizeErrorValue(error[key]);
+            }
+        });
+        return sanitized;
+    }
+    // Handle plain objects
+    if (typeof error === 'object') {
+        const sanitized = Array.isArray(error) ? [] : {};
+        for (const key in error) {
+            sanitized[key] = sanitizeErrorValue(error[key]);
+        }
+        return sanitized;
+    }
+    return error;
+}
+/**
+ * Sanitizes a single value (recursive helper)
+ */
+function sanitizeErrorValue(value) {
+    if (typeof value === 'string') {
+        return sanitizeText(value);
+    }
+    if (typeof value === 'object' && value !== null) {
+        return sanitizeError(value);
+    }
+    return value;
+}
+/**
+ * Logs an error with sanitized output
+ * @param error - The error to log
+ */
+export function logSanitizedError(error) {
+    const sanitized = sanitizeError(error);
+    console.error('[NodeLLM Error]', sanitized);
+}
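The regexes above keep the last four characters of a masked credential so logs stay correlatable without leaking the key. Expected behavior, traced directly from the patterns in `sanitizeText` (the deep import path follows the `dist/` layout implied by the source map; whether the package re-exports these helpers from its root is not shown in this diff):

```ts
// Import path assumes dist/utils/sanitize.js is reachable; adjust if the
// package's "exports" map restricts deep imports.
import { sanitizeText, sanitizeError } from "@node-llm/core/dist/utils/sanitize.js";

// API keys keep only their last four characters:
sanitizeText("request failed for key sk-ant-api03-abcdEFGH1234");
// -> "request failed for key sk-ant-***1234"

// Bearer tokens get the same masking:
sanitizeText("Bearer abcdefghijklmnopqrstuv");
// -> "Bearer ***stuv"

// Provider-style error messages are redacted outright:
sanitizeText("Incorrect API key provided: 8TAXX2TT***");
// -> "Incorrect API key: [REDACTED]"

// sanitizeError() walks Error instances recursively, scrubbing the message,
// the stack, and any extra enumerable properties:
const err = Object.assign(new Error("Incorrect API key provided: 8TAXX2TT***"), {
  config: { headers: { Authorization: "Bearer abcdefghijklmnopqrstuv" } },
});
console.error(sanitizeError(err).message); // "Incorrect API key: [REDACTED]"
```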
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@node-llm/core",
-  "version": "1.4.0",
+  "version": "1.4.2",
   "type": "module",
   "main": "./dist/index.js",
   "types": "./dist/index.d.ts",
@@ -11,15 +11,55 @@
     }
   },
   "description": "A provider-agnostic LLM core for Node.js, inspired by ruby-llm.",
+  "keywords": [
+    "llm",
+    "large-language-models",
+    "ai",
+    "nodejs",
+    "llm-integration",
+    "architecture",
+    "integration-layer",
+    "provider-agnostic",
+    "vendor-agnostic",
+    "infrastructure",
+    "openai",
+    "anthropic",
+    "gemini",
+    "deepseek",
+    "openrouter",
+    "ollama",
+    "chat",
+    "streaming",
+    "rag",
+    "production"
+  ],
   "author": "NodeLLM contributors",
   "license": "MIT",
+  "repository": {
+    "type": "git",
+    "url": "https://github.com/eshaiju/node-llm",
+    "directory": "packages/core"
+  },
+  "homepage": "https://node-llm.eshaiju.com",
+  "bugs": {
+    "url": "https://github.com/eshaiju/node-llm/issues"
+  },
   "engines": {
     "node": ">=20.0.0"
   },
   "files": [
     "dist",
-    "README.md"
+    "README.md",
+    "LICENSE"
   ],
+  "scripts": {
+    "build": "tsc -p tsconfig.json",
+    "dev": "tsc -w",
+    "lint": "tsc --noEmit",
+    "test": "vitest run",
+    "test:watch": "vitest",
+    "prepublishOnly": "npm run build"
+  },
   "dependencies": {
     "zod": "^3.23.8",
     "zod-to-json-schema": "^3.25.1"
@@ -29,12 +69,5 @@
     "@pollyjs/adapter-node-http": "^6.0.6",
     "@pollyjs/core": "^6.0.6",
     "@pollyjs/persister-fs": "^6.0.6"
-  },
-  "scripts": {
-    "build": "tsc -p tsconfig.json",
-    "dev": "tsc -w",
-    "lint": "tsc --noEmit",
-    "test": "vitest run",
-    "test:watch": "vitest"
   }
-}
+}