@node-llm/core 1.6.0 → 1.6.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (36)
  1. package/README.md +65 -233
  2. package/dist/aliases.d.ts +4 -0
  3. package/dist/aliases.d.ts.map +1 -1
  4. package/dist/aliases.js +243 -239
  5. package/dist/chat/Chat.d.ts +11 -0
  6. package/dist/chat/Chat.d.ts.map +1 -1
  7. package/dist/chat/Chat.js +19 -0
  8. package/dist/chat/ChatResponse.d.ts +25 -0
  9. package/dist/chat/ChatResponse.d.ts.map +1 -1
  10. package/dist/chat/ChatResponse.js +32 -0
  11. package/dist/chat/ChatStream.d.ts.map +1 -1
  12. package/dist/chat/Tool.d.ts +7 -7
  13. package/dist/chat/Tool.d.ts.map +1 -1
  14. package/dist/errors/index.d.ts +19 -0
  15. package/dist/errors/index.d.ts.map +1 -1
  16. package/dist/errors/index.js +28 -0
  17. package/dist/index.d.ts +1 -0
  18. package/dist/index.d.ts.map +1 -1
  19. package/dist/index.js +1 -0
  20. package/dist/models/ModelRegistry.d.ts.map +1 -1
  21. package/dist/models/ModelRegistry.js +2 -1
  22. package/dist/models/PricingRegistry.d.ts.map +1 -1
  23. package/dist/models/PricingRegistry.js +10 -2
  24. package/dist/models/models.d.ts.map +1 -1
  25. package/dist/models/models.js +7444 -5954
  26. package/dist/providers/BaseProvider.d.ts.map +1 -1
  27. package/dist/providers/Provider.d.ts.map +1 -1
  28. package/dist/providers/deepseek/Capabilities.d.ts.map +1 -1
  29. package/dist/providers/gemini/Chat.js +1 -1
  30. package/dist/providers/gemini/Models.d.ts.map +1 -1
  31. package/dist/providers/ollama/Models.d.ts.map +1 -1
  32. package/dist/providers/openai/Chat.d.ts.map +1 -1
  33. package/dist/providers/openai/Models.d.ts.map +1 -1
  34. package/dist/providers/openrouter/Models.d.ts.map +1 -1
  35. package/dist/streaming/Stream.d.ts.map +1 -1
  36. package/package.json +1 -1
package/README.md CHANGED
@@ -1,295 +1,127 @@
+ # @node-llm/core
+
  <p align="left">
  <a href="https://node-llm.eshaiju.com/">
- <img src="docs/assets/images/logo.jpg" alt="NodeLLM logo" width="300" />
+ <img src="https://node-llm.eshaiju.com/assets/images/logo.jpg" alt="NodeLLM logo" width="300" />
  </a>
  </p>

- # NodeLLM
-
- **An architectural layer for integrating Large Language Models in Node.js.**
-
- **Provider-agnostic by design.**
-
- Integrating multiple LLM providers often means juggling different SDKs, API styles, and update cycles. NodeLLM provides a single, unified, production-oriented API for interacting with over 540+ models across multiple providers (OpenAI, Gemini, Anthropic, DeepSeek, OpenRouter, Ollama, etc.) that stays consistent even when providers change.
-
- <p align="left">
- <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai.svg" height="28" />
- <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai-text.svg" height="22" />
- &nbsp;&nbsp;&nbsp;&nbsp;
- <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/anthropic-text.svg" height="18" />
- &nbsp;&nbsp;&nbsp;&nbsp;
- <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/gemini-color.svg" height="28" />
- <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/gemini-text.svg" height="20" />
- &nbsp;&nbsp;&nbsp;&nbsp;
- <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/deepseek-color.svg" height="28" />
- <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/deepseek-text.svg" height="20" />
- &nbsp;&nbsp;&nbsp;&nbsp;
- <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openrouter.svg" height="28" />
- <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openrouter-text.svg" height="22" />
- &nbsp;&nbsp;&nbsp;&nbsp;
- <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/ollama.svg" height="28" />
- <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/ollama-text.svg" height="18" />
- </p>
-
- <br/>
-
  [![npm version](https://img.shields.io/npm/v/@node-llm/core.svg)](https://www.npmjs.com/package/@node-llm/core)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
  [![TypeScript](https://img.shields.io/badge/TypeScript-Ready-blue.svg)](https://www.typescriptlang.org/)

- ---
-
- ## 🛑 What NodeLLM is NOT
-
- NodeLLM represents a clear architectural boundary between your system and LLM vendors.
+ **The production-grade LLM engine for Node.js. Provider-agnostic by design.**

- NodeLLM is **NOT**:
-
- - A wrapper around a single provider SDK (like `openai` or `@google/generative-ai`)
- - A prompt-engineering framework
- - An agent playground or experimental toy
-
- ---
-
- ## 🏗️ Why NodeLLM?
-
- Direct integrations often become tightly coupled to specific providers, making it difficult to adapt as models evolve. **LLMs should be treated as infrastructure**, and NodeLLM helps you build a stable foundation that persists regardless of which model is currently "state of the art."
-
- NodeLLM helps solve **architectural problems**, not just provide API access. It serves as the core integration layer for LLMs in the Node.js ecosystem.
-
- ### Strategic Goals
-
- - **Provider Isolation**: Decouple your services from vendor SDKs.
- - **Production-Ready**: Native support for streaming, automatic retries, and unified error handling.
- - **Predictable API**: Consistent behavior for Tools, Vision, and Structured Outputs across all models, **now including full parity for streaming**.
+ `@node-llm/core` provides a single, unified API for interacting with over 540+ models across all major providers. It is built for developers who need stable infrastructure, standard streaming, and automated tool execution without vendor lock-in.

  ---

- ## The Architectural Path
-
- ```ts
- import { NodeLLM } from "@node-llm/core";
-
- // 1. Zero-Config (NodeLLM automatically reads NODELLM_PROVIDER and API keys)
- const chat = NodeLLM.chat("gpt-4o");
-
- // 2. Chat (High-level request/response)
- const response = await chat.ask("Explain event-driven architecture");
- console.log(response.content);
+ ## 🚀 Key Features

- // 3. Streaming (Standard AsyncIterator)
- for await (const chunk of chat.stream("Explain event-driven architecture")) {
- process.stdout.write(chunk.content);
- }
- ```
-
- ### 🎯 Real-World Example: Brand Perception Checker
-
- Built with NodeLLM - Multi-provider AI analysis, tool calling, and structured outputs working together.
-
- **[View Example →](examples/brand-perception-checker/)**
+ - **Unified API**: One interface for OpenAI, Anthropic, Gemini, DeepSeek, OpenRouter, and Ollama.
+ - **Automated Tool Loops**: Recursive tool execution handled automatically—no manual loops required.
+ - **Streaming + Tools**: Seamlessly execute tools and continue the stream with the final response.
+ - **Structured Output**: Native Zod support for rigorous schema validation (`.withSchema()`).
+ - **Multimodal engine**: Built-in handling for Vision, Audio (Whisper), and Video (Gemini).
+ - **Security-First**: Integrated circuit breakers for timeouts, max tokens, and infinite tool loops.

  ---

- ## 🔧 Strategic Configuration
-
- NodeLLM provides a flexible, **lazy-initialized** configuration system designed for enterprise usage. It is safe for ESM and resolved only when your first request is made, eliminating the common `dotenv` race condition.
-
- ```ts
- // Recommended for multi-provider pipelines
- const llm = createLLM({
- openaiApiKey: process.env.OPENAI_API_KEY,
- anthropicApiKey: process.env.ANTHROPIC_API_KEY,
- ollamaApiBase: process.env.OLLAMA_API_BASE
- });
-
- // Support for Custom Endpoints (e.g., Azure or LocalAI)
- const llm = createLLM({
- openaiApiKey: process.env.AZURE_KEY,
- openaiApiBase: "https://your-resource.openai.azure.com/openai/deployments/..."
- });
- ```
-
- **[Full Configuration Guide →](docs/getting_started/configuration.md)**
+ ## 📋 Supported Providers

- ---
+ | Provider | Supported Features |
+ | :------------- | :--------------------------------------------------------------- |
+ | **OpenAI** | Chat, Streaming, Tools, Vision, Audio, Images, Reasoning (o1/o3) |
+ | **Anthropic** | Chat, Streaming, Tools, Vision, PDF Support (Claude 3.5) |
+ | **Gemini** | Chat, Streaming, Tools, Vision, Audio, Video, Embeddings |
+ | **DeepSeek** | Chat (V3), Reasoning (R1), Streaming + Tools |
+ | **OpenRouter** | 540+ models via a single API with automatic capability detection |
+ | **Ollama** | Local LLM inference with full Tool and Vision support |

  ---

- ## 🔮 Capabilities
+ ## Quick Start

- ### 💬 Unified Chat
+ ### Installation

- Stop rewriting code for every provider. `NodeLLM` normalizes inputs and outputs into a single, predictable mental model.
-
- ```ts
- import { NodeLLM } from "@node-llm/core";
-
- // Uses NODELLM_PROVIDER from environment (defaults to GPT-4o)
- const chat = NodeLLM.chat();
- await chat.ask("Hello world");
+ ```bash
+ npm install @node-llm/core
  ```

- ### 👁️ Smart Vision & Files
+ ### Basic Chat & Streaming

- Pass images, PDFs, or audio files directly to **both `ask()` and `stream()`**. We handle the heavy lifting: fetching remote URLs, base64 encoding, and MIME type mapping.
+ NodeLLM automatically reads your API keys from environment variables (e.g., `OPENAI_API_KEY`).

  ```ts
- await chat.ask("Analyze this interface", {
- files: ["./screenshot.png", "https://example.com/spec.pdf"]
- });
- ```
-
- ### 🛠️ Auto-Executing Tools
-
- Define tools once;`NodeLLM` manages the recursive execution loop for you, keeping your controller logic clean. **Works seamlessly with both regular chat and streaming!**
+ import { createLLM } from "@node-llm/core";

- ```ts
- import { Tool, z } from "@node-llm/core";
+ const llm = createLLM({ provider: "openai" });

- // Class-based DSL
- class WeatherTool extends Tool {
- name = "get_weather";
- description = "Get current weather";
- schema = z.object({ location: z.string() });
+ // 1. Standard Request
+ const res = await llm.chat("gpt-4o").ask("What is the speed of light?");
+ console.log(res.content);

- async execute({ location }) {
- return `Sunny in ${location}`;
- }
+ // 2. Real-time Streaming
+ for await (const chunk of llm.chat().stream("Tell me a long story")) {
+ process.stdout.write(chunk.content);
  }
-
- // Now the model can use it automatically
- await chat.withTool(WeatherTool).ask("What's the weather in Tokyo?");
-
- // Lifecycle Hooks for Error & Flow Control
- chat.onToolCallError((call, err) => "STOP");
- ```
-
- **[Full Tool Calling Guide →](https://node-llm.eshaiju.com/core-features/tool-calling)**
-
- ### 🔍 Comprehensive Debug Logging
-
- Enable detailed logging for all API requests and responses across every feature and provider:
-
- ```ts
- // Set environment variable
- process.env.NODELLM_DEBUG = "true";
-
- // Now see detailed logs for every API call:
- // [NodeLLM] [OpenAI] Request: POST https://api.openai.com/v1/chat/completions
- // { "model": "gpt-4o", "messages": [...] }
- // [NodeLLM] [OpenAI] Response: 200 OK
- // { "id": "chatcmpl-123", ... }
  ```

- **Covers:** Chat, Streaming, Images, Embeddings, Transcription, Moderation - across all providers!
-
- ### ✨ Structured Output
+ ### Structured Output (Zod)

- Get type-safe, validated JSON back using **Zod** schemas.
+ Stop parsing markdown. Get typed objects directly.

  ```ts
  import { z } from "@node-llm/core";
- const Product = z.object({ name: z.string(), price: z.number() });
-
- const res = await chat.withSchema(Product).ask("Generate a gadget");
- console.log(res.parsed.name); // Full type-safety
- ```
-
- ### 🎨 Image Generation

- ```ts
- await NodeLLM.paint("A cyberpunk city in rain");
- ```
+ const PlayerSchema = z.object({
+ name: z.string(),
+ powerLevel: z.number(),
+ abilities: z.array(z.string())
+ });

- ### 🎤 Audio Transcription
+ const chat = llm.chat("gpt-4o-mini").withSchema(PlayerSchema);
+ const response = await chat.ask("Generate a random RPG character");

- ```ts
- await NodeLLM.transcribe("meeting-recording.wav");
+ console.log(response.parsed.name); // Fully typed!
  ```

- ### ⚡ Scoped Parallelism
-
- Run multiple providers in parallel safely without global configuration side effects using isolated contexts.
-
- ```ts
- const [gpt, claude] = await Promise.all([
- // Each call branch off into its own isolated context
- NodeLLM.withProvider("openai").chat("gpt-4o").ask(prompt),
- NodeLLM.withProvider("anthropic").chat("claude-3-5-sonnet").ask(prompt)
- ]);
- ```
+ ---

- ### 🧠 Deep Reasoning
+ ## 🛡️ Security Circuit Breakers

- Direct access to the thought process of models like **DeepSeek R1** or **OpenAI o1/o3** using the `.reasoning` field.
+ NodeLLM protects your production environment with four built-in safety pillars:

  ```ts
- const res = await NodeLLM.chat("deepseek-reasoner").ask("Solve this logical puzzle");
- console.log(res.reasoning); // Chain-of-thought
+ const llm = createLLM({
+ requestTimeout: 15000, // 15s DoS Protection
+ maxTokens: 4096, // Cost Protection
+ maxRetries: 3, // Retry Storm Protection
+ maxToolCalls: 5 // Infinite Loop Protection
+ });
  ```

  ---

234
- ## 🚀 Why use this over official SDKs?
235
-
236
- | Feature | NodeLLM | Official SDKs | Architectural Impact |
237
- | :-------------------- | :---------------------------- | :-------------------------- | :------------------------ |
238
- | **Provider Logic** | Transparently Handled | Exposed to your code | **Low Coupling** |
239
- | **Streaming** | Standard `AsyncIterator` | Vendor-specific Events | **Predictable Data Flow** |
240
- | **Streaming + Tools** | Automated Execution | Manual implementation | **Seamless UX** |
241
- | **Tool Loops** | Automated Recursion | Manual implementation | **Reduced Boilerplate** |
242
- | **Files/Vision** | Intelligent Path/URL handling | Base64/Buffer management | **Cleaner Service Layer** |
243
- | **Configuration** | Centralized & Global | Per-instance initialization | **Easier Lifecycle Mgmt** |
244
-
245
- ---
246
-
247
- ## 📋 Supported Providers
248
-
249
- | Provider | Supported Features |
250
- | :----------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------- |
251
- | <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai.svg" height="18"> **OpenAI** | Chat, **Streaming + Tools**, Vision, Audio, Images, Transcription, **Reasoning** |
252
- | <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/gemini-color.svg" height="18"> **Gemini** | Chat, **Streaming + Tools**, Vision, Audio, Video, Embeddings |
253
- | <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/anthropic-text.svg" height="12"> **Anthropic** | Chat, **Streaming + Tools**, Vision, PDF, Structured Output |
254
- | <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/deepseek-color.svg" height="18"> **DeepSeek** | Chat (V3), **Reasoning (R1)**, **Streaming + Tools** |
255
- | <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openrouter.svg" height="18"> **OpenRouter** | **Aggregator**, Chat, Streaming, Tools, Vision, Embeddings, **Reasoning** |
256
- | <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/ollama.svg" height="18"> **Ollama** | **Local Inference**, Chat, Streaming, Tools, Vision, Embeddings |
257
-
258
- ---
259
-
260
- ## 📚 Documentation & Installation
261
-
262
- ```bash
263
- npm install @node-llm/core
264
- ```
265
-
266
- **[View Full Documentation ↗](https://node-llm.eshaiju.com/)**
106
+ ## 💾 Ecosystem
267
107
 
268
- ### 🍿 Try the Live Demo
108
+ Looking for persistence? use **[@node-llm/orm](https://www.npmjs.com/package/@node-llm/orm)**.
269
109
 
270
- Want to see it in action? Run this in your terminal:
271
-
272
- ```bash
273
- git clone https://github.com/node-llm/node-llm.git
274
- cd node-llm
275
- npm install
276
- npm run demo
277
- ```
110
+ - Automatically saves chat history to PostgreSQL/MySQL/SQLite via Prisma.
111
+ - Tracks tool execution results and API metrics (latency, cost, tokens).
278
112
 
279
113
  ---
280
114
 
281
- ## 🤝 Contributing
282
-
283
- We welcome contributions! Please see our **[Contributing Guide](CONTRIBUTING.md)** for more details on how to get started.
284
-
285
- ---
115
+ ## 📚 Full Documentation
286
116
 
287
- ## 🫶 Credits
117
+ Visit **[node-llm.eshaiju.com](https://node-llm.eshaiju.com/)** for:
288
118
 
289
- Heavily inspired by the elegant design of [RubyLLM](https://rubyllm.com/).
119
+ - [Deep Dive into Tool Calling](https://node-llm.eshaiju.com/core-features/tools)
120
+ - [Multi-modal Vision & Audio Guide](https://node-llm.eshaiju.com/core-features/multimodal)
121
+ - [Custom Provider Plugin System](https://node-llm.eshaiju.com/advanced/custom-providers)
290
122
 
291
123
  ---
292
124
 
293
- ## 📄 License
125
+ ## License
294
126
 
295
- MIT © [NodeLLM contributors]
127
+ MIT © [NodeLLM Contributors]
package/dist/aliases.d.ts CHANGED
@@ -375,6 +375,10 @@ declare const _default: {
  readonly openai: "gpt-5.2-chat-latest";
  readonly openrouter: "openai/gpt-5.2-chat-latest";
  };
+ readonly "gpt-5.2-codex": {
+ readonly openai: "gpt-5.2-codex";
+ readonly openrouter: "openai/gpt-5.2-codex";
+ };
  readonly "gpt-5.2-pro": {
  readonly openai: "gpt-5.2-pro";
  readonly openrouter: "openai/gpt-5.2-pro";
package/dist/aliases.d.ts.map CHANGED
@@ -1 +1 @@
- {"version":3,"file":"aliases.d.ts","sourceRoot":"","sources":["../src/aliases.ts"],"names":[],"mappings":";;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;AAAA,wBA+mBW"}
+ {"version":3,"file":"aliases.d.ts","sourceRoot":"","sources":["../src/aliases.ts"],"names":[],"mappings":";;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;AAAA,wBAmnBW"}