@push.rocks/smartai 0.3.0 → 0.3.2

This diff compares publicly released versions of the package as they appear in their respective public registries. It is provided for informational purposes only and reflects only the changes between those published versions.
@@ -3,7 +3,7 @@
  */
  export const commitinfo = {
  name: '@push.rocks/smartai',
- version: '0.3.0',
+ version: '0.3.2',
  description: 'A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.'
  };
  //# sourceMappingURL=data:application/json;base64,eyJ2ZXJzaW9uIjozLCJmaWxlIjoiMDBfY29tbWl0aW5mb19kYXRhLmpzIiwic291cmNlUm9vdCI6IiIsInNvdXJjZXMiOlsiLi4vdHMvMDBfY29tbWl0aW5mb19kYXRhLnRzIl0sIm5hbWVzIjpbXSwibWFwcGluZ3MiOiJBQUFBOztHQUVHO0FBQ0gsTUFBTSxDQUFDLE1BQU0sVUFBVSxHQUFHO0lBQ3hCLElBQUksRUFBRSxxQkFBcUI7SUFDM0IsT0FBTyxFQUFFLE9BQU87SUFDaEIsV0FBVyxFQUFFLCtJQUErSTtDQUM3SixDQUFBIn0=
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@push.rocks/smartai",
- "version": "0.3.0",
+ "version": "0.3.2",
  "private": false,
  "description": "A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.",
  "main": "dist_ts/index.js",
package/readme.md CHANGED
@@ -1,82 +1,118 @@
  # @push.rocks/smartai

- Provides a standardized interface for integrating and conversing with multiple AI models, supporting operations like chat, streaming interactions, and audio responses.
-
- ## Install
-
- To add @push.rocks/smartai to your project, run the following command in your terminal:
+ [![npm version](https://badge.fury.io/js/%40push.rocks%2Fsmartai.svg)](https://www.npmjs.com/package/@push.rocks/smartai)
+
+ SmartAi is a comprehensive TypeScript library that provides a standardized interface for integrating and interacting with multiple AI models. It supports a range of operations from synchronous and streaming chat to audio generation, document processing, and vision tasks.
+
+ ## Table of Contents
+
+ - [Features](#features)
+ - [Installation](#installation)
+ - [Supported AI Providers](#supported-ai-providers)
+ - [Quick Start](#quick-start)
+ - [Usage Examples](#usage-examples)
+ - [Chat Interactions](#chat-interactions)
+ - [Streaming Chat](#streaming-chat)
+ - [Audio Generation](#audio-generation)
+ - [Document Processing](#document-processing)
+ - [Vision Processing](#vision-processing)
+ - [Error Handling](#error-handling)
+ - [Development](#development)
+ - [Running Tests](#running-tests)
+ - [Building the Project](#building-the-project)
+ - [Contributing](#contributing)
+ - [License](#license)
+ - [Legal Information](#legal-information)
+
+ ## Features
+
+ - **Unified API:** Seamlessly integrate multiple AI providers with a consistent interface.
+ - **Chat & Streaming:** Support for both synchronous and real-time streaming chat interactions.
+ - **Audio & Vision:** Generate audio responses and perform detailed image analysis.
+ - **Document Processing:** Analyze PDFs and other documents using vision models.
+ - **Extensible:** Easily extend the library to support additional AI providers.
+
+ ## Installation
+
+ To install SmartAi, run the following command:

  ```bash
  npm install @push.rocks/smartai
  ```

- This command installs the package and adds it to your project's dependencies.
+ This will add the package to your project's dependencies.

  ## Supported AI Providers

- @push.rocks/smartai supports multiple AI providers, each with its own unique capabilities:
+ SmartAi supports multiple AI providers. Configure each provider with its corresponding token or settings:

  ### OpenAI
- - Models: GPT-4, GPT-3.5-turbo, GPT-4-vision-preview
- - Features: Chat, Streaming, Audio Generation, Vision, Document Processing
- - Configuration:
+
+ - **Models:** GPT-4, GPT-3.5-turbo, GPT-4-vision-preview
+ - **Features:** Chat, Streaming, Audio Generation, Vision, Document Processing
+ - **Configuration Example:**
+
  ```typescript
  openaiToken: 'your-openai-token'
  ```

  ### X.AI
- - Models: Grok-2-latest
- - Features: Chat, Streaming, Document Processing
- - Configuration:
+
+ - **Models:** Grok-2-latest
+ - **Features:** Chat, Streaming, Document Processing
+ - **Configuration Example:**
+
  ```typescript
  xaiToken: 'your-xai-token'
  ```

  ### Anthropic
- - Models: Claude-3-opus-20240229
- - Features: Chat, Streaming, Vision, Document Processing
- - Configuration:
+
+ - **Models:** Claude-3-opus-20240229
+ - **Features:** Chat, Streaming, Vision, Document Processing
+ - **Configuration Example:**
+
  ```typescript
  anthropicToken: 'your-anthropic-token'
  ```

  ### Perplexity
- - Models: Mixtral-8x7b-instruct
- - Features: Chat, Streaming
- - Configuration:
+
+ - **Models:** Mixtral-8x7b-instruct
+ - **Features:** Chat, Streaming
+ - **Configuration Example:**
+
  ```typescript
  perplexityToken: 'your-perplexity-token'
  ```

  ### Groq
- - Models: Llama-3.3-70b-versatile
- - Features: Chat, Streaming
- - Configuration:
+
+ - **Models:** Llama-3.3-70b-versatile
+ - **Features:** Chat, Streaming
+ - **Configuration Example:**
+
  ```typescript
  groqToken: 'your-groq-token'
  ```

  ### Ollama
- - Models: Configurable (default: llama2, llava for vision/documents)
- - Features: Chat, Streaming, Vision, Document Processing
- - Configuration:
+
+ - **Models:** Configurable (default: llama2; use llava for vision/document tasks)
+ - **Features:** Chat, Streaming, Vision, Document Processing
+ - **Configuration Example:**
+
  ```typescript
- baseUrl: 'http://localhost:11434' // Optional
- model: 'llama2' // Optional
- visionModel: 'llava' // Optional, for vision and document tasks
+ ollama: {
+ baseUrl: 'http://localhost:11434', // Optional
+ model: 'llama2', // Optional
+ visionModel: 'llava' // Optional for vision and document tasks
+ }
  ```

- ## Usage
+ ## Quick Start

- The `@push.rocks/smartai` package is a comprehensive solution for integrating and interacting with various AI models, designed to support operations ranging from chat interactions to audio responses. This documentation will guide you through the process of utilizing `@push.rocks/smartai` in your applications.
-
- ### Getting Started
-
- Before you begin, ensure you have installed the package as described in the **Install** section above. Once installed, you can start integrating AI functionalities into your application.
-
- ### Initializing SmartAi
-
- The first step is to import and initialize the `SmartAi` class with appropriate options for the AI services you plan to use:
+ Initialize SmartAi with the provider configurations you plan to use:

  ```typescript
  import { SmartAi } from '@push.rocks/smartai';
@@ -96,35 +132,34 @@ const smartAi = new SmartAi({
  await smartAi.start();
  ```

- ### Chat Interactions
+ ## Usage Examples

- #### Synchronous Chat
+ ### Chat Interactions

- For simple question-answer interactions:
+ **Synchronous Chat:**

  ```typescript
  const response = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'What is the capital of France?',
- messageHistory: [] // Previous messages in the conversation
+ messageHistory: [] // Include previous conversation messages if applicable
  });

  console.log(response.message);
  ```

- #### Streaming Chat
+ ### Streaming Chat

- For real-time, streaming interactions:
+ **Real-Time Streaming:**

  ```typescript
  const textEncoder = new TextEncoder();
  const textDecoder = new TextDecoder();

- // Create input and output streams
+ // Create a transform stream for sending and receiving data
  const { writable, readable } = new TransformStream();
  const writer = writable.getWriter();

- // Send a message
  const message = {
  role: 'user',
  content: 'Tell me a story about a brave knight'
@@ -132,91 +167,92 @@

  writer.write(textEncoder.encode(JSON.stringify(message) + '\n'));

- // Process the response stream
+ // Start streaming the response
  const stream = await smartAi.openaiProvider.chatStream(readable);
  const reader = stream.getReader();

  while (true) {
  const { done, value } = await reader.read();
  if (done) break;
- console.log('AI:', value); // Process each chunk of the response
+ console.log('AI:', value);
  }
  ```

  ### Audio Generation

- For providers that support audio generation (currently OpenAI):
+ Generate audio (supported by providers like OpenAI):

  ```typescript
  const audioStream = await smartAi.openaiProvider.audio({
  message: 'Hello, this is a test of text-to-speech'
  });

- // Handle the audio stream (e.g., save to file or play)
+ // Process the audio stream, for example, play it or save to a file.
  ```

  ### Document Processing

- For providers that support document processing (OpenAI, Ollama, and Anthropic):
+ Analyze and extract key information from documents:

  ```typescript
- // Using OpenAI
- const result = await smartAi.openaiProvider.document({
+ // Example using OpenAI
+ const documentResult = await smartAi.openaiProvider.document({
  systemMessage: 'Classify the document type',
  userMessage: 'What type of document is this?',
  messageHistory: [],
- pdfDocuments: [pdfBuffer] // Uint8Array of PDF content
+ pdfDocuments: [pdfBuffer] // Uint8Array containing the PDF content
  });
+ ```
+
+ Other providers (e.g., Ollama and Anthropic) follow a similar pattern:

- // Using Ollama with llava
- const analysis = await smartAi.ollamaProvider.document({
+ ```typescript
+ // Using Ollama for document processing
+ const ollamaResult = await smartAi.ollamaProvider.document({
  systemMessage: 'You are a document analysis assistant',
- userMessage: 'Extract the key information from this document',
+ userMessage: 'Extract key information from this document',
  messageHistory: [],
- pdfDocuments: [pdfBuffer] // Uint8Array of PDF content
+ pdfDocuments: [pdfBuffer]
  });
+ ```

- // Using Anthropic with Claude 3
- const anthropicAnalysis = await smartAi.anthropicProvider.document({
- systemMessage: 'You are a document analysis assistant',
- userMessage: 'Please analyze this document and extract key information',
+ ```typescript
+ // Using Anthropic for document processing
+ const anthropicResult = await smartAi.anthropicProvider.document({
+ systemMessage: 'Analyze the document',
+ userMessage: 'Please extract the main points',
  messageHistory: [],
- pdfDocuments: [pdfBuffer] // Uint8Array of PDF content
+ pdfDocuments: [pdfBuffer]
  });
  ```

- Both providers will:
- 1. Convert PDF documents to images
- 2. Process each page using their vision models
- 3. Return a comprehensive analysis based on the system message and user query
-
  ### Vision Processing

- For providers that support vision tasks (OpenAI, Ollama, and Anthropic):
+ Analyze images with vision capabilities:

  ```typescript
- // Using OpenAI's GPT-4 Vision
- const description = await smartAi.openaiProvider.vision({
- image: imageBuffer, // Buffer containing the image data
+ // Using OpenAI GPT-4 Vision
+ const imageDescription = await smartAi.openaiProvider.vision({
+ image: imageBuffer, // Uint8Array containing image data
  prompt: 'What do you see in this image?'
  });

- // Using Ollama's Llava model
- const analysis = await smartAi.ollamaProvider.vision({
+ // Using Ollama for vision tasks
+ const ollamaImageAnalysis = await smartAi.ollamaProvider.vision({
  image: imageBuffer,
  prompt: 'Analyze this image in detail'
  });

- // Using Anthropic's Claude 3
- const anthropicAnalysis = await smartAi.anthropicProvider.vision({
+ // Using Anthropic for vision analysis
+ const anthropicImageAnalysis = await smartAi.anthropicProvider.vision({
  image: imageBuffer,
- prompt: 'Please analyze this image and describe what you see'
+ prompt: 'Describe the contents of this image'
  });
  ```

  ## Error Handling

- All providers implement proper error handling. It's recommended to wrap API calls in try-catch blocks:
+ Always wrap API calls in try-catch blocks to manage errors effectively:

  ```typescript
  try {
@@ -225,26 +261,71 @@ try {
  userMessage: 'Hello!',
  messageHistory: []
  });
- } catch (error) {
+ console.log(response.message);
+ } catch (error: any) {
  console.error('AI provider error:', error.message);
  }
  ```

- ## License and Legal Information
+ ## Development
+
+ ### Running Tests
+
+ To run the test suite, use the following command:
+
+ ```bash
+ npm run test
+ ```
+
+ Ensure your environment is configured with the appropriate tokens and settings for the providers you are testing.
+
+ ### Building the Project
+
+ Compile the TypeScript code and build the package using:
+
+ ```bash
+ npm run build
+ ```
+
+ This command prepares the library for distribution.

- This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the [license](license) file within this repository.
+ ## Contributing

- **Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.
+ Contributions are welcome! Please follow these steps:
+
+ 1. Fork the repository.
+ 2. Create a feature branch:
+ ```bash
+ git checkout -b feature/my-feature
+ ```
+ 3. Commit your changes with clear messages:
+ ```bash
+ git commit -m 'Add new feature'
+ ```
+ 4. Push your branch to your fork:
+ ```bash
+ git push origin feature/my-feature
+ ```
+ 5. Open a Pull Request with a detailed description of your changes.
+
+ ## License
+
+ This project is licensed under the [MIT License](LICENSE).
+
+ ## Legal Information

  ### Trademarks

- This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.
+ This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and its related products or services are trademarks of Task Venture Capital GmbH and are not covered by the MIT License. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines.

  ### Company Information

  Task Venture Capital GmbH
- Registered at District court Bremen HRB 35230 HB, Germany
+ Registered at District Court Bremen HRB 35230 HB, Germany
+ Contact: hello@task.vc
+
+ By using this repository, you agree to the terms outlined in this section.

- For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.
+ ---

- By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.
+ Happy coding with SmartAi!
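
The updated readme shown above configures each provider in isolation, and the Quick Start hunk only shows the beginning of the constructor call before the diff skips unchanged lines. The sketch below assembles those published fragments into one end-to-end flow; it is editorial illustration, not part of the diff. The option names (`openaiToken`, `anthropicToken`, `ollama.baseUrl`, `ollama.model`, `ollama.visionModel`) and the methods (`start`, `chat`) are taken from the readme content in the diff, while the particular combination of providers and the prompts are assumptions.

```typescript
import { SmartAi } from '@push.rocks/smartai';

// Configure only the providers you actually use; option names mirror
// the configuration snippets in the readme diff above.
const smartAi = new SmartAi({
  openaiToken: 'your-openai-token',
  anthropicToken: 'your-anthropic-token',
  ollama: {
    baseUrl: 'http://localhost:11434', // Optional
    model: 'llama2', // Optional
    visionModel: 'llava' // Optional, for vision and document tasks
  }
});

await smartAi.start();

// Synchronous chat, as in the readme's usage example.
const response = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'What is the capital of France?',
  messageHistory: [] // Include previous conversation messages if applicable
});
console.log(response.message);
```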
@@ -3,6 +3,6 @@
3
3
  */
4
4
  export const commitinfo = {
5
5
  name: '@push.rocks/smartai',
6
- version: '0.3.0',
6
+ version: '0.3.2',
7
7
  description: 'A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.'
8
8
  }
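
The Audio Generation example in the readme ends with a comment saying the returned stream can be played or saved. The following is a minimal sketch of the save-to-file path in Node.js, assuming the `audio()` call resolves to a web `ReadableStream` (the diff does not show its exact return type); the conversion step and the output filename are assumptions.

```typescript
import { createWriteStream } from 'node:fs';
import { Readable } from 'node:stream';
import { pipeline } from 'node:stream/promises';
import type { ReadableStream as WebReadableStream } from 'node:stream/web';
import { SmartAi } from '@push.rocks/smartai';

const smartAi = new SmartAi({ openaiToken: 'your-openai-token' });
await smartAi.start();

// As in the readme's Audio Generation example.
const audioStream = await smartAi.openaiProvider.audio({
  message: 'Hello, this is a test of text-to-speech'
});

// Assumption: audioStream is a web ReadableStream. If the library already
// returns a Node.js Readable, pass it to pipeline() directly instead.
const nodeAudio = Readable.fromWeb(audioStream as unknown as WebReadableStream);

// The file extension is a guess; the output format is not documented in the diff.
await pipeline(nodeAudio, createWriteStream('tts-output.mp3'));
```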