llmjs2 1.0.6 → 1.0.8

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,6 +1,6 @@
  # llmjs2
 
- `llmjs2` is a zero-dependency Node.js library that provides a small, robust interface for calling Ollama and Ollama Cloud from Node 18+.
+ `llmjs2` is a lightweight LLM wrapper for building simple, personal AI applications.
 
  ## Features
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "llmjs2",
-   "version": "1.0.6",
+   "version": "1.0.8",
    "description": "Minimal zero-dependency Node.js client for Ollama and Ollama Cloud.",
    "type": "module",
    "main": "index.js",
@@ -9,6 +9,11 @@
        "import": "./index.js"
      }
    },
+   "files": [
+     "index.js",
+     "index.d.ts",
+     "README.md"
+   ],
    "types": "index.d.ts",
    "keywords": [
      "llm",
package/grapes.jpg DELETED
Binary file
package/spec.txt DELETED
@@ -1,73 +0,0 @@
- This specification defines **llmjs2**, a minimalist Node.js library designed to provide a robust, standardized interface for interacting with LLMs. It focuses on a "zero-config" developer experience, initially targeting **Ollama** with a fallback to **Ollama Cloud**.
-
- ---
-
- ## 1. Project Identity
- * **Name:** `llmjs2`
- * **Mission:** To be the most lightweight, concise, and robust bridge between Node.js applications and AI models.
- * **Core Principle:** Favor convention over configuration. Use OpenAI-compatible schemas to minimize the learning curve.
-
- ---
-
- ## 2. Technical Requirements
- * **Runtime:** Node.js 18.0.0+ (required for native `fetch` and web streams).
- * **Module System:** ESM (ECMAScript Modules).
- * **Dependencies:** **Zero.** The library must not depend on external packages, to ensure a small footprint and security.
-
- ---
-
- ## 3. Configuration & Environment
- The library automatically resolves connection details using the following hierarchy:
- 1. **Explicit Config:** Passed during initialization (if implemented).
- 2. **Environment Variables:**
-    * `OLLAMA_BASE_URL`: The target API host.
-    * `OLLAMA_API_KEY`: The bearer token for authenticated proxies or Cloud access.
- 3. **Default Fallback:** `https://api.ollama.com` (Ollama Cloud).
-
- ---
-
- ## 4. API Specification
-
- ### The `completion` Function
- The library exports a single overloaded function: `completion`.
-
- **Signatures:**
- * `completion(model: string, prompt: string): Promise<string>`
- * `completion(options: CompletionOptions): Promise<string>`
-
- **Input Object (`CompletionOptions`):**
- | Property | Type | Description |
- | :--- | :--- | :--- |
- | **model** | `string` | Format: `provider/model-name` (e.g., `ollama/llama3`). |
- | **messages** | `Array` | Standard OpenAI `role`/`content` objects. |
-
- **Output:**
- * Returns a **Promise** that resolves to a **string** containing the assistant's response.
-
- ---
-
- ## 5. Internal Architecture
-
- ### A. Provider Routing
- The library utilizes a "prefix router." It splits the `model` string at the first `/`.
- * If the prefix is `ollama`, the request is routed to the `OllamaProvider`.
- * The prefix is stripped before the request is sent to the provider's API.
-
- ### B. Request Normalization
- To maintain simplicity, **llmjs2** ignores hyper-parameters like `temperature` or `max_tokens` in the high-level API, allowing the model's internal defaults to govern the output. This keeps the library "future-proof" against changing API parameters.
-
- ### C. Error Handling
- The library must catch and wrap low-level network errors into high-level, actionable messages:
- * **Connection Error:** "llmjs2: Could not connect to [URL]. Check your OLLAMA_BASE_URL."
- * **Model Error:** "llmjs2: Model [name] not found on provider [provider]."
-
- ---
-
- ## 6. Implementation Checklist
-
- * [ ] **Env Loader:** Logic to check `process.env` and apply fallbacks.
- * [ ] **URL Parser:** Logic to ensure the base URL and `/api/chat` path are joined correctly without double slashes.
- * [ ] **Fetch Wrapper:** A standard `POST` implementation using `Headers` and `body`.
- * [ ] **Response Extractor:** Logic to navigate the JSON response (e.g., `json.message.content`) and return the raw string.
-
- ---
-
- **Would you like me to generate the actual `index.ts` file that implements this full specification?**
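The removed spec describes an env-resolution hierarchy (section 3), a prefix router and slash-safe URL joining (sections 5A and 6), and a `completion` call that POSTs to `/api/chat` (section 4). A minimal sketch of that flow, assuming hypothetical helper names — this is not the package's actual `index.js`:

```javascript
// Sketch of the completion flow described in the removed spec.txt.
// Helper names (resolveBaseUrl, parseModel, joinUrl) are illustrative only.

// Section 3: env var first, then the Ollama Cloud fallback.
function resolveBaseUrl(env = process.env) {
  return env.OLLAMA_BASE_URL || 'https://api.ollama.com';
}

// Section 5A: split "provider/model-name" at the first "/".
function parseModel(model) {
  const i = model.indexOf('/');
  if (i === -1) throw new Error(`llmjs2: model must be "provider/model-name", got "${model}"`);
  return { provider: model.slice(0, i), name: model.slice(i + 1) };
}

// Section 6: join base URL and path without double slashes.
function joinUrl(base, path) {
  return `${base.replace(/\/+$/, '')}/${path.replace(/^\/+/, '')}`;
}

// Section 4: POST to /api/chat and return the assistant's message string.
async function completion(model, prompt) {
  const { provider, name } = parseModel(model);
  if (provider !== 'ollama') throw new Error(`llmjs2: unknown provider "${provider}"`);
  const headers = { 'Content-Type': 'application/json' };
  if (process.env.OLLAMA_API_KEY) headers.Authorization = `Bearer ${process.env.OLLAMA_API_KEY}`;
  const res = await fetch(joinUrl(resolveBaseUrl(), '/api/chat'), {
    method: 'POST',
    headers,
    body: JSON.stringify({ model: name, messages: [{ role: 'user', content: prompt }], stream: false }),
  });
  if (!res.ok) throw new Error(`llmjs2: request failed with HTTP ${res.status}`);
  const json = await res.json();
  return json.message.content; // section 6: response extractor
}
```

The pure helpers carry the spec's logic; `completion` itself needs Node 18+ for global `fetch`.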
@@ -1,100 +0,0 @@
- import { generate } from './index.js';
-
- const MODEL = 'ollama/qwen3.5:397b-cloud';
-
- const tools = [
-   {
-     name: 'get_weather',
-     description: 'Get the current weather for a location',
-     parameters: {
-       location: {
-         type: 'string',
-         required: true,
-         description: 'The city and state, e.g. San Francisco, CA',
-       },
-       unit: {
-         type: 'string',
-         enum: ['celsius', 'fahrenheit'],
-         description: 'Temperature unit',
-       },
-     },
-     handler: ({ location, unit = 'fahrenheit' }) => {
-       const weatherData = {
-         'San Francisco, CA': { temp: 72, condition: 'Sunny' },
-         'New York, NY': { temp: 45, condition: 'Cloudy' },
-         'London, UK': { temp: 48, condition: 'Rainy' },
-       };
-
-       const data = weatherData[location] || { temp: 70, condition: 'Unknown' };
-       const temp = unit === 'celsius' ? Math.round((data.temp - 32) * (5 / 9)) : data.temp;
-       return `Weather in ${location}: ${temp}°${unit === 'celsius' ? 'C' : 'F'}, ${data.condition}`;
-     },
-   },
- ];
-
- const tests = [
-   {
-     name: 'Tool call with default unit',
-     input: {
-       model: MODEL,
-       userPrompt: 'Please use get_weather to fetch the weather for San Francisco, CA.',
-       tools,
-     },
-     expected: /San Francisco, CA/i,
-   },
-   {
-     name: 'Tool call with explicit celsius',
-     input: {
-       model: MODEL,
-       userPrompt: 'Please use get_weather to fetch the weather for London, UK in celsius.',
-       tools,
-     },
-     expected: /London, UK/i,
-   },
-   {
-     name: 'No tool needed direct answer',
-     input: {
-       model: MODEL,
-       userPrompt: 'What is 2 + 2?',
-       tools,
-     },
-     expected: /4|four/i,
-   },
-   {
-     name: 'Explicit messages payload with tool definitions',
-     input: {
-       model: MODEL,
-       messages: [
-         { role: 'system', content: 'You are a tool-aware assistant.' },
-         { role: 'user', content: 'Use get_weather for New York, NY.' },
-       ],
-       tools,
-     },
-     expected: /Weather in New York, NY/i,
-   },
- ];
-
- const runTest = async (test) => {
-   console.log(`\n=== ${test.name} ===`);
-   try {
-     const result = await generate(test.input);
-     console.log('Result:');
-     console.log(result);
-     if (test.expected && !test.expected.test(result)) {
-       console.warn('Warning: result did not match expected pattern.');
-     }
-   } catch (error) {
-     console.error('Error:', error?.message ?? error);
-   }
- };
-
- const runAll = async () => {
-   for (const test of tests) {
-     await runTest(test);
-   }
- };
-
- runAll().catch((error) => {
-   console.error('Unexpected failure:', error);
-   process.exit(1);
- });
@@ -1,57 +0,0 @@
- import { generate } from './index.js';
-
- const MODEL = 'ollama/qwen3.5:397b-cloud';
-
- const tools = [
-   {
-     name: 'get_weather',
-     description: 'Get the current weather for a location',
-     parameters: {
-       location: {
-         type: 'string',
-         required: true,
-         description: 'The city and state, e.g. San Francisco, CA',
-       },
-       unit: {
-         type: 'string',
-         enum: ['celsius', 'fahrenheit'],
-         description: 'Temperature unit',
-       },
-     },
-     handler: ({ location, unit = 'fahrenheit' }) => {
-       const weatherData = {
-         'San Francisco, CA': { temp: 72, condition: 'Sunny' },
-         'New York, NY': { temp: 45, condition: 'Cloudy' },
-         'London, UK': { temp: 48, condition: 'Rainy' },
-       };
-
-       const data = weatherData[location] || { temp: 70, condition: 'Unknown' };
-       const temp =
-         unit === 'celsius' ? Math.round((data.temp - 32) * (5 / 9)) : data.temp;
-       return `Weather in ${location}: ${temp}°${unit === 'celsius' ? 'C' : 'F'}, ${data.condition}`;
-     },
-   },
- ];
-
- async function run() {
-   const input = {
-     model: MODEL,
-     userPrompt: 'Call get_weather for San Francisco, CA and return the result directly.',
-     images: [],
-     references: ['Use the weather tool if the user asks for weather.'],
-     systemPrompt: 'You are a tool-aware assistant. Use available tools when appropriate.',
-     tools,
-   };
-
-   try {
-     console.log('Running generate() with tool support...');
-     const response = await generate(input);
-     console.log('\nFinal response:');
-     console.log(response);
-   } catch (error) {
-     console.error('generate() failed:', error?.message ?? error);
-     process.exit(1);
-   }
- }
-
- run();
package/test-generate.js DELETED
@@ -1,31 +0,0 @@
- import { generate } from './index.js';
-
- const MODEL = 'ollama/qwen3.5:397b-cloud';
- const PROMPT = 'what is in the image? make it concise';
-
- const IMAGES = [
-   'https://thumbs.dreamstime.com/b/sample-jpeg-heartwarming-close-up-endearing-small-mouse-happily-munching-piece-cheese-357411130.jpg',
-   // './grapes.jpg',
- ];
-
- const REFERENCES = [
-   // 'llmjs2 is a zero-dependency Node.js library for Ollama.',
-   // 'https://en.wikipedia.org/wiki/History_of_Hong_Kong',
- ];
-
- async function run() {
-   console.log('Testing generate() with prompt, images, and references...');
-
-   try {
-     const result = await generate(MODEL, PROMPT, IMAGES, REFERENCES);
-     console.log('generate() result:');
-     console.log(result);
-   } catch (error) {
-     console.error('generate() failed:', error?.message ?? error);
-   }
- }
-
- run().catch((error) => {
-   console.error('Unexpected failure:', error);
-   process.exit(1);
- });
package/test.js DELETED
@@ -1,33 +0,0 @@
- import { completion } from './index.js';
-
- const MODEL = process.env.TEST_MODEL || 'ollama/qwen3.5:397b-cloud';
- const PROMPT = 'Hello~';
-
- async function run() {
-   try {
-     console.log('Testing direct completion call...');
-     const result = await completion(MODEL, PROMPT);
-     console.log('Result:', result);
-   } catch (error) {
-     console.error('Direct call failed:', error.message);
-   }
-
-   try {
-     console.log('\nTesting options object call...');
-     const result = await completion({
-       model: MODEL,
-       messages: [
-         { role: 'system', content: 'You are a concise assistant.' },
-         { role: 'user', content: PROMPT },
-       ],
-     });
-     console.log('Result:', result);
-   } catch (error) {
-     console.error('Options call failed:', error.message);
-   }
- }
-
- run().catch((error) => {
-   console.error('Unexpected error:', error);
-   process.exit(1);
- });